Accepted Talks (Scientific Conferences)
‘Killer Flying Robots Are Here. What Do We Do Now?’, ‘A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says’ and ‘Possible First Use of AI-Armed Drones Triggers Alarm Bells’ – these are just some of the headlines prompted by a report issued by the UN Panel of Experts on Libya. What caught international attention was the panel’s description of the following scene in Libya’s civil war: ‘[Forces] were […] hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 […]. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.’
However, the disruptive potential of AI is not limited to out-of-control killer drones or the military context in general – nor is its potential only negative. AI and its global trade promote international development and technological innovation, thereby improving lives. Accordingly, efforts to build a legal and policy framework that harnesses AI’s benefits and thwarts its dangers are in full swing. States, the European Union, international organizations, NGOs, and scholars alike are devising ways of achieving that end. The approaches to the issue are manifold, but most focus either on rules for the development of AI – for instance, how to ensure AI is built ethically – or on its use, e.g., banning its use in lethal autonomous weapon systems (LAWS). While all these efforts are important, a further layer of protection has not gained much traction: regulating AI’s global trade so that responsible actors can use it to benefit humankind while preventing it from ending up in the hands of irresponsible actors.
The legal instrument to achieve this end is international export control law. It aims to mitigate the risks to international peace and security associated with the proliferation of sensitive items to irresponsible actors, while avoiding unreasonable restrictions on global trade, economic development, and technological innovation. However, international export control law is not yet suited to fulfill this promise with regard to AI. The dual-use nature of AI poses significant risks to international peace and security; nevertheless, international export control law applies to the transfer of AI applications and technology only in limited circumstances, leaving a gap in the international export control framework. Until this gap is closed, international human rights due diligence might provide fallback protection for mitigating the risks associated with the proliferation of dual-use AI.
It has become a truism that the Internet gives a range of private actors, such as social media platforms, substantial power. They are thus able to control communication processes, hold considerable authority over shaping opinions, and act as arbiters of free speech. That is why legal scholars and policymakers are searching for legal tools that would ensure a fair balance between the conflicting rights of two groups of private actors: platforms and their users.
The aim of this presentation would be to reconsider the relationship between individuals and online platforms, to analyze how horizontal online conflicts may be resolved (giving examples from national legislation and the EU proposal concerning digital services), and to answer the question whether the discretion of the platforms can be limited in order to protect rights and freedoms. The theoretical framework of the analysis would be the doctrine of the State’s positive obligations, as established in the current case law of the European Court of Human Rights.
The main argument would be that it is necessary to strengthen public supervision over Internet platforms, in particular over the way they resolve horizontal conflicts. The possibility of limiting their discretion in order to provide individual protection does not, however, mean creating an unlimited right of access to a platform in order to express any opinion or view (freedom of forum).
A short presentation of the corresponding conference paper "A soft shell with a powerful core? Soft Europeanisation and social policy: a new understanding of the Open Method of Coordination and its potential to enhance social welfare in Europe", focusing on its theoretical idea and the empirical evidence.