Unregulated AI is a threat to global security
A global convention is necessary and inevitable – does the EU have it right?
By Arwa Emhemed
Ever since OpenAI released ChatGPT, the world has been swept up in discussions about generative artificial intelligence (AI) and the future it may shape for the rest of us. Generative AI models learn patterns from their training data and then produce new text, images, or other media with similar characteristics.
These tools have been quickly adopted in every conceivable sector of our society, from art and music to education, business, and even politics and policy. They are enormously powerful, and we have only begun to scratch the surface of their potential scope and influence.
But as with all disruptive technologies, opportunity comes with danger. While there appears to be consensus that generative AI on its own is a mere tool without a mind of its own, controversy surrounds the intentions behind its use and whether it should be regulated. This conversation has been underpinned by disillusionment over harms such as hate speech, data privacy violations, and other problems associated with the unregulated use of generative AI.
Because unregulated artificial intelligence (AI) poses economic and political threats to the world at large, we can expect to see united global efforts calling for its regulation. While three key players – the United States, China, and the EU – have made regulatory efforts, the EU stands out for its highly precautionary approach, banning some uses of AI while allowing others.
However, these efforts remain confined to the regional level, with no comparable global effort to regulate the use of artificial intelligence. A global convention could be the solution we need to minimize the risks posed by AI, and lessons learned from the EU AI Act can point us in the right direction.
This year, the European Parliament passed the AI Act with 499 votes in favor, 28 against, and 93 abstentions. The Act categorizes AI tools according to their potential risk, taking into account how the misuse of AI can lead to gross violations of fundamental rights.
The EU AI Act defines three risk-based categories:
● Unacceptable Risk: Prohibiting technology that uses real-time remote biometric identification, social scoring, and the like.
● High-Risk: Strictly regulating systems deemed to threaten safety or fundamental rights. This vague classification leaves open questions about what exactly is included, but such systems have been loosely defined as algorithms that discriminate against individuals based on a set of criteria.
● Limited or Low-Risk: Technologies that operate transparently and inform users, giving them the choice of whether to continue interacting.
Under the Act, generative AI developers must submit their models for review before releasing them commercially. Violations are punishable by fines of up to 7% of a corporation's annual global turnover.
These positive steps by the EU can serve as a baseline for a global convention regulating the use of AI. Its risk-based approach is already echoed by other countries, which makes it easier to gain acceptance on a global scale.
What remains elusive, however, is the coordination and implementation of such efforts in a world system that has global governance but no overarching government. These efforts could be headed by the United Nations (UN), in its capacity as an intergovernmental organization, through a multilateral agreement, or 'AI Regulation Treaty,' that draws on the EU's approach.
To do this, the UN should create a committee to oversee the development and implementation of such an agreement based on a multi-stakeholder approach that leverages the expertise of developers, politicians, and human rights activists.
Whether the agreement is enforceable would depend on its successful ratification, which is the real challenge. Finding common ground for AI regulation across states with different values and principles is difficult, and certain countries have national practices that conflict with an international approach. China's "social credit system," for instance, contradicts the EU's definition of 'unacceptable risk,' which could complicate global consensus on an international AI agreement.
Consequently, any global initiative would require a delicate balance: accommodating diverse practices and values while still producing a comprehensive convention that reflects the tangible human rights concerns associated with unregulated AI use. Creating incentives for countries to ratify the agreement should be at the heart of these efforts, fostering compliance and an effective treaty.
Crucially, the agreement should take a narrow approach to defining the regulatory scope of AI. While a narrow definition would leave some types of AI models out of scope, a broad definition risks sweeping in algorithms that produce no harm. The hierarchical risk approach proposed by the EU strikes this balance, leaving open the option of different regulatory frameworks for different algorithm types.
A global approach certainly does not come without risks. It may crowd out localized approaches to risk assessment, potentially increasing the cost of AI development and training, or worse, it may be misused by authoritarian governments to violate individual freedoms under the pretext of managing AI risk.
For a healthy global convention to develop, it must be adaptive and keep pace with the most recent advancements in AI. It also requires inclusive collaboration, built on public participation and the contributions of relevant stakeholders to the formulation of AI policy.
State governments will be better equipped to overcome these challenges if they adopt regulations for us as global citizens, rather than for themselves as governors.
Arwa holds a Bachelor's Degree in Political Science from the American University of Beirut. Her final capstone project was dedicated to the development of an app referral system aimed at actively engaging community members in implementing an intervention to combat illegal migration from Lebanon. Alongside her academic pursuits, she gained practical experience in diverse fields, notably during her time at the i4policy Foundation, where she contributed to policy drafts and conducted research to assess policy-related initiatives. Arwa now seeks to leverage her experience and education to formulate policy recommendations to improve women's rights, particularly in Libya and the MENA region.
Good article, Arwa! AI may be scary, fast-moving, and difficult for many of us to grasp, but there is an encouraging precedent. In the 1940s, nuclear technology offered a range of hard-to-evaluate promises and threats; some thought it would spell the end of civilization. Yet we got through the past 78 years with few lives lost to nuclear technology, and with many benefits in electricity, food safety, medical treatment, and more.
This happened because we built a multi-tiered regulatory structure that works (peer-to-peer, national and international). AI may be scary but we can't put it back in the can; we'll have to contain its risks and make it work for us.