In an era where artificial intelligence (AI) is reshaping the contours of society, the European Union (EU) is poised to make history. As Dragoș Tudorache, Member of the European Parliament and Vice-President of the Renew Europe Group, aptly put it, "Artificial intelligence does have a profound impact on everything we do and therefore it was time to bring in some safeguards and guardrails on how this technology will evolve for the benefit of our citizens." (The Guardian, 2023) Against this backdrop, the EU has taken action by developing the EU AI Act, a regulatory framework we will examine more closely in the following.
The EU AI Act, often dubbed the "GDPR for AI", is a testament to the EU's commitment to ensuring that AI evolves in harmony with human rights and societal values. It takes a risk-based approach: AI systems posing unacceptable risks, such as voice-activated toys promoting dangerous behavior or AI-based social scoring systems, are banned outright, thereby raising awareness of the importance of responsible AI among businesses, regulators, and the wider public.
The AI Act's deliberations on AI-powered live facial recognition exemplify the challenges of balancing security with privacy.
Here, the EU draft regulation was updated this year to categorize AI models and tools as "high" risk or "unacceptable." AI tools and uses deemed "unacceptable" will be banned outright in the EU. This includes "remote biometric identification systems," or facial-recognition technology; "social scoring," or categorizing people based on economic class and personal characteristics; and "cognitive-behavioral manipulation," such as voice-activated AI-powered toys.
Generative AI models fall under the EU AI Act's broader category of foundation models: models trained on vast and diverse datasets to produce a wide range of outputs. The Act mandates providers of generative AI to implement state-of-the-art safeguards against producing content that violates EU laws and to transparently disclose the use of copyrighted training data. Furthermore, there's an emphasis on transparency, especially when generative AI is used to create manipulative content, such as "deep fakes."
Beyond these specific obligations, generative AI systems, being a subset of foundation models, must also adhere to broader obligations set for foundation models, including risk mitigation, unbiased dataset usage, and energy efficiency. The Act also outlines stringent compliance monitoring mechanisms, with substantial fines for non-compliance, ranging up to 7% of the total worldwide annual turnover or EUR 40 million, whichever is higher.
However, as discussions between the European Parliament and the European Council continue, the final legislative text may undergo changes, reflecting the dynamic nature of AI developments.
Now that we have examined the EU AI Act, let's consider the implications this EU initiative might have for other jurisdictions.
The US's Calculated Approach
The US is behind the EU when it comes to regulating AI. Last month, the White House said it was "developing an executive order" on the technology and would pursue "bipartisan regulation." While the White House has been actively seeking advice from industry experts, the Senate has convened one hearing and one closed-door "AI forum" with leaders from major tech companies.
Neither event resulted in much action, despite Mark Zuckerberg being confronted during the forum with the fact that Meta's Llama 2 model gave a detailed guide for making anthrax. Still, American lawmakers say they're committed to some form of AI regulation. "Make no mistake, there will be regulation," Sen. Richard Blumenthal said during the hearing.
The UK's Aspirational Stance
The UK, meanwhile, wants to become an "AI superpower," a March paper from its Department for Science, Innovation and Technology said. While the government body has created a regulatory sandbox for AI, the UK has no immediate intention of introducing any legislation to oversee it. Instead, it intends to assess AI as it progresses. "By rushing to legislate too early, we would risk placing undue burdens on businesses," Michelle Donelan, the secretary of state for science, innovation, and technology, said.
Brazil's Human Rights-Centric Approach
In a draft legislation update earlier this year, Brazil looked to take a similar approach to the EU in categorizing AI tools and uses by "high" or "excessive" risk and to ban those found to be in the latter category. The proposed law was described by the tech-advisory firm Access Partnership as having a "robust human rights" focus while outlining a "strict liability regime." With the legislation, Brazil would hold creators of an LLM liable for harm caused by any AI system deemed high risk.
China's Restrictive Stance
China, despite its widespread usage of tech such as facial recognition for government surveillance, has enacted rules on recommendation algorithms and "deep synthesis" tech. Now it's looking to regulate generative AI. One of the most notable rules proposed in draft legislation would mandate that any LLM, and its training data, be "true and accurate." That one requirement alone could be enough to keep consumer-level generative AI out of China almost entirely, given that generative models produce probabilistic outputs and cannot guarantee that what they generate is true or accurate.
Our Perspective on the EU AI Act
The EU AI Act represents a significant stride in establishing regulatory frameworks for artificial intelligence, much akin to MiCAR in the blockchain/crypto space. We firmly believe that the early implementation of sound regulations plays a pivotal role in fostering economic growth within this sector. Thus, it is up to experts, the public, and politicians to collaboratively ensure the formulation of judicious rules that facilitate business development and societal advancement across the European Union.
Contact us today to learn how we can bring your ideas to life with our custom-built AI solutions!