The New Era of AI Regulation: Exploring the EU's AI Act and Biden's Executive Order

June 17, 2024

The European Union (EU) has made significant strides in the regulation of Artificial Intelligence (AI) with the recent updates to the EU AI Act. This act represents a pioneering effort to create a legal framework for the development and use of AI technologies, addressing a range of concerns from data privacy to ethical implications.

Having covered the EU AI Act in a recent newsletter article, we now have some updates on the Act, which was initially proposed by the European Commission in April 2021 as part of the EU's Digital Strategy.

The Act aims to establish better legal conditions for AI development and use, ensuring safety, transparency, and respect for fundamental rights.
In what follows, we look at the recent developments and outline the implications of the AI Act for existing and future businesses.

Key Aspects of the Act

The Act introduces a nuanced, risk-based categorization for AI systems, dividing them into four levels (unacceptable, high, limited, and minimal risk) based on the potential risk they pose.
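Purely as an illustration of this tiered scheme, the categorization can be sketched as a simple lookup. The use-case names and the mapping below are hypothetical examples chosen for clarity; the Act itself defines the categories in legal language, not as a table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a use case, defaulting to minimal."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The point of the sketch is the structure: obligations scale with the tier, from an outright ban at the top to essentially no new requirements at the bottom.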

In terms of data governance, the Act emphasizes the need for transparency and accountability. It mandates clear disclosure when AI is involved in content generation and requires detailed documentation of the data used in training AI systems. This measure aims to address concerns around copyright and rights management.

A significant aspect of the Act is its stance on biometric surveillance and social scoring. It restricts indiscriminate biometric surveillance and outright bans social scoring practices by governments or private entities, citing privacy and ethical concerns. This move is seen as a critical step in protecting individual rights and freedoms in the digital age.

To ensure compliance, the Act establishes a dedicated EU authority responsible for overseeing its implementation and adherence, ensuring uniform application across member states.

Industry Implications

The EU AI Act, with its comprehensive regulatory framework for Artificial Intelligence, presents a range of implications for the AI industry. These implications are multifaceted, affecting various aspects of AI development, deployment, and management.

Key areas include:

  - Strategic shifts and compliance challenges
  - Transparency vs. intellectual property
  - Investment in data quality and bias management
  - Administrative burden and market dynamics
  - Human oversight in system design
  - Financial risks from non-compliance
  - Legal advisory and preparedness

AI regulations in the United States

While the EU is the front runner in AI regulation, the US, under President Joe Biden's administration, has also initiated steps toward regulating Artificial Intelligence (AI) through an executive order, matching the EU's efforts in addressing the challenges and opportunities presented by AI technologies.

Let's take a look at the key aspects of the executive order:

  1. Government-Centric Focus: The US executive order primarily targets the deployment of AI within government sectors, emphasizing security as a core consideration. This approach marks an initial step in setting a broader policy agenda for AI governance in the US.
  2. Sectoral Approach to AI Governance: The order mandates every government agency to examine AI's relevance to their policy and regulatory jurisdictions, advancing a sectoral approach to AI governance. This includes considerations of data privacy and a call for Congress to pass relevant legislation.
  3. International Engagement and AI Ethics: The executive order highlights the importance of international engagement and establishing AI ethics, aligning with global trends and geopolitical tensions. It emphasizes the US's intent to lead the global conversation on AI ethics by example.
  4. Regulatory Burdens and Challenges: The executive order introduces regulatory burdens on AI, emphasizing safety, privacy, equity, and consumer protection, essential for building trust in AI technologies.
  5. Comparison with EU's AI Act: Unlike the EU AI Act, which is legislation with enforcement, the US executive order relies on the market influence of the federal government. This difference highlights a more market-driven approach in the US compared to the EU's regulatory-focused strategy.

Comparing the EU Act to the US AI Regulation

The EU AI Act and the US executive order on AI represent two different approaches to AI governance. With the AI Act, the EU's approach is more regulatory and comprehensive, categorizing AI systems based on risk and imposing strict compliance requirements, especially for high-risk AI applications. In contrast, the US approach, as outlined in the executive order, focuses more on setting a policy agenda and guiding principles, particularly for AI deployment within the government sector.

Both the EU and the US recognize the importance of international engagement and ethics in AI, but their methods of implementation and enforcement differ. The EU's approach is more prescriptive and enforceable, while the US leans towards a sectoral and principle-based approach.

Ultimately, any regulation will be a balancing act between enabling responsible development and use of the technology on the one hand and maintaining the competitiveness of companies in the respective jurisdictions on the other. At the end of the day, we believe that a well-developed regulatory framework will benefit companies by providing legal certainty, and we look forward to the developments to come.

Get Started

Contact us today to learn how we can bring your ideas to life with our custom-built AI solutions!