Legal Alert
How the Proposed EU Artificial Intelligence Act May Impact Your Business
March 20, 2024
The European Union’s Artificial Intelligence Act recently passed an important milestone and is now on the verge of becoming law. If it passes a final vote by the EU member states on April 10 or 11, as expected, it will become what is widely considered the world’s first substantive artificial intelligence (AI) legislation.
Does the Act Apply to My Company?
The AI Act applies to companies in EU member states as well as foreign companies that sell or use AI in the EU, including, for example, U.S. companies that offer access to AI applications as a service or on-premises.
The AI Act will also likely influence the development of AI law in the United States, where states such as Utah, which recently passed the Artificial Intelligence Policy Act, have already begun legislating in this area. As such, we advise businesses to become familiar with the EU’s AI Act and begin to ascertain what is necessary for compliance.
What Is the Purpose of the Act?
The EU’s AI Act is designed primarily to address risks associated with privacy and bias in AI. It regulates the various types of AI in use through a risk-based approach, banning some applications outright and restricting others depending on their use.
"The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models," the European Commission states. "While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes."
The AI Act’s DNA can be traced to the work of international organizations, primarily the Organization for Economic Cooperation and Development (OECD). Accordingly, we expect to see frequent regulatory reviews and an emphasis on corporate governance.
Requirements Based on Levels of Risk
"Limited Risk" = Transparency Obligations
On the lower end of the AI Act’s restriction spectrum are general-purpose AI applications such as ChatGPT. The primary requirement for these systems is that developers and end users maintain, and publicly demonstrate the use of, transparent processes and procedures. Compliance will most likely require documenting how these applications were developed and how they are used, as well as identifying AI-generated content via a watermark or other similar method.
"High-Risk" = Strictly Regulated
Next in line are AI applications that are deemed “high-risk.” These range from vocational training applications that may determine access to education, to applications impacting employment outcomes (e.g., resume-sorting software for recruitment), to applications used in essential services such as healthcare and banking, to those used in elections. High-risk AI systems will be subject to strict rules before they can be put on the market. These include adequate risk assessment and mitigation systems; appropriate human oversight measures; and a high level of robustness, security, and accuracy.
"Unacceptable Risk" = Banned
AI applications that pose an “unacceptable risk” to basic human values are banned. Such applications include, for example, AI that uses emotion recognition in the workplace or in an education environment, real-time biometrics (such as face scans), and toys using voice assistance that encourage dangerous behavior.
What Should My Company Do Next?
Any company assessing, implementing, or deploying AI should review and strengthen applicable processes and procedures. U.S. companies should pay attention to developments related to the EU’s AI Act, as it is likely that we will see similar emphasis in federal and state legislation in the near future.
More specifically, developers and end users of high-risk and limited-risk applications will need to pay close attention to implementing best practices for their respective systems. Developers of high-risk AI applications, for example, will need to routinely conduct risk assessments, document the results, and make them available for regulatory review.
We Can Help
Maslon can help guide your company through its assessment, implementation, and deployment of AI. We can help you determine whether the EU’s AI Act applies to you and help you remain compliant with other emerging AI and privacy regulations.