November 23, 2024

How will the EU’s proposed AI regulation impact consumers?

What is the bill’s definition of AI, how does it aim to safeguard consumers from misuse, and what are the opinions of major tech companies on it?

The European Parliament endorsed the European Union’s proposed AI law on Wednesday, marking a significant milestone in technology regulation. The vote represents a crucial step toward enacting the legislation.

The law is now expected to receive final approval from a council of ministers, making it law within weeks. However, the act will be implemented gradually, with a series of deadlines for compliance over the next three years.

“This will enable users to trust that the AI tools they use have been rigorously reviewed and are safe,” said Guillaume Couneson, a partner at the law firm Linklaters. “It’s similar to users of banking apps having confidence that the bank has implemented stringent security measures to ensure their safe use.”

The bill’s impact extends beyond the EU because Brussels is a significant tech regulator, as demonstrated by the influence of GDPR on data management. The AI act could have a similar effect.

“Many countries will closely monitor the EU’s actions following the adoption of the AI act. Other regions may only adopt a similar approach if the EU’s model proves effective,” Couneson explained.

What is the bill’s definition of artificial intelligence?

AI can be broadly defined as a computer system that performs tasks typically requiring human-like intelligence, such as writing an essay or creating a drawing.

The act provides a more detailed definition, describing the AI technology it governs as a “machine-based system designed to operate with varying levels of autonomy,” which clearly includes tools like ChatGPT.

The system can learn on the job and make decisions that impact physical or virtual environments. The legislation bans systems with unacceptable risks but exempts those for military, defence, or national security use. It also doesn’t apply to systems for scientific research and innovation. This exemption concerns advocates like Kilian Vieth-Ditlmann from Algorithmwatch, who fear it allows member states to bypass crucial AI regulations.

What measures does the bill take to address the risks associated with AI?

Certain systems will be prohibited, including those that manipulate people to cause harm; “social scoring” systems that classify people based on social behavior or personality, such as the one in Rongcheng, China, where the city rated residents’ behavior; predictive policing resembling Minority Report; monitoring emotions at work or in schools; “biometric categorization” systems that classify people based on biometric data (retina scans, facial recognition, fingerprints) to infer characteristics like race, sexual orientation, political opinions, or religious beliefs; and compiling facial recognition databases by scraping facial images from the internet or CCTV.

Exclusions for law enforcement

Facial recognition has been a contentious issue in the legislation. The use of real-time biometric identification systems, including facial recognition on live crowds, is prohibited, except for law enforcement under certain conditions. Law enforcement can utilize this technology to locate a missing person or prevent a terrorist attack, but they must obtain approval from authorities. In exceptional cases, it can be used without prior approval.

What happens with systems that are considered risky but not prohibited?

The legislation includes a special category for “high-risk” systems, which are legal but subject to close monitoring. This category includes systems used in critical infrastructure such as water, gas, and electricity, as well as those used in education, employment, healthcare, and banking. Certain law enforcement, justice, and border control systems are also covered. For example, a system used to determine someone’s admission to an educational institution or their job application outcome will be classified as high-risk.

These tools must be accurate, undergo risk assessments, have human oversight, and their usage must be logged. EU citizens are also entitled to request explanations about decisions made by these AI systems that have affected them.

How does the legislation address generative AI?

Generative AI, which refers to systems that generate plausible text, images, videos, and audio from basic prompts, falls under the provisions for what the legislation terms “general-purpose” AI systems.

The legislation introduces a two-tiered approach. In the first tier, all developers of AI models must adhere to EU copyright law and provide detailed summaries of the content used to train the model. It is uncertain how already-trained models will comply, and some are facing legal challenges. OpenAI is being sued by The New York Times, and StabilityAI is being sued by Getty Images for alleged copyright infringement. Open-source models, which are freely accessible to the public unlike “closed” models such as ChatGPT’s GPT-4, are exempt from the copyright requirement.

A more stringent tier applies to models that present a “systemic risk” due to their human-like “intelligence,” likely including chatbots and image generators. Measures for this tier include reporting significant incidents caused by the models, such as fatalities or violations of fundamental rights, and conducting “adversarial testing,” where experts attempt to bypass a model’s safeguards.

How does the legislation address deepfakes?

Individuals, companies, or organizations creating deepfakes must disclose whether the content has been artificially generated or manipulated. Even if created for artistic, creative, or satirical purposes, the content still needs to be labeled appropriately, without hindering its display or enjoyment.

Text generated by chatbots that provides information on matters of public interest must be labeled as AI-generated, unless it has undergone human review or editorial control, which exempts content with human oversight. Developers of AI systems must also ensure that their output can be identified as AI-generated, either through watermarking or other means of flagging the material.

What are the opinions of AI and tech companies?

The bill has elicited a mixed response. While the largest tech companies have expressed general support for the legislation, they are cautious about its specifics. Amazon stated its commitment to collaborating with the EU for the responsible development of AI, while Meta, led by Mark Zuckerberg, cautioned against overregulation, emphasizing the need to preserve AI’s potential for fostering innovation and competition.

In private, responses have been more critical. A senior figure at a US company noted that the EU has set a limit for the computing power used to train AI models much lower than similar proposals in the US. Models trained with more than 10^25 “flops” of computing power will face onerous requirements to prove they do not pose systemic risks. This may lead European companies to relocate to the US to avoid EU restrictions.

What penalties are outlined in the legislation?

Penalties under the act will vary: from €7.5 million or 1.5% of a company’s total worldwide turnover (whichever is higher) for providing incorrect information to regulators, to €15 million or 3% of worldwide turnover for breaching specific provisions like transparency obligations, to €35 million or 7% of turnover for deploying or developing banned AI tools. Smaller companies and startups will face more proportionate fines.

The obligations will take effect 12 months after the act becomes law, with the prohibition of certain categories beginning six months after that. Providers and users of high-risk systems have three years to comply. Additionally, a new European AI office will be established to set standards and act as the primary oversight body for GPAI models.

Copyright © All rights reserved | WebbSocial |