EU’s AI Act wins fresh backing ahead of April vote


European Union (EU) legislation that would set guardrails for the use and development of AI technology appears to be on a clear path toward ratification as two key groups of legislators in the EU Parliament on Tuesday approved a provisional agreement on the proposed rules.

The EU Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) and Committee on the Internal Market and Consumer Protection (IMCO) approved the AI Act with an “overwhelmingly favorable vote,” putting the rules “on track to become law,” Dragoș Tudorache, an EU Parliament member and chair of the EU’s Special Committee on AI, wrote in a post on X (formerly Twitter).

The rules, on which the EU Parliament will formally vote in April, require organizations and developers to assess AI capabilities and place them into one of four risk categories — minimal, limited, high, and unacceptable risk. The act is the first comprehensive government legislation to oversee how AI will be developed and used, and has been met with both approval and caution from technologists.

“Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly,” the EU said in describing the legislation online. “AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”

Set up for simplicity

At its core, the regulation is simple, said Nader Henein, a Gartner research vice president for data protection and privacy and a Fellow of Information Privacy. “It requires that organizations (and developers) assess their AI capabilities and place them in one of the four tiers defined by the act,” he said. “Depending on the tier, there are different responsibilities that fall on either the developer or the deployer.”
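The tiering process Henein describes can be pictured as a simple triage over an inventory of AI use cases. The sketch below is purely illustrative: the tier names come from the Act, but the keyword lists, function names, and matching logic are hypothetical stand-ins for the detailed criteria in the Act's annexes, not an actual compliance tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical keyword lists standing in for the Act's detailed criteria.
PROHIBITED_PRACTICES = {"social scoring", "predictive policing"}
HIGH_RISK_DOMAINS = {"hiring", "credit scoring", "law enforcement", "education"}
TRANSPARENCY_CASES = {"chatbot", "deepfake"}

def classify_use_case(description: str) -> RiskTier:
    """Roughly bucket an AI use case into one of the four tiers."""
    text = description.lower()
    if any(p in text for p in PROHIBITED_PRACTICES):
        return RiskTier.UNACCEPTABLE   # banned outright
    if any(d in text for d in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH           # conformity assessments required
    if any(t in text for t in TRANSPARENCY_CASES):
        return RiskTier.LIMITED        # transparency obligations apply
    return RiskTier.MINIMAL            # e.g. spam filters, game AI

print(classify_use_case("resume screening for hiring").value)  # high
print(classify_use_case("email spam filter").value)            # minimal
```

A real assessment would, of course, weigh the context of deployment rather than keywords, but the shape of the exercise is the same: enumerate use cases, assign each a tier, and derive obligations from the tier.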


Some advocacy groups and even an analysis by the US government have pushed back against the AI Act, however. Digital Europe, an advocacy group that represents digital industries across the continent, released a joint statement in November ahead of the Act’s final weeks of negotiations warning that over-regulation could stymie innovation and cause startups to leave the region. The group urged lawmakers not to “regulate” new AI players in the EU “out of existence” before they even get a chance.

Henein argued that the law’s mandates “are in no way a hindrance to innovation. Innovation by its nature finds a way to work within regulatory bounds and turn them into an advantage,” he said.

Adoption of the rules “should be straightforward” as long as developers and resellers provide clients with the information they need to conduct an assessment or be compliant, Henein said.

Still, one tech expert said some criticisms of the AI Act’s prescriptive nature and vague language are valid, and that its relevance might not last because regulations rarely move at the pace of technology.

“There are some parts of the regulation that make a lot of sense, such as banning ‘predictive policing’ where police are directed to go after someone just because an AI system told them to,” said Jason Soroko, senior vice president of product at Sectigo, a certificate lifecycle management firm. “But, there are also parts of the regulation that might be difficult to interpret, and might not have longevity, such as special regulations for more advanced AI systems.”

More restrictions in the offing?

Further, enterprises could face compliance challenges as they build a catalog of existing AI use cases and then map those use cases to the Act’s tiering structure, Henein said.

“Many organizations think they are new to AI when in fact, there is nearly no product of note they have today that does not have AI capabilities,” Henein said. “Malware detection tools and spam filters have relied on machine learning for over a decade now; they fall in the low-risk category of AI systems and require no due diligence.”

If the EU votes to approve the act in April, as seems likely, other countries might follow. Several nations — the US, UK, and Australia among them — already have put in place government-led groups to oversee AI development; more formal regulations could follow.

Still, any new rules will likely apply only to the most extreme cases, those in which AI presents significant harm to people or society. Uses that are responsible and even beneficial, such as boosting worker productivity with generative AI chatbots built on large language models (LLMs) like OpenAI’s ChatGPT, will likely see little oversight.

“What we are seeing on both sides of the Atlantic is the need to restrict certain use cases outright; these fall under the prohibited category under the AI Act and present serious harm,” Henein said.