
EU's AI Act sparks debate: balancing regulation and innovation

2024.04.12 02:51:07 Jimin Youn

[artificial intelligence, Photo Credit: Pexels] 

As the European Union takes a decisive step forward in regulating artificial intelligence (AI), the enactment of the EU's AI Act has ignited a contentious debate, pitting the imperative of regulation against the drive for innovation.


Since the debut of ChatGPT, European officials have rushed to issue regulations and warnings to tech companies, and this week marked a watershed moment in defining the EU's artificial intelligence (AI) rules.


The European Parliament passed the Artificial Intelligence Act on March 13, 2024. The law adopts a risk-based approach, requiring companies to ensure their products are legally compliant before making them available to the public.


A day later, under separate legislation, the European Commission asked Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X to clarify how they are mitigating the risks of generative AI.


The EU is primarily concerned with AI hallucinations (when models make mistakes and fabricate information), the viral spread of deepfakes, and AI-driven manipulation that could mislead voters in elections. The tech community, however, has its own concerns about the legislation, and some researchers believe it does not go far enough.


In particular, Max von Thun, Europe director of the Open Markets Institute, is most concerned about tech monopolies.


"The AI Act is incapable of addressing the number one threat AI currently poses: its role in increasing and entrenching the extreme power a few dominant tech firms already have in our personal lives, our economies, and our democracies," he stated.


In a similar vein, he warned that monopolistic misuse in the AI ecosystem should concern the European Commission.


"The EU has to realize that the size and influence of the leading corporations creating and implementing AI technology are closely related to the concerns that AI poses. Before addressing the latter, you can't effectively handle the former," von Thun stated.


The possibility of artificial intelligence monopolies gained attention in February when it was revealed that Microsoft and the French startup Mistral AI were collaborating.


It was shocking to some in the EU because France had campaigned for the AI Act to include provisions for open source businesses like Mistral.


Nonetheless, a number of startups welcomed the clarity of the new regulation.


As co-founder and CEO of the French open-source AI business Giskard, Alex Combessie remarked, "The EU Parliament's final adoption of the EU AI Act is both a historic moment and a relief."


He stated to Euronews Next, "We're confident that these checks and balances can be effectively implemented, even though the Act imposes additional constraints and rules on developers of high-risk AI systems and foundation models, deemed as systemic risks."


"This historic moment paves the way for a future where AI is responsibly harnessed, fostering trust and guaranteeing everyone's safety," he continued.


The regulation distinguishes the risks posed by foundation models based on the computing power used to train them.


AI products that exceed that computing-power threshold are subject to additional rules.


Like other definitions in the act, the categorization is regarded as a starting point and is subject to revision by the Commission.


However, not everyone agrees with the classification.


Katharina Zügel, policy manager at the Forum on Information and Democracy, stated, "In my opinion, AI systems used in the information space should be classified as high-risk, requiring them to adhere to stricter rules, which is not explicitly the case in the adopted EU AI Act."


"The Commission, which has the ability to modify the use cases of high-risk systems, could explicitly mention AI systems employed in the information space as high-risk, taking into account their impact on fundamental rights," she stated to Euronews Next.


"Our shared future cannot be solely driven by private enterprises. AI has to benefit the people," she continued.


Others, however, contend that companies must be allowed to interact with the EU and have a voice as well.


"As the private sector will be the engine of AI in the future, it is imperative that the EU capitalize on its dynamism. Making Europe more competitive and appealing to investors will depend on getting this right," said Julie Linn Teigland, managing partner of EY's Europe, Middle East, India, and Africa (EMEIA) division.


Companies in the EU and beyond must be proactive to prepare for the law's implementation, she said, by "taking steps to ensure that they have an up-to-date inventory of the AI systems they are developing or deploying, and determining their position in the AI value chain to understand their legal responsibilities."


That might mean a lot more work for small and medium-sized businesses as well as startups.


Marianne Tordeux Bitker, public affairs chief at France Digitale, remarked, "This decision has a bittersweet taste."


While the AI Act addresses a major challenge in terms of ethics and transparency, and despite several accommodations intended for startups and SMEs, most notably regulatory sandboxes, it nonetheless imposes substantial obligations on any organization using or developing artificial intelligence.


The AI Act is set in stone, but the real test will come in putting it into practice. The emphasis now is on how to implement and enforce it effectively.


Additional focus on supplementary legislation is also needed, according to Risto Uuk, EU research lead at the nonprofit Future of Life Institute, who spoke with Euronews Next.


These supplementary measures include the EU AI Office, which seeks to expedite enforcement of the rules, and the AI Liability Directive, which supports liability claims for harm caused by AI-enabled products and services.


According to Uuk, "The key things to ensure that the law is worth the paper it's written on are that the AI Office has resources to perform the tasks it has been set and that the codes of practices for general-purpose AI are well drafted with the inclusion of civil society."

Jimin Youn / Grade 11
Seoul Academy Upper Division