News In Brief World News

Google, Amazon, and OpenAI Push to Influence EU’s AI Act Framework

21 Sep 2024
6 min read

News Synopsis

In a final bid to shape how artificial intelligence (AI) will be regulated, the world’s leading tech companies are pushing the European Union (EU) to adopt a light-touch approach to the AI Act, a landmark piece of legislation aimed at governing AI systems. These firms are attempting to reduce the risk of facing billions in fines by advocating for more lenient enforcement mechanisms.

EU lawmakers reached agreement on the AI Act in May, making it the first comprehensive set of rules anywhere in the world to regulate artificial intelligence. The deal came after months of intense negotiation between political factions. However, how strictly the rules will be enforced remains uncertain until the accompanying codes of practice are finalized.

This ambiguity leaves open questions about how strictly the regulations will apply to "general-purpose" AI (GPAI) systems such as OpenAI’s ChatGPT, and how many copyright infringement lawsuits or multi-billion-dollar fines could follow as a result.

"The code of practice is crucial. If we get it right, we will be able to continue innovating," stated Boniface de Champris, a senior policy manager at CCIA Europe, an organization representing major tech companies such as Amazon, Google, and Meta. "If it's too narrow or too specific, that will become very difficult," he added, underscoring the delicate balance between innovation and regulation.

The AI Code of Practice: A Vital Document for Compliance

Although the AI code of practice won’t be legally binding when it comes into effect late next year, it will provide companies with a compliance framework—a checklist to demonstrate their adherence to the law. However, companies that claim to follow the law but ignore the guidelines in the code could face legal challenges.

To aid in drafting this code, the EU has invited companies, academics, and other stakeholders to contribute. Nearly 1,000 applications have been received, an unusually high number, according to a source close to the process who spoke on condition of anonymity.

Data Scraping and Copyright Concerns

A critical issue surrounding AI systems, particularly in Europe, is the use of copyrighted material to train AI models. Companies like Stability AI and OpenAI have come under scrutiny for potentially using copyrighted works, including best-selling books and photo archives, without obtaining permission from the creators.

Under the AI Act, firms will be required to provide "detailed summaries" of the datasets used for training their models. In theory, this transparency would enable content creators to seek compensation if they discover their work was used without permission. This legal landscape is still evolving and is being actively tested in court.

Some business leaders argue that the summaries should include only minimal details to protect trade secrets, while others emphasize the right of copyright holders to know if their content has been used. OpenAI, which has faced criticism for being opaque about its training data, is among the companies that have applied to participate in the working groups drafting the code of practice, according to another unnamed source.

"The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," stated Maximilian Gahntz, AI policy lead at the Mozilla Foundation, expressing concerns that companies may be trying to avoid transparency.

Innovation vs. Regulation: A Balancing Act

There has been criticism from some within the business community that the EU's focus on regulating technology has come at the expense of fostering innovation. Balancing these two priorities will be key for those drafting the code of practice.

Recently, former European Central Bank President Mario Draghi urged the EU to adopt a more coordinated industrial policy, make decisions more swiftly, and invest heavily to stay competitive with China and the United States. Meanwhile, Thierry Breton, the EU's Internal Market Commissioner and a vocal advocate for tech regulation, resigned from his role following a clash with Ursula von der Leyen, President of the European Commission.

This backdrop of growing protectionism within the EU has led to calls from homegrown tech companies for regulatory carve-outs that would benefit emerging European startups. Maxime Ricard, policy manager at Allied for Startups, a network representing smaller tech firms, commented, "We've insisted these obligations need to be manageable and, if possible, adapted to startups."

Once the code is finalized in early 2025, companies will have until August 2025 to bring themselves into compliance. Several non-profit organizations, including Access Now, the Future of Life Institute, and Mozilla, are also actively involved in drafting the code.

Conclusion

As the European Union moves closer to finalizing the world’s first comprehensive AI Act, the ongoing efforts by tech giants highlight the delicate balance between fostering innovation and ensuring robust regulation.

While companies like Google, Amazon, and OpenAI lobby for a lighter regulatory touch, arguing that strict rules would stifle growth and expose them to hefty fines, concerns over transparency, data usage, and copyright infringement remain at the forefront.

The forthcoming AI code of practice, though not legally binding, will serve as a crucial guide for compliance, shaping the future landscape of AI development in Europe. As stakeholders work to craft the code, the challenge will be to create a framework that holds companies accountable while still enabling technological advancement.

The deadline for compliance in August 2025 looms large, and the outcome of these discussions could have far-reaching implications not only for Europe but for global AI governance.