EU & Bay Roundtable with Kai Zenner on AI Act: Three Takeaways

Sep 14, 2022

AI

AI regulation is in its infancy, and we need diverse expertise to get it right. It’s top of mind for tech leaders, policymakers, researchers, developers, lawyers, and founders around the world. That’s why AITLANTIC hosted a roundtable discussion with Kai Zenner, whom many describe as the “most important man” behind the “first international law on AI”. In the European Parliament, he co-authors and drives the progress of the AI Act, the EU’s approach to regulating AI.

The AI Act aims to take a horizontal, sector-agnostic approach to regulating AI systems and addressing risks around issues like bias, transparency, and manipulation. It is slated to pass by the end of 2023, but many practical questions remain about its implementation and enforcement.

We don’t expect the reader to be entirely familiar with the AI Act. The goal of the roundtable was for Kai to receive feedback on the open-source exemption, the technical feasibility of Article 10, and the practicability of the articles on the value chain (Art. 28, 28a), foundation models (Art. 28b), and the labeling of AI-generated content (Art. 52). In this post, we focus on three main takeaways:


Regulating Open-Source AI

One major point of discussion in the AI Act is its lack of governance for open-source AI models. While the Act exempts open-source systems to avoid stifling innovation, many experts worry this leaves significant room for harm. Malicious uses of AI like disinformation campaigns often rely on open-source models that can be freely distributed and modified.

Kai acknowledged the risk of open-source AI remaining in a "legal vacuum" under the Act. But he explained the thinking behind the exemption: regulators "didn't want to hamper" the open-source communities that drive much of AI's progress. Governing decentralized open-source ecosystems also poses thorny legal questions of responsibility.

This remains an open challenge as policymakers balance promoting AI innovation with managing emerging threats. One option proposed at the roundtable was to regulate only large, well-resourced entities building on major open-source models.


Mitigating Bias in Training Data

A recurring point of debate was Article 10 of the AI Act, which mandates data governance processes to ensure that training data for high-risk AI systems is free of bias and inaccuracies. Developers at the roundtable warned this would prove difficult to implement and audit in practice.

With foundation models relying on massive datasets pooled from diffuse sources, startups often lack visibility into upstream data provenance. Tracking down and validating every data source across this supply chain is a major challenge. Moreover, balancing datasets to mitigate bias can conflict with other requirements, such as data minimization under the GDPR.

While well-intentioned, Article 10 may pose compliance burdens for small companies building on third-party models and datasets. As one participant put it, the spirit is willing but the specifics are challenging. Kai acknowledged that enforcement here will require flexibility as experience accumulates, but oversight of training data remains an important piece of ensuring trustworthy AI.


Adapting Regulation to AI's Speed

A theme across the discussions was the need to future-proof AI regulation against rapid technological progress. While the two-to-three-year timeframe to develop the Act was fast by policymaking standards, AI advances even faster. This risks new laws being outdated before they take effect.

Kai shared that his EU colleagues aim to finalize the Act quickly to score a political win. But he advocated taking time to get the details right, based on feedback from technical experts like those at our roundtable. Updating frameworks through "incorporation by reference" and public-private partnerships can help regulation adapt.

No law can ever fully keep pace with tech innovation. However, regular consultation between policymakers, companies, and researchers is crucial to ensuring AI regulation balances flexibility and responsibility. With future AITLANTIC events, we hope to facilitate this exchange across the Atlantic as new laws take shape.


Key Takeaways

Our roundtable with Kai Zenner underscored the complexity of regulating AI, and the need for nuanced policy shaped by multidisciplinary discourse. As an early milestone in governing AI, the EU AI Act sets the tone for responsible innovation. Especially in the hot phase of the coming months, as Spain pushes to finalize the Act before the end of its Council presidency, collaboration between tech experts and policymakers must continue.

AITLANTIC is committed to driving this conversation forward. Stay tuned for future posts delving deeper into the topics above and other key insights from our community at the intersection of AI developers, researchers, and policy leaders.
