Exclusive interview with MEP Axel Voss on the AI Act: what’s next?

24/05/2024
UGGC - Interview 01

Written by Anne-Marie Pecoraro, Rodolphe Boissau & Estelle Groff

A few weeks after the adoption of the AI Act and before the European elections, which will take place between 6 and 9 June 2024, we were honoured to speak to MEP Axel Voss, shadow rapporteur for the European Parliament’s Legal Affairs Committee on the legislation on artificial intelligence.

A Member of the European Parliament since 2009, the German politician is committed to digital issues and has also been shadow rapporteur on the General Data Protection Regulation[1] and the Copyright Directive[2].

Who will be covered by the AI Act?

Axel Voss is delighted to see that for the first time, “the Parliament has managed to adopt a text with such extra-territoriality, which says to international players: if you use works protected in the European Union to run your algorithm outside the European Union, you are obliged to respect that copyright”.

According to Article 2, the AI Act applies to:

“(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;

(b) users of AI systems located within the Union;

(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union;”

For the MEP, a global approach is needed because personal data knows no borders. Such an approach will also be needed to address issues related to the interaction between copyright and artificial intelligence, as regards the remuneration of copyright holders.

On this subject, he also returned to the balance between respect for copyright and the promotion of innovation, an important point in the text and the subject of much debate. Indeed, the development of artificial intelligence requires the collection of significant amounts of data, including copyrighted works, to produce viable and accurate results. Rights holders and innovators have therefore each tried to influence the negotiation of the text to protect their interests.

Intellectual property protection, limited by trade secrets?

Prior to the adoption of the AI Act, there was no requirement for developers of artificial intelligence systems to be transparent about their training data. This raised the question of how to ensure that publishers exercising their opt-out right, i.e. the right to refuse to have their works reproduced for training purposes, would not see their works used as training data.

The text seeks to remedy this problem by requiring AI developers to publish a sufficiently detailed summary of the training data used by generative AI, so that authors can know whether their content has been exploited and, if so, claim compensation. However, the French executive has managed to include a reference to trade secrets, which weakens the transparency obligation and creates legal uncertainty over the implementation of this provision.

Axel Voss told us that the initial idea was to achieve a fair sharing of the information burden relating to foundation models, the limit being commercial confidentiality. The next step is to consider the most effective way of ensuring that copyright, the right to privacy, and intellectual property rights in general are respected.

For the MEP, “we must now urge the European Commission to draw up clear guidelines very quickly in order to reduce this legal uncertainty”.

Ensuring remuneration for copyright holders

Regarding remuneration, and in particular the collection of royalties, Axel Voss hopes that we will not end up with the same controversial situations that we have seen with social media platforms.

The challenge will be to ensure fair and equitable remuneration for rights holders, in particular for press publishers that publish articles daily, but also, in return, to give developers of artificial intelligence systems better access to protected works.

In this respect, the MEP argues that it will be necessary to align all copyright regimes to find effective means to detect protected works in training data and the results generated, or to identify and remunerate their holders in a machine-readable manner.

Discussions along these lines need to be initiated with the developers of artificial intelligence systems and the creative industries. Axel Voss has also launched a survey on issues already raised or encountered by stakeholders in order to identify all outstanding issues.

In particular, the Commission proposes to take the 2019 Copyright Directive as a starting point, which would involve a reform of its Article 4 on “text and data mining”. According to the MEP, the provisions of that directive cannot be applied to artificial intelligence as they stand, since text and data mining may be used to reproduce works or to generate similar works, which would ultimately allow the text to be circumvented for commercial purposes. Although parasitism or unfair competition could be invoked, Article 4 of the Directive will probably need to be revised and adapted accordingly.

Legal uncertainty about the classification of high-risk systems

The obligations set out in the text depend on the level of risk of the AI system used (unacceptable, high, limited, or minimal) and the actor involved (supplier, distributor, user, other).

High-risk systems (as defined and listed by the European Commission) are subject to several obligations relating to documentation, risk management, governance, transparency or security, depending on the actor’s qualification (supplier, user, distributor and other third parties). These systems must also be registered in an EU database and bear the CE marking.

The overall spirit of the text was to strike a balance between the protection of citizens and the promotion of innovation. Axel Voss explains to us that “When you interpret these provisions, you should not have an extreme interpretation in mind, but a protective one. That was the idea behind this risk approach”.

On this point, the MEP acknowledges that a good balance has not been struck under Article 6 of the text, and that “the result will probably be legal uncertainty for the next 40 years because of this vague wording”.

It is likely going to be difficult for companies to determine for themselves whether they fall under the classification of high-risk AI systems. According to the MEP, it should be up to the AI Office and the European Commission to clarify the regulation in the interests of European innovation and development. Their guidelines and clarifications will be needed.

Articulation between the General Data Protection Regulation and the AI Act

For Axel Voss, the European Parliament “has set up a different system than that of the GDPR. While each Member State can interpret the provisions of the GDPR, the AI Act considers the cross-border case as a kind of common path to follow”.

Thus, the system adopted under the AI Act is better in that it provides a single, competitive interpretation for Europe, one that Member States can challenge if necessary.

The AI Office will interpret the text from the point of view of the internal market, rather than that of the Member States. One of the challenges going forward will be to find an area of overlap between the provisions of the GDPR and those of the AI Act.

The processing of personal data does not have the same purpose as the processing of training data for an AI system. To develop competitive AI systems, the text establishes ‘sandboxes’ that will allow actors to test their innovative technology or service without necessarily having to comply with the full regulatory framework that would normally apply.

In particular, the aim is to enable small and medium-sized companies to generate high-quality data. In this sense, the MEP would like to see the emergence of a concept of sandboxes based on a sectoral approach and not on a competitive approach at the state level, in particular between Germany and France.

Axel Voss also mentioned the possibility for developers of artificial intelligence systems to use synthetic data, i.e. data artificially generated by an AI algorithm that has been trained on a set of real data. The algorithm generates new data with the same characteristics as the original data, but, most importantly, non-personal data, making it impossible to reconstruct the original data, either from the algorithm or from the synthetic data it has generated.
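The mechanism described above, a generator that retains only aggregate characteristics of the real data so that no individual original record can be read back, can be illustrated with a minimal Python sketch. This is our own illustration, not from the interview; the dataset and the Gaussian model are hypothetical, and real synthetic-data pipelines are considerably more sophisticated:

```python
import numpy as np

def generate_synthetic(real_data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Sample synthetic records from a Gaussian fitted to the real data.

    Only aggregate statistics (mean and covariance) of the real data are
    retained by the generator, so individual original records cannot be
    recovered from it or from its output.
    """
    rng = np.random.default_rng(seed)
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Hypothetical "real" dataset: 1000 records with two numeric features.
rng = np.random.default_rng(42)
real = rng.normal(loc=[10.0, 5.0], scale=[2.0, 1.0], size=(1000, 2))

# The synthetic sample mirrors the real data's aggregate statistics.
synthetic = generate_synthetic(real, n_samples=500)
```

The design choice here reflects the point made in the interview: the synthetic records share the statistical characteristics of the originals (useful for training), while no row of `real` appears in, or is reconstructible from, `synthetic`.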

The next steps

  • The text is expected to enter into force in mid-May.
  • The AI Office has reportedly already begun to work on guidelines to supplement the text with additional measures.
  • The text provides for a gradual implementation of the following measures:
Measure | Deadline for entry into force (from publication of the regulation)
Prohibited practices (Article 5) | 6 months
Rules governing general-purpose AI | 1 year
Sanctions | 1 year
Setting up national monitoring bodies | 1 year
Other measures | 2 years
Classification of high-risk AI systems (to be assessed by a third party) | 3 years

Note: the French Data Protection Authority (Commission nationale de l’informatique et des libertés – CNIL) has already published its recommendations on the development of artificial intelligence systems.


[1] REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC

[2] DIRECTIVE (EU) 2019/790 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC