AI Models and Regulatory Compliance: The European Code of Conduct Proposal

Introduction

In the evolving European regulatory environment, artificial intelligence (AI) represents one of the most innovative and transformative technologies of our time. With the entry into force of the AI Act on 1 August 2024, the European Union has taken a significant step towards regulating this technology, aiming to ensure that the development and adoption of AI take place in a manner that respects the fundamental rights and safety of users. Against this backdrop, the draft Code of Conduct for Providers of General Purpose AI Models, the final version of which is expected by 1 May 2025, aims to act as a bridge towards harmonised compliance standards and to serve as a practical guide for those working in this complex field.

As the speakers themselves noted, the main objective of the Code is to establish clear, shared guidelines that are 'future-proof', i.e. able to adapt to the technological developments and needs of the next decade, and that enable AI model providers to operate in a safe and responsible environment while promoting innovation and competitiveness in the industry.

This article will explore the highlights and most relevant components of the draft Code of Conduct, dividing the analysis into two main parts: general-purpose AI models and general-purpose AI models with systemic risk. The latter will be examined in a forthcoming, dedicated analysis; the present article focuses on the former.

General Purpose AI Models

Definition and Obligations

According to the AI Act, a general-purpose AI model is an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, rather than being optimised for a single application. These models are characterised by a flexibility that allows them to adapt to different situations and contexts, making them powerful tools but also potentially risky if not managed correctly. This characteristic entails specific responsibilities for providers, as these models can be integrated into downstream systems serving very different purposes.

The Code of Conduct imposes a number of obligations on providers of such models to ensure that their development and deployment comply with European regulations and fundamental ethical principles. These obligations include comprehensive technical documentation, covering details such as the model's architecture, number of parameters and energy consumption specifications, which must be made available to the AI Office and to downstream providers, thus ensuring traceability and transparency in the use of AI models.
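By way of illustration only, the sketch below shows one way a provider might organise this documentation internally as a structured record that can be exported on request. The field names (model name, architecture, parameter count, training energy, intended tasks, known limitations) are assumptions drawn from the elements mentioned above, not a format prescribed by the draft Code.

```python
# Illustrative only: a hypothetical internal record for the technical
# documentation a provider might keep and share with the AI Office and
# downstream providers. Field names are assumptions, not prescribed.
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelTechnicalDocumentation:
    model_name: str
    architecture: str                 # e.g. "decoder-only transformer"
    parameter_count: int              # total trainable parameters
    training_energy_kwh: float        # estimated energy used for training
    intended_tasks: list[str]         # tasks the model is designed to support
    known_limitations: list[str]      # limitations relevant to downstream use

    def to_json(self) -> str:
        """Serialise the record so it can be shared on request."""
        return json.dumps(asdict(self), indent=2)


doc = ModelTechnicalDocumentation(
    model_name="example-gpai-7b",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000,
    training_energy_kwh=450_000.0,
    intended_tasks=["text generation", "summarisation"],
    known_limitations=["no factual guarantees", "English-centric training data"],
)
print(doc.to_json())
```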

Fundamental Principles of the Code

The Code of Conduct is closely aligned with the principles established by the European Union and the AI Act, emphasising the importance of regulation that is proportionate to the risks associated with the use of AI models. One of the fundamental principles, central to the European legislator's approach, is the proportionality of the measures taken with respect not only to the risks identified but also to the capabilities of the provider, ensuring that mitigation strategies do not stifle innovation but rather steer it towards responsible use. In this respect, the draft Code proposes less stringent measures for SMEs, start-ups and open-source projects (except those posing a systemic risk), which by their nature have limited resources and capacity to devote to the demanding compliance obligations of the AI Act.

Furthermore, the Code promotes support for the AI ecosystem by encouraging collaboration between providers, researchers and regulators. This collaborative approach is essential to create an environment in which the safety and reliability of AI models are continuously monitored and improved, while fostering the development of innovative solutions that meet societal needs.

Rules and Obligations for Providers of General-Purpose AI Models

One of the key components of the Code concerns the specific rules to which providers of general-purpose AI models should adhere. Among these, as mentioned in the preceding paragraph, technical documentation plays a central role: providers that wish to adopt the Code of Conduct will be required to describe the model's training, testing and validation processes, including detailed information on the data used, such as 'acquisition methods, fractions of data from different sources and main characteristics'.
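Purely as an illustration of what reporting the 'fractions of data from different sources' could look like in practice, the following sketch summarises a hypothetical training corpus by source and acquisition method; the source names and token counts are invented for the example.

```python
# Illustrative sketch: summarising training-data provenance in the spirit of
# "acquisition methods, fractions of data from different sources and main
# characteristics". All sources and figures below are hypothetical.

# (source, acquisition_method, number_of_tokens) — assumed bookkeeping format
training_sources = [
    ("web_crawl", "crawled", 600_000_000),
    ("licensed_corpus", "licensed", 250_000_000),
    ("public_domain_books", "downloaded", 150_000_000),
]

total_tokens = sum(tokens for _, _, tokens in training_sources)
for source, method, tokens in training_sources:
    share = tokens / total_tokens
    print(f"{source:22s} {method:12s} {share:6.1%} of training tokens")
```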

Another key aspect concerns copyright protection. The Code imposes an obligation to respect copyright, including in the collection and use of data during the training phase of the model. This means that providers must ensure that the data used to train AI models do not infringe the intellectual property rights of third parties, taking preventive and corrective measures to avoid potential infringements. In emphasising the need to respect the text and data mining exceptions under Directive (EU) 2019/790, the Code requires those who train such models, for example, to honour robots.txt files, which may restrict automated access to web-accessible content.
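As a minimal, non-authoritative sketch of what honouring robots.txt during data collection might look like, the snippet below uses Python's standard urllib.robotparser module to check whether a page may be fetched before it is collected for training; the crawler name and URLs are hypothetical.

```python
# Minimal sketch: check robots.txt before collecting a page for training,
# consistent with respecting machine-readable reservations when relying on
# the text-and-data-mining exceptions of Directive (EU) 2019/790.
# The user agent and URLs below are examples, not real infrastructure.
from urllib import robotparser

CRAWLER_USER_AGENT = "example-gpai-crawler"  # hypothetical crawler name


def may_fetch(url: str, robots_url: str) -> bool:
    """Return True only if robots.txt allows this crawler to fetch the URL."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # performs a network request to download robots.txt
    return parser.can_fetch(CRAWLER_USER_AGENT, url)


if may_fetch("https://example.com/articles/1", "https://example.com/robots.txt"):
    print("Allowed by robots.txt: the page may be collected for training.")
else:
    print("Disallowed by robots.txt: skip this page.")
```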

Transparency in how the model is distributed and used is a further essential requirement. According to this first draft, providers will be required to clearly communicate to end users how the AI model works, including its limitations and potential applications, ensuring that the technology is used in an informed and aware manner.

Conclusion

The Code of Conduct for Providers of General Purpose AI Models, although still in draft form, represents a first step towards shared and operational regulation. With its final version expected by May 2025, the Code will provide a key reference for preparing for the AI Act obligations on model providers, which become directly applicable as of 2 August 2025.

Here we have analysed the provisions on general purpose AI models. However, the regulatory and technical details related to models with systemic risk remain to be explored, a topic that requires specific attention and will be the subject of a forthcoming in-depth study.

A further aspect worthy of interest is the balance the Code strikes between risk-proportionate obligations and flexible measures for players such as start-ups and SMEs. This approach not only recognises the differences in capabilities between the various actors, but also ensures that a one-size-fits-all approach does not penalise innovation. It therefore becomes essential that providers, regardless of their size, treat the Code not only as a compliance tool but also as a means of operational efficiency, capable of improving the management and traceability of models throughout their life cycle.

Finally, it should be emphasised that, being a draft, the Code leaves room for further improvements and adjustments. This consultation period is an opportunity for providers to actively contribute to shaping the rules that will influence the industry.