The New Draft Code of Conduct for General Purpose AI Models: Systemic Risk

With the evolution of Artificial Intelligence (AI), concerns about the potential systemic risks associated with General Purpose AI Models (GP-AIM) have become increasingly relevant. The recent draft Code of Conduct dedicated to these models, drawn up under the AI Act, provides a detailed outline of how to address these challenges, introducing new obligations and risk mitigation techniques.

In our previous article, we analysed the features and obligations for general purpose AI models outlined in the draft Code of Conduct. We examined the basic principles and specific rules aimed at ensuring transparency, regulatory compliance and copyright protection.

In this article, we will focus on general-purpose AI models that present systemic risk, an area that requires additional technical, organisational and governance measures. These models, which by their nature have a potentially more significant and widespread impact, require a thorough regulatory approach to manage the dangers that could arise from their adoption and use.

General-purpose AI models and systemic risk in the AI Act

General-purpose AI models are defined by the AI Act as AI models designed to be adaptable to multiple tasks and application contexts, often undefined at the time of development. These models are distinguished by their versatility, but this very characteristic makes them particularly vulnerable to misuse or unintended use. The AI Act further classifies them into two categories: models with systemic risk and models without it. Systemic risk, as defined in Article 3(65) of the AI Act, refers to potential far-reaching effects on society, the economy or the environment resulting from system vulnerabilities or malfunctions.

A model is presumed to present systemic risk when the cumulative amount of computation used for its training exceeds the threshold of 10^25 FLOPs (floating-point operations). This level of computational investment implies that the model is capable of performing extremely complex tasks, significantly increasing its potential systemic impact.
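For a sense of scale, cumulative training compute is often approximated with the heuristic of roughly 6 floating-point operations per parameter per training token. The Python sketch below checks such an estimate against the threshold; the heuristic and the model figures are illustrative assumptions, not a method prescribed by the AI Act.

    # Rough estimate of cumulative training compute against the AI Act's
    # 10^25 FLOPs presumption threshold. The "6 * N * D" heuristic
    # (~6 FLOPs per parameter per training token) is a common
    # approximation, not a formula prescribed by the Act.
    THRESHOLD_FLOPS = 10 ** 25

    def estimated_training_flops(parameters: float, tokens: float) -> float:
        """Approximate cumulative training compute in floating-point operations."""
        return 6 * parameters * tokens

    # Hypothetical model: 500 billion parameters, 10 trillion training tokens.
    flops = estimated_training_flops(500e9, 10e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")              # 3.00e+25
    print("Presumed to present systemic risk:", flops > THRESHOLD_FLOPS)  # True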

The taxonomy of systemic risk

The Code of Conduct introduces an articulated taxonomy of systemic risks, divided into types, nature and sources. Risk types include threats such as the offensive use of cyber capabilities, the proliferation of chemical or biological weapons, loss of control over autonomous models, and mass manipulation through disinformation. Consistently with Article 3(2) of the AI Act, which defines risk as the combination of the probability of harm and its severity, these risks must be assessed by considering both the severity and the probability of each event. In particular, the classification must be based on standardised metrics to ensure consistency and comparability between different application scenarios.

The nature of risks is described through dimensions such as origin, intent (intentional or unintentional), speed of occurrence and visibility. For example, a risk might emerge gradually yet be difficult to detect, complicating mitigation strategies. As specified in the draft Code, the analysis should also cover the potential impact on end users and critical infrastructure. Finally, sources of risk include dangerous capabilities of models, undesirable propensities such as bias or confabulation, and socio-technical factors, such as the mode of distribution or social vulnerability.

Obligations for providers of models with systemic risk

Providers of GP-AIMs with systemic risk are subject to more stringent obligations than providers of other models. As stipulated in Article 55(1) of the AI Act, they must:

  1. Assess the model according to standardised protocols, including adversarial testing to identify vulnerabilities and mitigate risks. Article 55(2) emphasises that providers may rely on codes of practice under Article 56 to demonstrate compliance with these obligations. Compliance with European harmonised standards grants providers a presumption of conformity to the extent that those standards cover the obligations in question.
  2. Mitigate risks at EU level by documenting their sources and taking corrective measures. These measures must be updated periodically according to technological and regulatory developments.
  3. Ensure the cyber security of both the model and related physical infrastructure.
  4. Monitor and report serious incidents, maintaining constant communication with the AI Office and the relevant national authorities, including a detailed description of the incidents and the corrective actions taken.

It follows that implementing these obligations requires a considerable investment in resources, expertise and infrastructure. This makes synergy between the different players in the AI value chain essential so that they can share knowledge and tools to jointly address the challenges of regulatory compliance and responsible innovation.

Mitigation strategies and governance

The Code of Conduct proposes a framework for security and governance: providers will have to implement a Safety and Security Framework (SSF) that brings together technical and organisational measures aimed at preventing, detecting and responding effectively to identified risks, ensuring that AI models operate safely and reliably throughout the product life cycle. Specifically, the main measures include:

  • Continuous risk identification and analysis: through robust methodologies, providers must map hazardous capabilities, propensities and other sources of risk, categorising them by level of severity.
  • Evidence gathering: the use of rigorous assessments and advanced methodologies, such as red-teaming exercises and simulations, is critical to understanding the potential and the limitations of models. In addition, providers are required to engage in an ongoing process of gathering evidence on the specific systemic risks of their general-purpose AI models. This process, outlined in the Code, involves advanced methods, including forecasting and the best available assessments, to investigate the capabilities, propensities and other effects of these models.
  • Mitigation measures: providers will be required to map potential elements of systemic risk to proportionate mitigation measures, based on AI Office guidelines where available. The mapping should be designed to keep systemic risks below an intolerable level and to minimise them further beyond that threshold. In addition, providers commit to detailing how each level of severity, or indicator of risk, translates into specific security and mitigation measures, in line with available best practices (a schematic sketch follows this list). This process aims not only to contain risks within acceptable levels, but also to promote continuous risk reduction through iterative and thorough analysis.
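By way of illustration only, the commitment to translate severity levels into concrete measures can be pictured as a simple lookup table. The tiers and measures in this Python sketch are hypothetical placeholders, not categories taken from the draft Code or from AI Office guidance.

    # Hypothetical mapping from risk severity tiers to mitigation measures.
    # Tier names and measures are illustrative placeholders only.
    MITIGATIONS_BY_SEVERITY = {
        "low": ["standard pre-deployment evaluation", "usage monitoring"],
        "elevated": ["adversarial (red-team) testing", "staged rollout"],
        "critical": ["deployment hold pending review", "notification duties"],
    }

    def required_mitigations(severity: str) -> list[str]:
        """Return the measures for a tier; unknown tiers escalate to critical."""
        return MITIGATIONS_BY_SEVERITY.get(severity, MITIGATIONS_BY_SEVERITY["critical"])

    print(required_mitigations("elevated"))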

The draft Code of Conduct represents a crucial step towards responsible regulation of general-purpose AI models. While still at a preliminary stage, it offers a structured framework for identifying, assessing and mitigating systemic risks, helping to create a balance between safety and innovation. However, the path to full implementation is complex and raises fundamental questions, such as the harmonisation of assessment methodologies, the creation of shared standards and the precise definition of severity levels.

This challenge represents not only a regulatory obligation for suppliers, but also a strategic opportunity to demonstrate ethical and technical leadership in an evolving industry. The ability to address these issues with transparency and foresight can become a differentiator for organisations, helping to build an ecosystem of trust between regulators, users and developers.

AI Models and Regulatory Compliance: The European Code of Conduct Proposal

Introduction

In the evolving European regulatory environment, artificial intelligence (AI) represents one of the most innovative and transformative technologies of our time. With the entry into force of the AI Act on 1 August 2024, the European Union has taken a significant step towards regulating this technology, aiming to ensure that the development and adoption of AI takes place in a manner that respects the fundamental rights and security of users. In this landscape, the draft Code of Conduct for Providers of General Purpose AI Models, the final version of which is expected by 2 May 2025, aims to act as a bridge towards harmonised compliance standards and as a practical guide for those working in this complex field.

As stated by the drafters themselves, the main objective of the Code is to establish clear and shared guidelines that are ‘future proof’, i.e. able to adapt to the technological evolutions and needs of the next decade, and that enable AI model providers to operate in a safe and responsible environment, while promoting innovation and competitiveness in the industry.

This article will explore the highlights and most relevant components of the draft Code of Conduct, dividing the analysis into two main parts: general-purpose AI models and models with systemic risk. The latter, given the specific attention it requires, is examined in a dedicated in-depth article.

General Purpose AI Models

Definition and Obligations

According to the AI Act, a general-purpose AI model is defined as an artificial intelligence model designed to perform a wide range of tasks without being specifically optimised for a single application. These models are characterised by a flexibility that allows them to adapt to different situations and contexts, making them powerful tools but also potentially risky if not managed correctly. This peculiar characteristic entails unique responsibilities for providers, as these models can be integrated into downstream systems with different purposes.

The Code of Conduct imposes a number of obligations on providers of such models to ensure that the development and deployment of AI models comply with European regulations and fundamental ethical principles. These obligations include comprehensive technical documentation, covering details such as the architectural structure, the number of parameters and energy consumption specifications, which must be provided to the AI Office and to downstream providers, thus ensuring traceability and transparency in the use of AI models.
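Purely as an illustration, the kind of information involved could be organised as a structured record. The field names and figures in this Python sketch are hypothetical; neither the AI Act nor the draft Code prescribes this exact schema.

    # Hypothetical structure for a model's technical documentation.
    # Field names and values are illustrative, not a schema defined
    # by the AI Act or the draft Code.
    model_documentation = {
        "model_name": "example-gpai-1",           # hypothetical identifier
        "architecture": "decoder-only transformer",
        "parameter_count": 70_000_000_000,
        "training_energy_kwh": 1_250_000,         # illustrative figure
        "intended_tasks": ["text generation", "summarisation"],
        "distribution_channels": ["API", "downstream integration"],
    }

    # Shared with the AI Office and downstream providers for traceability.
    for field, value in model_documentation.items():
        print(f"{field}: {value}")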

Fundamental Principles of the Code

The Code of Conduct is closely aligned with the principles established by the European Union and the AI Act, emphasising the importance of regulation that is proportionate to the risks associated with the use of AI models. One of the fundamental principles, central to the European legislator's approach, is the proportionality of the measures taken with respect not only to the risks identified but also to the capabilities of the provider, ensuring that mitigation strategies do not stifle innovation but rather secure its responsible use. In this respect, the draft Code proposes less stringent measures for SMEs, start-ups and open-source projects (except those posing a systemic risk), which by their very nature have limited resources and capacity to devote to the demanding compliance obligations of the AI Act.

Furthermore, the Code promotes support for the AI ecosystem by encouraging collaboration between providers, researchers and regulators. This collaborative approach is essential to create an environment in which the safety and reliability of AI models are continuously monitored and improved, while fostering the development of innovative solutions that meet societal needs.

Rules and Obligations for Providers of General-Purpose AI Models

One of the key components of the Code concerns the specific rules to which providers of general-purpose AI models should adhere. Among these, as mentioned in the preceding paragraph, technical documentation plays a central role: providers who wish to adopt the Code of Conduct will be required to describe the model’s training, testing and validation processes, including detailed information on the data used, such as ‘acquisition methods, fractions of data from different sources and main characteristics’.

Another key aspect concerns copyright protection. The Code imposes an obligation to respect copyright, including in the collection and use of data during the training phase of the model. This means that providers must ensure that the data used to train AI models do not infringe the intellectual property rights of third parties, taking preventive and corrective measures to avoid potential infringements. In emphasising the need to respect the text and data mining exceptions in Directive (EU) 2019/790, the Code requires those who train such models, for example, to respect robots.txt files, through which rightsholders may restrict automated access to web-accessible content.
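In practical terms, a crawler collecting training data can check a site's robots.txt before fetching a page. A minimal sketch using Python's standard library follows; the crawler name and URLs are hypothetical examples.

    # Minimal robots.txt compliance check using the Python standard library.
    # The user agent and URLs are hypothetical examples.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    url = "https://example.com/articles/some-page"
    if rp.can_fetch("ExampleTrainingBot", url):
        print("Allowed: the page may be fetched by this crawler.")
    else:
        print("Disallowed: robots.txt reserves this content from automated access.")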

Transparency in how the model is distributed and used is a further essential requirement. According to this first draft, providers will be required to clearly communicate to end users how the AI model works, including its limitations and potential applications, ensuring that the technology is used in an informed and aware manner.

Conclusion

The Code of Conduct for Providers of General Purpose AI Models, although still in draft form, represents a first step towards shared and operational regulation. With its final version expected by 2 May 2025, the Code will provide a key reference for preparing for the AI Act obligations on model providers, which become directly applicable on 2 August 2025.

Here we have analysed the provisions on general purpose AI models. However, the regulatory and technical details related to models with systemic risk remain to be explored, a topic that requires specific attention and will be the subject of a forthcoming in-depth study.

A further aspect worthy of interest is the introduction of a balance between risk-proportionate obligations and flexible measures for realities such as start-ups and SMEs. This approach not only recognises the differences in capabilities between the various actors, but also ensures that a one-size-fits-all approach does not penalise innovation. It therefore becomes essential that providers, regardless of their size, consider the Code as a tool not only for compliance but also for operational efficiency, capable of improving the management and traceability of models throughout their life cycle.

Finally, it should be emphasised that, being a draft, the Code leaves room for further improvements and adjustments. This consultation period is an opportunity for providers to actively contribute to shaping the rules that will influence the industry.

Scraping and Generative Artificial Intelligence: the Data Protection Authority’s Notice

Automated online data collection, commonly known as web scraping, has become a widespread practice in many sectors for data analysis and the development of applications based on generative artificial intelligence. However, this practice raises important legal issues, especially in relation to the protection of personal data. Recently, the Italian data protection authority (Garante per la protezione dei dati personali) issued specific guidelines on the measures to be taken to mitigate the risks associated with web scraping. This article examines the new guidelines in detail, exploring their legal implications and best practices for compliance.

What is Web Scraping?

Web scraping is the process of automatically extracting data from websites using specific software, known as a scraper. These programmes can automatically browse web pages, collect structured and unstructured data, and save it for further analysis. Web scraping can be performed through various methods, including:

  • HTML parsing: Parsing the HTML code of web pages to extract specific information (see the sketch after this list).
  • APIs: Use of programming interfaces to access data offered by websites.
  • Bots: Automated programmes that simulate human navigation to collect data.
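To make the first of these methods concrete, here is a minimal HTML-parsing sketch using Python's standard library. It runs on an inline sample page, so no live website is scraped.

    # Minimal HTML parsing example using the standard library.
    # The sample page is inline, so no website is actually accessed.
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect the href attribute of every anchor tag."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

    sample_html = '<html><body><a href="/page1">One</a> <a href="/page2">Two</a></body></html>'
    parser = LinkExtractor()
    parser.feed(sample_html)
    print(parser.links)  # ['/page1', '/page2']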

Risks Associated with Web Scraping

Although web scraping may have legitimate applications, such as collecting information for market analysis, it is often associated with less legitimate uses, such as the harvesting of personal data for commercial or even fraudulent purposes. The indiscriminate use of web scraping may in fact entail various legal and security risks, such as:

  • Breach of privacy: The collection of personal data without an appropriate legal basis may violate privacy regulations such as the GDPR.
  • Abuse of Terms of Service: Many websites prohibit web scraping in their terms of service, and violating these terms may lead to legal action.
  • Data security: Bulk data collection may expose information to security risks, such as unauthorised access or malicious use of data.

The Authority’s Notice

The Garante per la protezione dei dati personali (Italian Data Protection Authority) has recently published a document providing guidance on how to manage the risks associated with web scraping. The notice focuses on several aspects that revolve around the protection of personal data and compliance with existing regulations. Below are the main recommendations:

  • Creation of Restricted Areas: one of the measures suggested is the creation of restricted areas on websites, accessible only after registration. This practice reduces the availability of personal data to the general public and can act as a barrier against indiscriminate access by bots. This will also make it possible to monitor who accesses the data and to what extent, improving traceability and accountability. On the other hand, it is crucial that the collection of data for registration is proportionate and respects the principle of data minimisation.
  • Clauses in the Terms of Service: the inclusion of specific clauses in the Terms of Service explicitly prohibiting the use of web scraping techniques is another effective tool. These clauses can act as a deterrent and provide a legal basis for taking action against those who violate these conditions.
  • Network Traffic Monitoring: implementing monitoring systems to detect anomalous data flows can help prevent suspicious activity. Adopting measures such as rate limiting makes it possible to cap the number of requests coming from specific IP addresses, helping to reduce the risk of excessive or malicious web scraping (a minimal sketch follows this list).
  • Technical interventions on bots: The document also suggests techniques to limit bot access, such as implementing CAPTCHAs or periodically modifying the HTML markup of web pages. These interventions, although not conclusive on their own, can make scraping more difficult.
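As an illustration of the rate-limiting measure mentioned above, a simple sliding-window limiter per client IP could look like the following Python sketch. The 60-requests-per-minute threshold is a hypothetical example, not a value recommended by the Garante.

    # Illustrative sliding-window rate limiter keyed by client IP.
    # The threshold is a hypothetical example, not a recommended value.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS = 60

    _requests = defaultdict(deque)  # client IP -> timestamps of recent requests

    def allow_request(client_ip: str) -> bool:
        """Return True if the client is still within its request budget."""
        now = time.monotonic()
        window = _requests[client_ip]
        # Drop timestamps that have fallen out of the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS:
            return False  # likely excessive or automated traffic
        window.append(now)
        return True

    print(allow_request("203.0.113.7"))  # True on the first request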

Conclusions

The Data Protection Authority’s statement represents a significant step forward in regulating the use of web scraping and the protection of personal data. For operators of websites and online platforms, it is crucial to take the recommended measures to ensure regulatory compliance and protect users’ personal data.

Compliance with data protection regulations is not only a legal obligation, but also a key element in building and maintaining user trust. Companies must be proactive in adopting data protection best practices and monitoring regulatory developments.

Contact us

If you have questions or need legal assistance with regard to web scraping and data protection, our firm is at your disposal. Contact us for a personal consultation and to find out how we can help you navigate the complex landscape of privacy regulations.

When is the EU representative required under the GDPR?

Perhaps not everyone is aware that Article 27 of the GDPR requires companies located outside the EU that process the personal data of individuals in the EU to appoint a European representative.

In brief, the representative’s role is to act as a point of contact between the data controller, located outside the territory of the EU, and national data protection authorities and data subjects.

As an obligation imposed only on non-European companies, it is perhaps unsurprising that this requirement has historically received little attention within the European Union.

Nonetheless, companies that fail to comply with this requirement can face substantial fines.

In this article we try to answer some of the most frequently asked questions about the EU representative.

What is the role of an EU representative under the GDPR?

The role of an EU representative under the General Data Protection Regulation (GDPR) is to act as a point of contact for EU data protection authorities and for the individuals whose personal data is processed by the non-EU based organization that the representative represents. Although the representative is not responsible for the organization’s compliance with the GDPR, they may still be required to cooperate with and assist the DPAs in carrying out their tasks. This includes responding to inquiries from individuals whose personal data is processed by the organization and providing information to data protection authorities when requested. The EU representative is also responsible for maintaining a record of the organization’s processing activities and for making that record available to data protection authorities upon request.

Should my company appoint an EU representative?

Whether a company is required to appoint an EU representative under the General Data Protection Regulation (GDPR) depends on several factors. The GDPR requires non-EU based organizations that:

  • offer goods or services to individuals in the EU, or
  • monitor the behavior of individuals in the EU,

to appoint an EU representative if they do not have a physical presence in the EU.

According to the EDPB Guidelines 3/2018 on the territorial scope of the GDPR, several factors need to be considered in order to determine whether a company is offering goods or services to individuals in a particular territory within the EU. Some of these factors are:

  • using the languages of a specific region or offering payments in the currency of that region;
  • using Google, Facebook or TikTok ads to target a specific market, or any other marketing activity directed at customers in that market;
  • the use of top-level domains in that market;
  • offering delivery of goods to individuals in the European region.

Furthermore, it is important to note that the GDPR applies to organizations of all sizes, so even if your company is small, it may still be required to appoint an EU representative. It is always best to consult a legal advisor to determine whether your company falls under this obligation.

What happens if I do not appoint an EU representative under the GDPR?

If a non-EU based organization that is required to appoint an EU representative under the General Data Protection Regulation (GDPR) does not do so, it may be subject to penalties and fines. The GDPR provides for a range of administrative fines, including fines of up to 20 million euros or 4% of the organization’s global annual revenue, whichever is greater, for violations of certain provisions of the GDPR. Failing to appoint an EU representative when required to do so could be considered a violation of the GDPR, and could result in the organization being fined. Additionally, EU data protection authorities may take other enforcement actions against the organization, such as requiring it to appoint an EU representative or suspending or prohibiting the processing of personal data. It is important for non-EU organizations to comply with the GDPR and appoint an EU representative if required to do so.

How to appoint an EU representative?

To appoint an EU representative under the General Data Protection Regulation (GDPR), your company can take the following steps:

  • Identify an individual or organization based in the European Union (EU) that is willing and able to act as your company’s EU representative.
  • Have the EU representative sign a written mandate that outlines the scope of their responsibilities and the duration of their appointment.
  • Keep a copy of the mandate on file, along with any other relevant documents, such as proof of the EU representative’s identity and location.
  • Make the contact information for your company’s EU representative available on your website and in your privacy policy, and provide it to any individuals or data protection authorities who request it.

It is important to note that the EU representative must be based in the EU and must be easily accessible to individuals and data protection authorities. The representative must also be able to communicate in the language(s) used by the individuals and authorities with whom they will be interacting. It is also important to ensure that the EU representative is able to fulfill their responsibilities under the GDPR and is familiar with the organization’s processing activities. You may wish to consult with a legal advisor to ensure that your company’s appointment of an EU representative complies with the GDPR.