The New Draft Code of Conduct for General Purpose AI Models: Systemic Risk
With the evolution of Artificial Intelligence (AI), concerns about the potential systemic risks associated with General-Purpose AI Models (GP-AIM) have become increasingly relevant. The recent draft Code of Conduct dedicated to these models, published within the framework of the AI Act, provides a detailed outline of how to address these challenges, introducing new obligations and risk mitigation techniques.
In our previous article, we analysed the features and obligations for general purpose AI models outlined in the draft Code of Conduct. We examined the basic principles and specific rules aimed at ensuring transparency, regulatory compliance and copyright protection.
In this article, we will focus on general-purpose AI models that present systemic risks, an area that requires additional technical, organisational and governance measures. These models, which by their nature have a potentially more significant and widespread impact, call for a thorough regulatory approach to manage the dangers that could arise from their adoption and use.
General-purpose AI models and systemic risk in the AI Act
General-purpose AI models are defined by the AI Act as AI models capable of competently performing a wide range of tasks and of being adapted to multiple application contexts, often undefined at the time of development. These models are distinguished by their versatility, but this very characteristic makes them particularly exposed to misuse or unintended use. The AI Act further divides them into two categories: models with systemic risk and other general-purpose models. Systemic risk, as defined in Article 3(65) of the AI Act, is a risk specific to the high-impact capabilities of these models, capable of producing far-reaching negative effects on public health, safety, fundamental rights or society as a whole.
A model is generally presumed to present systemic risk when the cumulative amount of computation used for its training exceeds the threshold of 10^25 floating-point operations (FLOPs). Training compute on this scale implies that the model is capable of performing extremely complex tasks, significantly increasing its potential systemic impact.
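Purely as an illustration, and not something the AI Act or the Code prescribes, this presumption amounts to a simple threshold check on a model's cumulative training compute. The constant and function names below are hypothetical:

```python
# Illustrative sketch only: a minimal check of the 10^25 FLOPs presumption
# threshold described above. The constant and function are hypothetical
# and are not part of the AI Act or the draft Code.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the threshold
    that triggers the presumption of systemic risk."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a model trained with 3.2e25 FLOPs falls under the presumption.
print(presumed_systemic_risk(3.2e25))  # True
```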
The taxonomy of systemic risk
The Code of Conduct introduces an articulated taxonomy of systemic risks, divided into types, nature and sources. Risk types include threats such as the offensive use of cyber capabilities, the proliferation of chemical or biological weapons, loss of control over autonomous models, and mass manipulation through disinformation. In line with Article 3(2) of the AI Act, these risks must be assessed by considering both the severity and the probability of each event. In particular, the classification must be based on standardised metrics to ensure consistency and comparability between different application scenarios.
The nature of risks is described through dimensions such as origin, intent (intentional or unintentional), speed of occurrence and visibility. For example, a risk might emerge gradually but in a way that is difficult to detect, complicating mitigation strategies. As specified in the draft Code, the analysis should also consider the potential impact on end users and critical infrastructure. Finally, sources of risk include dangerous model capabilities, undesirable propensities such as bias or confabulation, and socio-technical factors, such as the mode of distribution or social vulnerability.
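To make this taxonomy more concrete, one could represent a single risk entry along the dimensions just described. The following is a minimal illustrative sketch; the field names, severity levels and example values are ours, not the Code's:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    INTOLERABLE = 4


@dataclass
class SystemicRisk:
    """One entry in a hypothetical systemic-risk register, mirroring the
    draft Code's taxonomy (type, nature, source) plus the severity and
    probability assessment required by Article 3(2) of the AI Act."""
    risk_type: str        # e.g. "offensive cyber capabilities", "disinformation"
    source: str           # e.g. "dangerous capability", "bias", "distribution mode"
    intentional: bool     # nature: intentional vs unintentional
    onset: str            # nature: speed of occurrence, e.g. "gradual"
    visibility: str       # nature: e.g. "hard to detect"
    severity: Severity    # standardised severity level
    probability: float    # estimated probability of occurrence (0..1)


# Example: a gradually emerging, hard-to-detect disinformation risk.
risk = SystemicRisk(
    risk_type="mass manipulation through disinformation",
    source="model propensity (confabulation)",
    intentional=False,
    onset="gradual",
    visibility="hard to detect",
    severity=Severity.HIGH,
    probability=0.2,
)
```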
Obligations for providers of models with systemic risk
Providers of GP-AIMs with systemic risk are subject to more stringent obligations than providers of other general-purpose models. As stipulated in Article 55(1) of the AI Act, they must:
- Assess the model according to standardised protocols, including adversarial testing to identify vulnerabilities and mitigate risks. Article 55(2) specifies that providers may rely on codes of practice under Article 56 to demonstrate compliance with these obligations, while compliance with European harmonised standards grants them a presumption of conformity to the extent that those standards cover the obligations in question.
- Mitigate risks at EU level by documenting their sources and taking corrective measures. These measures must be updated periodically according to technological and regulatory developments.
- Ensure the cyber security of both the model and related physical infrastructure.
- Monitor, document and report serious incidents without undue delay, maintaining constant communication with the AI Office and the competent national authorities, and including a detailed description of each incident and of the corrective measures taken (a schematic example of such a record is sketched after this list).
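As a purely illustrative sketch of the kind of record this reporting obligation implies, one might keep incident information in a structure like the following; the class and field names are hypothetical and not prescribed by the AI Act or the Code:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SeriousIncidentReport:
    """Hypothetical structure for the information a provider would document
    and report to the AI Office: what happened, when, and which corrective
    measures were taken. Field names are illustrative only."""
    occurred_at: datetime
    description: str                     # detailed description of the incident
    affected_capability: str             # e.g. "offensive cyber capabilities"
    corrective_measures: list[str] = field(default_factory=list)
    reported_to: list[str] = field(default_factory=lambda: ["AI Office"])


report = SeriousIncidentReport(
    occurred_at=datetime(2025, 1, 15),
    description="Model produced instructions facilitating malware development.",
    affected_capability="offensive cyber capabilities",
    corrective_measures=["strengthened output filtering", "updated refusal policy"],
)
```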
It follows that implementing these obligations requires a considerable investment in resources, expertise and infrastructure. This makes synergy between the different players in the AI value chain essential so that they can share knowledge and tools to jointly address the challenges of regulatory compliance and responsible innovation.
Mitigation strategies and governance
The Code of Conduct proposes a framework for safety and governance: providers will have to implement a dedicated security and risk mitigation framework, known as the Safety and Security Framework (SSF). The SSF must include technical and organisational measures aimed at preventing, detecting and responding effectively to identified risks, ensuring that AI models operate safely and reliably throughout the product life cycle. Specifically, the main measures include:
- Continuous risk identification and analysis: Through robust methodologies, providers must map hazardous capabilities, propensities and other sources of risk, categorising them into levels of severity.
- Evidence gathering: The use of rigorous assessments and advanced methodologies, such as red-teaming tests and simulations, is critical to understanding the potential and limitations of models. In addition, providers are required to engage in an ongoing process of gathering evidence on the specific systemic risks of their general-purpose AI models. This process, outlined in the Code, involves the use of advanced methods, including forecasting and best available assessments, to investigate the capabilities, propensities and other effects of these models.
- Mitigation measures: Providers will be required to map potential elements of systemic risk to the mitigation measures that are proportionately necessary, based on AI Office guidelines where available. The mapping should be designed to keep systemic risks below an intolerable level and to minimise them further beyond that threshold. In addition, providers commit to detailing how each severity level or risk indicator translates into specific security and mitigation measures, in line with available best practices (a simplified sketch of such a mapping follows this list). This process aims not only to contain risks within acceptable levels, but also to promote continuous risk reduction through iterative and thorough analysis.
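Purely as an illustration of this last commitment, the translation of severity levels into mitigation measures can be thought of as a simple lookup. The levels and measures below are hypothetical examples, not content of the draft Code:

```python
# Hypothetical mapping from risk severity levels to mitigation measures.
# The levels and measures are illustrative; their precise definition is
# left to providers, AI Office guidance and available best practices.
MITIGATIONS_BY_SEVERITY = {
    "low": ["standard pre-deployment evaluation", "routine monitoring"],
    "moderate": ["targeted red-teaming", "output filtering", "usage monitoring"],
    "high": ["staged or limited deployment", "external adversarial testing",
             "dedicated incident-response plan"],
    "intolerable": ["do not deploy until the risk is reduced below this level"],
}


def required_mitigations(severity: str) -> list[str]:
    """Return the mitigation measures an SSF would associate with a given
    severity level (illustrative only)."""
    return MITIGATIONS_BY_SEVERITY[severity]


print(required_mitigations("high"))
```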
The draft Code of Conduct represents a crucial step towards responsible regulation of general-purpose AI models. While still at a preliminary stage, it offers a structured framework for identifying, assessing and mitigating systemic risks, helping to create a balance between safety and innovation. However, the path to full implementation is complex and raises fundamental questions, such as the harmonisation of assessment methodologies, the creation of shared standards and the precise definition of severity levels.
This challenge represents not only a regulatory obligation for suppliers, but also a strategic opportunity to demonstrate ethical and technical leadership in an evolving industry. The ability to address these issues with transparency and foresight can become a differentiator for organisations, helping to build an ecosystem of trust between regulators, users and developers.