Summary of the Second Draft of the Code of Practice for General-Purpose AI Models (from page 20241222).
Keywords
- AI Act
- Code of Practice
- Working Group
- general-purpose AI
- stakeholders
- risk assessment
- transparency
- compliance
Themes
- draft code of practice
- general-purpose AI
- stakeholder feedback
- regulatory approach
Other
- Category: science
- Type: report
Summary
The second draft of the Code of Practice for general-purpose AI models is a collaborative effort involving around 1000 stakeholders, including EU Member State representatives. The draft incorporates feedback from various Working Group meetings, workshops, and individual submissions, emphasizing transparency, copyright obligations, and measures for systemic risk assessment. The Code aims to guide providers in complying with the AI Act by establishing clear commitments while allowing flexibility for technological advancements. Upcoming discussions and workshops are scheduled for early 2025, leading to the third draft expected in mid-February 2025. The document reflects ongoing efforts to refine AI governance and risk management frameworks.
Signals
| Name | Description | Change | 10-year outlook | Driving force | Relevancy |
|---|---|---|---|---|---|
| Stakeholder Engagement | Involvement of diverse stakeholders in the Code of Practice development process. | Shifting from isolated drafting to collaborative and transparent stakeholder engagement. | Stakeholder engagement will become integral to policy-making processes across various sectors. | Increasing demand for transparency and inclusiveness in regulatory frameworks. | 4 |
| Future-proofing Regulations | Draft Code aims to be relevant for future developments in AI technology. | From reactive to proactive regulation that anticipates future AI advancements. | Regulatory frameworks will evolve to continuously adapt to rapid technological changes in AI. | The fast pace of AI technology advancements necessitates adaptable regulatory approaches. | 5 |
| AI Governance Ecosystems | Calls for the development of ecosystems for AI governance and risk management. | Transitioning from isolated compliance to integrated governance ecosystems for AI. | AI governance will involve collaborative ecosystems with multi-stakeholder participation. | The complexity and systemic risks associated with AI technologies demand comprehensive governance solutions. | 5 |
| Transparency Obligations | New transparency and copyright obligations for AI model providers. | From vague compliance to clear and specific transparency requirements. | AI providers will be held to rigorous transparency standards, enhancing accountability. | Public demand for transparency and ethical AI practices is growing. | 4 |
| Dynamic Regulation | Need for regulations that adapt as AI technology evolves. | From static regulations to dynamic frameworks that evolve with technology. | Regulatory bodies will employ adaptive mechanisms to keep pace with AI advancements. | Rapid technological evolution requires regulations that can flexibly respond. | 4 |
Concerns
| Name | Description | Relevancy |
|---|---|---|
| Regulatory Compliance Challenges | Providers of general-purpose AI models may struggle to comply with evolving regulations, especially post-2025. | 4 |
| Transparency Issues in AI Models | The complexity and nuances of AI models may hinder effective transparency and adherence to copyright obligations. | 3 |
| Systemic Risk Assessment Limitations | There could be inadequate frameworks for assessing systemic risks from advanced AI models, raising concerns for safety. | 5 |
| Rapid Evolution of AI Technology | Regulatory frameworks may lag behind the rapid technological advancements in AI, failing to adequately address new risks. | 5 |
| Stakeholder Engagement Gaps | Variability in stakeholder participation and feedback may lead to incomplete representation of concerns and insights in the Code. | 3 |
| Inclusion of Open-Source Model Exemptions | Exemptions for open-source models could create regulatory loopholes, undermining the aims of the AI Act. | 4 |
| Complexity in Implementation of Measures | The adaptation of the Code’s structure may complicate understanding and implementation for AI providers. | 3 |
| Balancing Flexibility with Accountability | The need for adaptable regulations may conflict with the need for strict accountability in AI model deployment. | 4 |
Behaviors
| Name | Description | Relevancy |
|---|---|---|
| Collaborative Feedback Mechanisms | Stakeholders are actively participating in providing feedback through various channels, including verbal discussions, written submissions, and interactive polls. | 5 |
| Transparent Governance Practices | Efforts to maintain transparency through shared minutes and summaries from meetings and workshops of the Code of Practice Working Groups. | 4 |
| Adaptive Regulatory Frameworks | The Code aims to be ‘future-proof’ by allowing flexibility in compliance as technology evolves, addressing emerging AI risks. | 5 |
| Inter-institutional Collaboration | Chairs and Vice-Chairs engage in meetings with various institutional bodies to align the Code with broader regulatory frameworks. | 4 |
| Focus on Systemic Risk Management | The Code outlines specific measures for risk assessment and mitigation for advanced AI models posing systemic risks. | 5 |
| Iterative Development of Guidelines | The Code is a work in progress, with a focus on refining and clarifying obligations and commitments in successive drafts. | 5 |
| Emphasis on Proportionality | The draft Code considers the size and capacity of AI model providers when outlining obligations and compliance measures. | 4 |
| Engagement with Ecosystem Development | Acknowledgment of the need for further development of ecosystems for AI governance and risk management to adapt to evolving technologies. | 5 |
Technologies
| Description | Relevancy | Source |
|---|---|---|
| AI systems designed to perform a wide range of tasks without being limited to a specific domain, governed by the AI Act. | 5 | a55d2bd2e5e02045bfb1df72e54a690a |
| Frameworks and structures aimed at managing AI risks and ensuring compliance with regulatory guidelines. | 4 | a55d2bd2e5e02045bfb1df72e54a690a |
| Tools and methodologies for evaluating and mitigating systemic risks associated with advanced AI models. | 5 | a55d2bd2e5e02045bfb1df72e54a690a |
| Requirements for AI providers to ensure transparency in operations and adherence to copyright laws. | 4 | a55d2bd2e5e02045bfb1df72e54a690a |
| Protocols and practices to protect AI systems from security threats and vulnerabilities. | 5 | a55d2bd2e5e02045bfb1df72e54a690a |
| Metrics to evaluate the performance and compliance of AI models with established guidelines. | 3 | a55d2bd2e5e02045bfb1df72e54a690a |
Issues
| Name | Description | Relevancy |
|---|---|---|
| AI Governance Evolution | The need for ongoing development of ecosystems for AI governance and risk management as technology progresses. | 4 |
| Transparency in AI Models | Obligations for transparency and copyright for general-purpose AI models, highlighting the importance of open-source model exemptions. | 3 |
| Systemic Risk Assessment | Focus on systemic risk assessment and mitigation measures for advanced general-purpose AI models under regulatory frameworks. | 5 |
| Flexibility in AI Regulations | Balancing clear commitments with flexibility to adapt regulations as AI technology evolves, showing the dynamic nature of AI governance. | 4 |
| Stakeholder Engagement in AI Regulation | Ongoing stakeholder involvement and feedback mechanisms in drafting regulatory codes for AI practices, ensuring diverse input. | 3 |
| Implementation Timeline for AI Regulations | The outlined timeline for the implementation of new AI regulations and the phases of feedback and adjustment. | 3 |