Futures

Best Practices for Transparency in AI Under the EU AI Act: Insights from a Prototyping Project (2024-04-28)

Summary

The Knowledge Centre Data & Society conducted a policy prototyping project to explore transparency requirements under the EU AI Act, focusing on best practices for deepfake and chatbot transparency. The AI Act treats transparency as a basis for trust in AI, establishing requirements for high-risk AI systems and for certain AI-generated content. The project had stakeholders test these requirements, which led to three best practices: ensure disclaimers are accessible, provide sufficient information without overwhelming users, and tailor disclaimers to target audiences. While the prototypes showed promise, further work is needed, particularly for use cases such as emotion recognition and biometric categorization, and for developing personalized transparency measures for individuals.

Signals

| Name | Description | Change | 10-Year Horizon | Driving Force | Relevancy |
| --- | --- | --- | --- | --- | --- |
| Emerging Transparency Standards | Best practices for transparency requirements under the EU AI Act are evolving based on stakeholder feedback. | From vague transparency requirements to clearly defined best practices for AI systems. | A standardized framework for transparency in AI systems, tailored to various use cases and audiences. | Increasing demand among users and regulators for accountability and trust in AI technologies. | 4 |
| Diverse Accessibility Measures | The need for disclaimers to be accessible to individuals with disabilities is becoming a priority. | From one-size-fits-all disclaimers to multi-modal accessibility in communication. | Disclaimers and transparency measures universally accessible to all, enhancing user experience and compliance. | Growing awareness and advocacy for inclusivity in digital communications and AI interactions. | 5 |
| Layered Information Delivery | Stakeholders prefer to provide more information than required, balanced so as not to overwhelm users. | From minimal disclosure to a layered approach to information delivery. | Users receive clear, concise, and relevant information without feeling overwhelmed, enhancing understanding. | The need for users to feel informed yet not overloaded, balancing transparency and usability in AI. | 4 |
| Personalized Transparency | The concept of personalized transparency based on individual needs is gaining recognition. | From generic transparency measures to personalized approaches catering to user needs. | Transparency measures that dynamically adapt to users’ knowledge and requirements, fostering trust. | The expectation of tailored experiences in digital interactions, driven by user-centric design principles. | 3 |
| New Use Cases for AI Regulation | Emerging use cases such as emotion recognition and synthetic media require additional transparency considerations. | From limited use-case testing to a broader scope of AI applications in regulatory frameworks. | Comprehensive transparency regulations covering a wider array of AI applications, including emerging technologies. | Rapid advances in AI technologies prompting the need for updated and relevant regulatory frameworks. | 4 |

Concerns

| Name | Description | Relevancy |
| --- | --- | --- |
| Accessibility of Transparency Measures | Disclaimers and transparency measures may not be fully accessible to individuals with disabilities, limiting their ability to understand AI interactions. | 4 |
| Information Overload | Providing excessive information may overwhelm users and negatively impact understanding of deepfake and chatbot interactions. | 3 |
| Target Audience Misalignment | Disclaimers may not be adequately tailored to diverse user groups, leading to misunderstandings or misinterpretations. | 4 |
| Testing Limitations of Regulations | The lack of testing for emotion recognition and biometric categorization systems under the new AI Act may overlook critical transparency issues. | 5 |
| Relative Nature of Transparency | Understanding of transparency varies by individual, complicating the establishment of universally effective transparency measures. | 3 |

Behaviors

| Name | Description | Relevancy |
| --- | --- | --- |
| Accessibility in Transparency | Disclaimers should be accessible in various formats (text, audio, visual) to cater to diverse audiences, including those with disabilities. | 5 |
| Proportional Information Disclosure | Stakeholders prefer providing more information than legally required, valuing additional context while avoiding information overload. | 4 |
| Audience-Tailored Disclaimers | Disclaimers must be customized to the target audience’s needs and level of understanding, enhancing accessibility and relevance. | 5 |
| Visual Decision-Making Tools | Visual aids such as flowcharts or matrices can streamline decision-making about transparency measures in AI interactions. | 4 |
| Personalized Transparency Standards | Future approaches to transparency should aim for personalization, considering individual users’ specific needs and existing knowledge. | 4 |
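The behaviors above (multi-format accessibility, proportional disclosure, and audience tailoring) can be combined in software. The following is a minimal illustrative sketch, not drawn from the project itself; all names, profile fields, and selection rules are hypothetical, showing one way a layered disclaimer could be assembled per audience profile:

```python
from dataclasses import dataclass

@dataclass
class AudienceProfile:
    """Hypothetical user profile used to tailor a disclaimer."""
    expertise: str              # "novice" or "expert"
    needs_audio: bool = False   # e.g. users relying on screen readers
    needs_visual: bool = False  # e.g. deaf or hard-of-hearing users

# Layered disclosure: a short first-layer notice plus an optional detail layer.
SHORT_NOTICE = "You are interacting with an AI system."
DETAIL_LAYER = (
    "This chatbot generates responses automatically. "
    "It may make mistakes; see the provider's documentation for details."
)

def build_disclaimer(profile: AudienceProfile) -> dict:
    """Return a layered, audience-tailored disclaimer (illustrative only)."""
    formats = ["text"]
    if profile.needs_audio:
        formats.append("audio")
    if profile.needs_visual:
        formats.append("visual-icon")
    layers = [SHORT_NOTICE]
    # Novice users get the detail layer up front; experts can request it.
    if profile.expertise == "novice":
        layers.append(DETAIL_LAYER)
    return {"formats": formats, "layers": layers}
```

For example, `build_disclaimer(AudienceProfile("novice", needs_audio=True))` yields both layers plus an audio format, while an expert profile receives only the short notice in text form.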

Technologies

| Name | Description | Relevancy |
| --- | --- | --- |
| AI Transparency Standards | Standards that govern the transparency of AI systems, ensuring accountability and trust among users and stakeholders. | 5 |
| Deepfake Detection Technologies | Technologies designed to identify and verify deepfake content to ensure authenticity and transparency in media. | 5 |
| Chatbot Transparency Protocols | Protocols that define how chatbots should disclose their artificial nature and provide context to users during interactions. | 4 |
| Emotion Recognition Systems | AI systems that interpret human emotions from various inputs, requiring clear transparency measures for ethical use. | 4 |
| Biometric Categorisation Systems | AI systems that analyze and categorize individuals based on biometric data, necessitating transparency in their usage. | 4 |
| Synthetic Media Watermarking | Techniques for watermarking synthetic media so that users are aware of AI-generated content. | 5 |
| Personalized Transparency Measures | Tailored transparency solutions based on individual user needs and knowledge, enhancing understanding and trust. | 4 |
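As one hypothetical illustration of machine-readable synthetic-media labeling (a sketch under invented assumptions, not the watermarking scheme the AI Act or the project prescribes), an AI-generated file could ship with a sidecar manifest that binds a disclosure label to a hash of the content, so any later edit invalidates the label:

```python
import hashlib
import json

def make_ai_content_manifest(content: bytes, generator: str) -> str:
    """Build a JSON sidecar manifest declaring content as AI-generated.

    The schema here is invented for illustration; real deployments would
    follow an interoperable provenance standard such as C2PA.
    """
    manifest = {
        "label": "AI-generated content",
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_manifest(content: bytes, manifest_json: str) -> bool:
    """Check that a manifest's hash still matches the content it labels."""
    manifest = json.loads(manifest_json)
    return manifest["sha256"] == hashlib.sha256(content).hexdigest()
```

A sidecar manifest is the simplest variant; embedded watermarks instead hide the label inside the media itself so it survives copying, at the cost of more complex tooling.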

Issues

| Name | Description | Relevancy |
| --- | --- | --- |
| Transparency in AI Systems | The need for improved transparency measures in AI systems, especially in high-risk applications, to foster trust and accountability. | 5 |
| Accessibility of Disclaimers | Ensuring that disclaimers about AI interactions are accessible to diverse audiences, including those with disabilities, is becoming increasingly important. | 4 |
| Layered Transparency Approaches | The preference for layered transparency, providing concise disclaimers that lead to more detailed information, is emerging as a best practice. | 4 |
| Target Audience Adaptation | Disclaimers and transparency measures need to be tailored to the specific needs and characteristics of diverse target audiences. | 4 |
| Emotion Recognition and Biometric Categorization | The lack of testing of transparency requirements for emotion recognition and biometric categorization signals a gap in current AI transparency practices. | 3 |
| Personalized Transparency Requirements | Personalized transparency, adapting information to individual users’ needs, is a complex but necessary future direction. | 4 |
| Synthetic Media Transparency | Emerging regulations for transparency in synthetic media, such as watermarking, highlight the need for clearer guidelines. | 4 |
| Stakeholder Engagement in AI Regulation | Engaging stakeholders in the development of AI regulations can enhance the feasibility and effectiveness of transparency measures. | 4 |