Following President Biden’s Executive Order on AI, the U.S. Department of Commerce has announced new initiatives aimed at enhancing the safety and trustworthiness of AI systems. The National Institute of Standards and Technology (NIST) released four draft publications to help manage the risks associated with generative AI, including guidance documents and a plan for global AI standards. Additionally, NIST launched a challenge series to differentiate human-created content from AI-generated content. The U.S. Patent and Trademark Office is seeking public comments on the impact of AI on patent evaluations. Secretary of Commerce Gina Raimondo emphasized the department’s commitment to responsible AI innovation and transparency in its efforts.

| name | description | change | 10-year outlook | driving force | relevancy |
| --- | --- | --- | --- | --- | --- |
| AI Risk Management Framework | NIST’s AI Risk Management Framework aims to manage the risks associated with generative AI technologies. | Shift from traditional software risk management to an approach focused on generative AI risks. | Ten years from now, AI systems will have standardized risk management protocols, enhancing safety and trust. | The increasing adoption of generative AI technologies necessitating robust risk management strategies. | 4 |
| Public Feedback on AI Patentability | The USPTO is seeking public comment on how AI impacts patent evaluations. | Transition from traditional patent evaluation processes to those that consider AI contributions. | In a decade, AI-assisted inventions will have clear, defined criteria for patent eligibility. | The rising prevalence of AI in innovation and the need for legal frameworks to accommodate it. | 3 |
| Global AI Standards Development | NIST proposes a plan for developing global standards for AI technologies. | Shift from disparate AI practices to unified global standards guiding AI development. | In ten years, there will be internationally accepted standards ensuring safe AI practices. | Global collaboration in technology development pushing for consistent AI regulations. | 5 |
| Transparency in AI-Generated Content | NIST emphasizes the need for transparency in digital content altered by AI. | Move towards greater accountability in content creation using AI technologies. | Ten years from now, there will be established norms for identifying AI-generated content. | Public demand for clarity and authenticity in digital content amidst AI advancements. | 4 |
| Challenge Series for AI Content Distinction | NIST has launched a challenge series to distinguish human-created from AI-generated content. | Shift from passive content consumption to active verification of content origins. | In a decade, tools will exist to effortlessly identify the source of digital content. | The proliferation of AI-generated content necessitating reliable verification methods. | 4 |

| name | description | relevancy |
| --- | --- | --- |
| Safety and Security Risks of AI | Generative AI introduces unique risks that differ from those of traditional software and may lead to unsafe applications and misuse. | 5 |
| Trustworthiness of AI Systems | Challenges in ensuring AI systems are trustworthy may result in harmful outcomes if users cannot rely on AI-generated content. | 4 |
| Patentability Issues of AI Innovations | The evolving nature of AI raises questions about how AI-assisted inventions are evaluated for patentability, affecting innovation. | 4 |
| Transparency in AI-Generated Content | The lack of clarity in distinguishing human-created from AI-generated content poses risks of misinformation and manipulation. | 5 |
| Global AI Standards Development | Uniform global standards for AI development are needed to address cross-border risks and challenges. | 4 |

| name | description | relevancy |
| --- | --- | --- |
| Regulatory Framework Development | Establishment of guidelines and frameworks to govern the safe development and use of AI technologies, emphasizing safety and trustworthiness. | 5 |
| Public Engagement in AI Governance | Encouraging feedback from stakeholders through requests for public comments on AI’s impact on patent evaluations and other regulations. | 4 |
| Transparency in AI Systems | Development of methods to ensure transparency in AI-generated content, distinguishing it from human-produced content. | 4 |
| Risk Management for Generative AI | Creation of guidance documents specifically addressing the unique risks associated with generative AI technologies. | 5 |
| Global Standardization Efforts | Initiatives aimed at developing international standards for AI technologies to ensure consistent safety and best practices. | 4 |

| name | description | relevancy |
| --- | --- | --- |
| Generative AI | A technology that enables chatbots and text-based image and video creation, with risks distinct from those of traditional software. | 5 |
| AI Risk Management Framework (AI RMF) | A framework designed to manage risks associated with AI technologies, particularly generative AI. | 4 |
| Secure Software Development Framework (SSDF) | A framework aimed at promoting security in software development, relevant to AI systems. | 4 |
| Global AI Standards | Proposed standards to ensure the safe and secure development and implementation of AI technologies worldwide. | 5 |
| AI-assisted Inventions | Inventions that leverage AI technologies, raising questions about patentability under U.S. law. | 4 |

| name | description | relevancy |
| --- | --- | --- |
| AI Safety and Security Standards | The development of new standards and frameworks for ensuring the safety and security of AI technologies. | 5 |
| Distinguishing Human vs. AI Content | The challenge of creating methods that differentiate between human-created and AI-generated content. | 4 |
| AI Patentability Evaluation | The impact of AI on the evaluation processes for determining the patentability of inventions. | 4 |
| Transparency in AI-Generated Content | The need for approaches to promote transparency in digital content altered by AI systems. | 4 |
| Global AI Standards Development | The proposal for creating global standards for AI technologies to ensure consistent safety and security measures. | 5 |
| Risks of Generative AI | The unique risks associated with generative AI that differ from those of traditional software. | 5 |