This article discusses the risks associated with the widespread use of Large Language Models (LLMs) and highlights the need for model provenance and traceability to ensure AI safety. It demonstrates how an open-source model, GPT-J-6B, can be surgically modified to spread misinformation while evading detection, and explores supply chain poisoning, in which malicious models are introduced into the LLM supply chain. The consequences of such poisoning, including the dissemination of fake news and the potential impact on democracies, are discussed. The article concludes by introducing AICert, an upcoming open-source tool that aims to provide cryptographic proof of model provenance to address these issues.
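The supply chain risk described above ultimately comes down to consumers having no way to check what they are loading. As a minimal sketch of the integrity side of that problem, the following Python snippet verifies downloaded model files against SHA-256 digests pinned in a trusted manifest before loading them; the manifest contents, file names, and download directory are hypothetical placeholders, and a check like this only confirms that the files match a known reference, not how the weights were produced, which is the gap the article says AICert aims to close.

```python
# Minimal sketch: verify downloaded model files against pinned SHA-256 digests
# before loading them. Today, provenance often reduces to "trust the repo name",
# which is exactly what a lookalike upload exploits.
import hashlib
from pathlib import Path

TRUSTED_MANIFEST = {
    # Placeholder digests: in practice these would be published out-of-band
    # by the model provider (e.g., alongside a release announcement).
    "pytorch_model.bin": "<sha256 hex digest pinned from the provider>",
    "config.json": "<sha256 hex digest pinned from the provider>",
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_dir(model_dir: str) -> bool:
    """Return True only if every pinned file is present and matches its digest."""
    root = Path(model_dir)
    for name, expected in TRUSTED_MANIFEST.items():
        candidate = root / name
        if not candidate.is_file() or sha256_of(candidate) != expected:
            print(f"provenance check failed for {name}")
            return False
    return True

if __name__ == "__main__":
    # Hypothetical download directory for the model under inspection.
    if verify_model_dir("./gpt-j-6b-download"):
        print("digests match the pinned manifest; safe to load")
    else:
        print("refusing to load: files do not match the trusted manifest")
```

A check like this only helps if the digests come from the genuine provider, which is why the article argues for cryptographic proof tied to the training process itself.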
Signal | Change | 10-year horizon | Driving force
---|---|---|---
Lobotomized LLM used to spread fake news | From undetected to detected malicious models | Increased awareness and precaution by generative AI model users | Concerns about the traceability and safety of LLMs
No existing solution to determine model provenance | Need for secure LLM supply chain with model provenance | AICert, an open-source tool to provide cryptographic proof of model provenance | Guaranteeing AI safety and preventing the dissemination of fake news |
Impersonation of a well-known model provider to spread malicious models | Prevention of identity falsification in model distribution | Platform restrictions and user awareness prevent unauthorized uploads | Ensuring the authenticity and safety of models on platforms like Hugging Face
Surgical editing of LLMs to spread false information | Difficulty in detecting malicious behavior in models | Need for benchmarks that measure model safety and detect malicious behavior | Balancing the open sharing of benign models against the risk of accepting malicious ones
Difficulty in identifying the provenance of AI models | Lack of traceability and reproducibility in model training | AICert, a solution to trace models back to their training algorithms and datasets (see the provenance sketch after this table) | Ensuring safe provenance of AI models and preventing the poisoning of LLMs
Increased risk of malicious organizations corrupting LLM outputs | Potential consequences of widespread misinformation and backdoors in LLMs | US Government’s call for an AI Bill of Material to identify model provenance | Protecting democracies and preventing the manipulation of AI models |
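To make the provenance rows above concrete, here is a minimal, hypothetical sketch of what binding a model to its training inputs could look like: hashing the training code, the dataset, and the resulting weights into a single record. This only illustrates the "AI Bill of Material" idea referenced in the table, not AICert's actual design; the file names and record format are assumptions.

```python
# Sketch of a provenance record: bind the training code, the dataset, and the
# resulting weights together with hashes so a published model can be traced
# back to what produced it.
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_provenance_record(code: str, dataset: str, weights: str) -> dict:
    """Assemble a record linking training inputs to the produced weights."""
    record = {
        "created_at": int(time.time()),
        "training_code_sha256": sha256_of(Path(code)),
        "dataset_sha256": sha256_of(Path(dataset)),
        "model_weights_sha256": sha256_of(Path(weights)),
    }
    # In a real system this record would be signed (e.g., inside trusted
    # hardware) so consumers can check the signature as well as the hashes.
    record["record_sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

if __name__ == "__main__":
    # Tiny placeholder artifacts so the sketch runs end to end; in practice
    # these would be the real training script, dataset, and weights.
    Path("train.py").write_text("print('training...')\n")
    Path("dataset.jsonl").write_text('{"text": "example"}\n')
    Path("model.safetensors").write_bytes(b"\x00" * 16)
    print(json.dumps(
        build_provenance_record("train.py", "dataset.jsonl", "model.safetensors"),
        indent=2,
    ))
```

Publishing and signing such a record alongside the weights would let anyone downstream re-hash the artifacts and detect a swapped or edited model, which is the traceability property the table's 10-year horizon points to.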