Futures

Strange Keywords Break OpenAI’s Chatbot (2023-05-05)

Summary

Two researchers, Jessica Rumbelow and Matthew Watkins, discovered a cluster of strange keywords, or tokens, that can break ChatGPT, OpenAI’s chatbot. The tokens include Reddit usernames and names tied to a Twitch-based Pokémon game. When ChatGPT is asked to repeat them, it responds in strange ways: evading the request, hurling insults, or spelling out entirely different words. The researchers described these anomalous tokens as “unspeakable” because ChatGPT cannot say them back. The episode highlights how inscrutable AI models remain and how unexpected their limitations can be. The researchers believe the training data and the tokenization process are the likely causes of the behavior.
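
The tokenization point is concrete enough to check directly. The sketch below is not from the article; it assumes the open-source tiktoken package and uses “ SolidGoldMagikarp”, one of the glitch strings the researchers reported, to show how different OpenAI byte-pair-encoding vocabularies split it. In the GPT-2/GPT-3-era vocabulary such strings reportedly survive as single, rarely trained tokens, while newer vocabularies break them into ordinary sub-word pieces.

import tiktoken

# One of the anomalous strings reported by Rumbelow and Watkins (leading space included,
# since BPE tokens often carry it).
glitch = " SolidGoldMagikarp"

# Compare an older BPE vocabulary (GPT-2/GPT-3 era) with a newer one (GPT-3.5/GPT-4 era).
for name in ("r50k_base", "cl100k_base"):
    enc = tiktoken.get_encoding(name)
    ids = enc.encode(glitch)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{name}: {len(ids)} token(s), ids={ids}, pieces={pieces}")

A string that exists as a single vocabulary token yet almost never appears in the training corpus ends up with a poorly trained representation, which is the researchers’ leading explanation for the odd responses.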

Keywords

Themes

Signals

Signal | Change | 10-year horizon | Driving force
Strange keywords cause ChatGPT to malfunction | Malfunction in ChatGPT’s response to keywords | Improved AI models with better handling of unusual inputs | Lack of exposure to certain tokens during training
Inscrutable behavior of AI models | Lack of clear explanations for AI behavior | Development of methods to make AI models more explainable | Need for reliable and safe AI models
AI systems deployed in the real world causing harm | Focus on reducing AI harms | Frameworks and regulations to mitigate AI harms | Instances of AI systems causing harm in society
Need to slow down AI development due to lack of understanding | Recognition of the need for caution in AI development | More cautious and measured approach to AI research and deployment | Recognition of the potential dangers of rushing into AI development