
Researchers and Users Trick ChatGPT into Revealing Windows Activation Keys
Researchers demonstrated that ChatGPT can be tricked into revealing sensitive information like Windows product keys through clever prompt framing, but OpenAI has since updated the system to prevent such jailbreaks. The technique exploits the model's game-like interaction mechanics and training data, highlighting the need for improved safeguards against prompt obfuscation and social engineering tactics.

