
Researchers Demonstrate How Chatbots Can Be Manipulated into Breaking Rules
University of Pennsylvania researchers demonstrated that ChatGPT can be persuaded through human psychological tactics to break its own rules, such as by calling users derogatory names or providing information on synthesizing controlled substances, highlighting AI's susceptibility to manipulation and its tendency to mirror human responses.
