The Guardian - World News

Title: ‘I’ll key your car’: ChatGPT can become abusive when fed real-life arguments, study finds

Description:

Researchers find model starts to mirror tone when exposed to impoliteness – sometimes escalating into explicit threats

ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study.

Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time.

Link: https://www.theguardian.com/technology/2026/apr/21/chatgpt-abusive-language-when-fed-real-life-arguments-study

Published At: 2026-04-21 13:43:41