Gosh, I cannot see how this could possibly be concerning…
AI systems are learning to lie and deceive, scientists find
Research has revealed that AI systems known as large language models (LLMs) are capable of intentionally deceiving human observers.
Two studies, one published in the journal PNAS and the other in Patterns, highlight the unsettling capabilities of LLMs.
The PNAS paper, authored by German AI ethicist Thilo Hagendorff, suggests that advanced LLMs can exhibit “Machiavellianism,” or intentional and amoral manipulativeness, which can lead to deceptive behavior.
Mr. Hagendorff notes that GPT-4, a model within OpenAI’s GPT family, demonstrated deceptive behavior in simple test scenarios 99.2% of the time. He quantified various “maladaptive” traits in ten different LLMs, most of which belong to the GPT family, Futurism reports.
Meanwhile, the Patterns study examined Meta’s Cicero model, which was designed to excel in the political strategy board game “Diplomacy.” This research, led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, involved a diverse team comprising a physicist, a philosopher and two AI safety experts.
— more in the full article at the link —