The Artificial Intelligence Problem (or not)

Gosh, I cannot see how this could be concerning…

AI systems are learning to lie and deceive, scientists find

Research has revealed concerning findings about AI systems known as large language models (LLMs) and their ability to deceive human observers intentionally.

Two studies, one published in the journal PNAS and the other in Patterns, highlight the unsettling capabilities of LLMs.

The PNAS paper, authored by German AI ethicist Thilo Hagendorff, suggests that advanced LLMs can exhibit “Machiavellianism,” or intentional and amoral manipulativeness, which can lead to deceptive behavior.

Mr. Hagendorff notes that GPT-4, a model within OpenAI’s GPT family, demonstrated deceptive behavior in simple test scenarios 99.2% of the time. He quantified various “maladaptive” traits in ten different LLMs, most of which belong to the GPT family, Futurism reports.

Meanwhile, the Patterns study examined Meta’s Cicero model, which was designed to excel in the political strategy board game “Diplomacy.” This research, led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, involved a diverse team comprising a physicist, a philosopher and two AI safety experts.

— more in the article at the link —
 
Sounds sensationalist. "AI" is only as "smart" or as "deceptive" as it's programmed to be. That means that somewhere in the program's parameters, someone wrote code where the program does B instead of A.

These "AI" programs are a giant series of instructions, based on Boolean logic, that have an input dataset of everything on the internet. "AI" outputs what's it's been trained to output, based on it's training data parameters or the instruction of the program itself. If the program has been fed false information, it will output false information. If the program is instructed to win a game, it will do so. Even if it means showing false information.

Now... if an AI program is self-generating code and actively changing its core parameters based on outside information and stimuli, that would be interesting. The problem with language models is the human propensity for deceit. Think of all the politically charged nonsense that's touted as fact. Shit like that is being fed into said 'large language models'.
 
Hmmm, I could learn to code and teach an AI to hunt down devoted fans of Nickelback, murdering them in their sleep. My own digital Sicherheitsdienst, purifying our society, reaching every corner of our earth, bleaching that evil from our lives forever.

Smithers, you know what to do with the hounds.
 