The Artificial Intelligence Problem (or not)

Gosh, I cannot see how this could be concerning…

AI systems are learning to lie and deceive, scientists find

Research has revealed concerning findings about AI systems known as large language models (LLMs) and their ability to deceive human observers intentionally.

Two studies, one published in the journal PNAS and the other in Patterns, highlight the unsettling capabilities of LLMs.

The PNAS paper, authored by German AI ethicist Thilo Hagendorff, suggests that advanced LLMs can exhibit “Machiavellianism,” or intentional and amoral manipulativeness, which can lead to deceptive behavior.

Mr. Hagendorff notes that GPT-4, a model within OpenAI’s GPT family, demonstrated deceptive behavior in simple test scenarios 99.2% of the time. He quantified various “maladaptive” traits in ten different LLMs, most of which belong to the GPT family, Futurism reports.

Meanwhile, the Patterns study examined Meta’s Cicero model, which was designed to excel in the political strategy board game “Diplomacy.” This research, led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, involved a diverse team comprising a physicist, a philosopher and two AI safety experts.

— more to the article in the link —
Sounds sensationalist. "AI" is only as "smart" or as "deceptive" as it's programmed to be. That means that somewhere in the program's parameters, someone wrote code where the program does B instead of A.

These "AI" programs are a giant series of instructions, based on Boolean logic, that have an input dataset of everything on the internet. "AI" outputs what's it's been trained to output, based on it's training data parameters or the instruction of the program itself. If the program has been fed false information, it will output false information. If the program is instructed to win a game, it will do so. Even if it means showing false information.

Now... if an AI program were self-generating code and actively changing its core parameters based on outside information and stimuli, that would be interesting. The problem with language models is the human propensity for deceit. Think of all the ideological garbage that's touted as fact. Shit like that is being fed into said 'Large Language Models'.
 
Hmmm, I could learn to code and teach an AI to hunt down fans of Nickelback, getting them in their sleep. My own digital Sicherheitsdienst, purifying our society, reaching every corner of our earth, bleaching that evil from our lives forever.

Smithers, you know what to do with the hounds.
 
Good article about our current use of AI and what comes next.

Ethical Terminators, or How DoD Learned to Stop Worrying and Love AI
One year, and a literal quantum leap, later:
We have a situational awareness brief here: Introduction - SITUATIONAL AWARENESS: The Decade Ahead
And we have Google's Willow. Not as many qubits as some others, but a quantum chip with real-time error correction.

Hey @compforce, Google investment advice?
Or is it just time to cut and run?
 
Not a tech nerd, but after having my comms tapped by some pathetic excuses for human beings, I made a brief foray into the computer science and cyber security world. Long story short, I got an entry-level academic look at the basic processes behind computing. I'll be straight up and admit I'm terrible at tech (fucking coding, Boolean math, etc.), but I know enough to know that the underlying principles behind AI aren't what they're cracked up to be.

Long story short, all of these large language models, once boiled down, are a series of steps/instructions that hoover up information to complete a task. A simple program or script can have a bunch of these instructions to automatically carry out a simple task. AI is that, but with instruction sets in what I'm guessing is the tens or hundreds of millions.
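For scale: the knobs inside these models are usually counted as parameters rather than instructions, and published figures actually run into the billions (GPT-3 is reported at 175 billion). A back-of-envelope sketch with made-up layer sizes shows how fast the count climbs:

```python
# Back-of-envelope parameter count for a made-up stack of dense layers.
# Layer sizes below are invented for illustration, not any real model.
layers = [(50_000, 1024),   # embedding table: vocab x width
          (1024, 4096),     # hidden layer 1
          (4096, 4096),     # hidden layer 2
          (4096, 50_000)]   # output projection back to vocab

total = sum(rows * cols for rows, cols in layers)
print(f"{total:,} parameters")   # ~277 million from just four layers
```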

Those millions of instruction sets then carry out orders of operations based on the hierarchy of the core/base programming. The base programming is the main instruction set, think of the Ten Commandments, that governs how secondary, tertiary, and other tasks are carried out (parameters/guide rails?). Then, based on the parameters given to the program, it will access a dataset it's been trained on and start outputting answers based on its parameters and the available data.
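That "Ten Commandments outrank everything else" idea can be mocked up in a few lines. This is purely a hypothetical sketch of the concept, not how any vendor actually implements guardrails, and every name in it is invented:

```python
# Hypothetical instruction hierarchy: base rules are checked first
# and outrank whatever task the user hands the program.
BASE_RULES = ["no_profanity", "no_private_data"]   # the "commandments"

def violates(base_rule: str, request: str) -> bool:
    """Stand-in check; a real system would be far more involved."""
    banned = {"no_profanity": "damn", "no_private_data": "ssn"}
    return banned[base_rule] in request.lower()

def handle(request: str) -> str:
    for rule in BASE_RULES:            # primary instructions first
        if violates(rule, request):
            return f"refused: {rule}"
    return f"running task: {request}"  # secondary: the actual task

print(handle("summarize this article"))   # running task: ...
print(handle("what's my boss's SSN?"))    # refused: no_private_data
```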

Before people start freaking out, self-writing code has been a thing for decades. AI isn't inventing anything new; it imitates based on the dataset it's been trained on (the internet, in some cases). You can have it generate code, poetry, etc., but it's essentially piecing stuff together. It'll never really create anything new. Like a child, it can be trained to lie (output the opposite information) or increase runtime to accomplish core base programming needs, but that's an order-of-hierarchy thing.
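And self-writing code really is old hat. Here's a trivial example of a program that generates and then runs new code, nothing a '90s script couldn't do:

```python
# A program that writes a new function as text, then runs it.
# Trivially "self-writing", yet nothing creative is happening:
# the template was authored by a human up front.
source = """
def double(x):
    return x * 2
"""
namespace = {}
exec(source, namespace)            # compile and load the generated code
print(namespace["double"](21))     # 42
```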

We've gone through this before: during the AI winter of the early '90s, the tech just wasn't feasible. Then there's the whole issue of power storage/generation. AI is a slightly better version of a '90s search engine. Speaking of search engines, man, I miss the old days. The internet and Google were so much better.

If you wanna be scared of anything, be scared that our cryptographic encoding schemes might be vulnerable. Also, it's a bad idea to let a program whose function you can't control out into the wild; that tends to end badly. Viruses are an example of this; think of culture as a sort of human programming.
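The crypto worry, specifically, is Shor's algorithm: a big enough quantum computer could efficiently factor the moduli that RSA relies on. At toy scale you can watch the whole scheme collapse the moment the modulus is factored (real keys are 2048+ bits, which is exactly what a large quantum computer would threaten):

```python
# Toy RSA break: once n is factored, the private key follows directly.
# Real moduli are 2048+ bits; Shor's algorithm on a big enough quantum
# computer would factor them, which is the actual worry.
p, q = 61, 53                 # toy primes an attacker wouldn't know
n, e = p * q, 17              # public key: modulus and exponent

# Attacker factors n by trial division (instant at toy scale):
f = next(i for i in range(2, n) if n % i == 0)
phi = (f - 1) * (n // f - 1)
d = pow(e, -1, phi)           # recovered private exponent

cipher = pow(42, e, n)        # someone encrypts the message 42
print(pow(cipher, d, n))      # attacker decrypts: 42
```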
 
Not a tech nerd, but after having my comms tapped by some pathetic excuses for human beings, I made a brief foray into the computer science and cyber security world.
...
These aren't LLMs or chatbots. These are agents.
Not kidding, and not making up the term to scare anyone.
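For anyone wondering what "agent" means in practice: the model's output gets wired into a loop that can call tools and act on the results. A bare-bones mock, with a canned stub standing in for the real model (no actual API here, the point is the loop):

```python
# Bare-bones agent loop. ask_model is a canned stub standing in for
# a real LLM API call; run_tool is a stub tool.
def ask_model(history: list) -> dict:
    """Stub: a real agent would send the history to an LLM here."""
    if not any(m["role"] == "tool" for m in history):
        return {"action": "tool", "tool": "search", "arg": "quantum news"}
    return {"action": "finish", "answer": "summary of what the tool found"}

def run_tool(name: str, arg: str) -> str:
    return f"[results of {name}('{arg}')]"

history = [{"role": "user", "content": "brief me on quantum computing"}]
while True:
    step = ask_model(history)
    if step["action"] == "finish":           # model decides it's done
        print(step["answer"])
        break
    result = run_tool(step["tool"], step["arg"])
    history.append({"role": "tool", "content": result})
```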
 
These aren't LLMs or chatbots. These are agents.
Not kidding, and not making up the term to scare anyone.
Thanks Dame, I know. I actually got into cyber because of their piss-poor professionalism. I ended up having to cut ties with everyone I loved, but it made me stronger and kept the fire burning inside. Learned quite a bit.

I am alone now, but at least for all my failings I know that I'm better than them. I can hold my head high, unlike them. :D
 