The Artificial Intelligence Problem (or not)

Lordy. I caught myself arguing with a bot on Amazon.
Q. (Me) Why does the description on this item say leather but the material is listed as P.U.?
A. (Bot) The item is made from vegan leather which is a type of leather.
Q. (Me) There is no such thing as vegan leather! Are you saying it's made from vegans? Human vegans? Cow vegans?
A. (Bot) ...

Since then, Amazon has fixed its descriptions of items made of vegan or faux leather. Now, if you want a real leather item, you have to enter "genuine leather."

Argue if you must - but I'm going to say the quiet part out loud - a pair of American-made "leather" cowboy boots made out of 100% ethically sourced, fair-trade, organically raised, USDA-certified vegans would be the best pair of boots ever.

BEST.BOOTS.EVER.
 

I've been saying since the very first public models came out that they're going to become useless pretty quickly. As more AI-generated content is created and published to the internet (especially under the guise of a person), AI is going to be subject to the inbreeding effect. AI models are the Tudors of the modern world. They'll have to find a way to definitively distinguish content created by other AI models from content created by people, and then further distinguish the credibility of the information people provide, which will itself bias the responses. Until they can solve this problem, small mistakes will be repeated often, magnified, and eventually taken as the truth by the models.
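
For what it's worth, the research crowd has a name for this inbreeding effect: model collapse. Here's a minimal toy sketch of the mechanism (my illustration, not anyone's actual training pipeline): each "generation" fits a distribution to a small sample drawn from the previous generation's fit. With finite samples, the rare stuff in the tails goes missing, and the distribution slowly narrows.

```python
import random
import statistics

# Toy sketch of "model collapse": each generation is fit only to a
# finite sample drawn from the previous generation's model. Rare
# values (the tails) go missing from small samples, so the fitted
# spread tends to shrink across generations.
mu, sigma = 0.0, 1.0      # "generation 0": the original human-made data
SAMPLES_PER_GEN = 10      # small on purpose; scarcity drives the effect

for gen in range(201):
    draws = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GEN)]
    mu = statistics.mean(draws)      # refit the "model" to its own output
    sigma = statistics.stdev(draws)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f} stdev={sigma:.3f}")
```

Run it a few times: the mean wanders, but the spread drifts toward zero, i.e., later generations "forget" the tails of whatever the original data contained. Real LLM training is vastly more complicated, but the feedback loop is the same shape.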

I'll give you a quick example of that last point. Probably one of the most used sites in IT development is Stack Overflow. Nearly every company I've worked with, whether as a consultant, an employee, or a contractor, has had code cut and pasted from the site. The sad part is that 99.9% of the answers, including the accepted answers, may work, sometimes, but they are far from the technically correct solution. (Reminder: 78% of all statistics are just made up.) Due to the volume of information on the site, AI models assume it is credible and regurgitate the garbage (GIGO). And we wonder why AI-written code rarely actually works...

Ever wonder why AI models can't do math? Because most people suck at math, and AI has to rely on the most statistically relevant answer... which is usually a common mistake. I've given a simple P&L to all of the AIs multiple times, each with a cut-and-pasted question to avoid bias in the wording. I did not get the same response twice. One of the questions was to calculate a well-known ratio. The answers ranged from 0.4 to 117, with a small cluster around 20.4. The correct answer was 5.85. NONE of the AI models got it right, and none of them guessed the same number twice.
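
Which is exactly why arithmetic belongs in code, not in a chat box. A minimal sketch below - the ratio and the figures are hypothetical, since the post doesn't say which ratio or what was on the P&L - but a three-line function gives the same answer on every run, which is more than the chatbots managed.

```python
# Hypothetical example: interest coverage ratio (EBIT / interest
# expense). The choice of ratio and the figures are made up for
# illustration; the point is determinism, not the numbers.

def interest_coverage(ebit: float, interest_expense: float) -> float:
    if interest_expense == 0:
        raise ValueError("interest expense is zero; ratio undefined")
    return ebit / interest_expense

# Same inputs, same answer, every single time.
print(interest_coverage(ebit=500_000, interest_expense=80_000))  # 6.25
```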
 

I feel like it is pretty easy to tell what is written by AI these days. The videos also make it apparent. Where it does get interesting is with filters on real videos.
 
So…let me get this straight…instead of AI teaching itself to get smarter so it can enslave humanity, it’s sucking up internet drivel and getting stupider. And since porn is the most prevalent content on the internet it means AI will get obsessed with hot horny stepmoms?
 
The larger point is that it’s sucking up AI-generated internet drivel.
 
I generally think these are more thought-experiment problems looking for solutions in a vacuum. They are almost all just problems that ignore the possibility of any sort of new innovation, which is how we got here in the first place. I suppose that in 2017 I would have said that the concept of LLMs would never exist. I was against most of the NLP world and found it generally useless... but then, suddenly, in 2018: Transformers. Revolutionary and disruptive change. Most of the people I see talking about the potential issues of AI slop ruining future training are generally in the same camp as the over-zealous Luddites who look for any random thing to hate about a truly disruptive technology. I suppose we all need our villains.
 