The Artificial Intelligence Problem (or not)

Lordy. I caught myself arguing with a bot on Amazon.
Q. (Me) Why does the description on this item say leather but the material is listed as P.U.?
A. (Bot) The item is made from vegan leather which is a type of leather.
Q. (Me) There is no such thing as vegan leather! Are you saying it's made from vegans? Human vegans? Cow vegans?
A. (Bot) ...

Since then, Amazon has fixed its descriptions of items made of vegan or faux leather. Now if you want a real leather item you have to search for "genuine leather."

Argue if you must - but I'm going to say the quiet part out loud - a pair of American-made "leather" cowboy boots made out of 100% ethically sourced, fair trade, organically raised, USDA-certified Vegans would be the best pair of boots ever.

BEST.BOOTS.EVER.
 

I've been saying since the very first public models came out that they were going to become useless pretty quickly. As more AI-generated content is created and published to the internet (especially under the guise of a person), AI is going to be subject to the inbreeding effect. The AI models are the Tudors of the modern world. They'll have to find a way to definitively distinguish content created by other AI models from content created by people, and then further judge the credibility of the information provided by people, which will itself bias the responses. Until they can solve this problem, small mistakes will be repeated often, magnified, and eventually taken as the truth by the models.
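To make the "inbreeding" point concrete, here is a toy sketch (my own illustration with made-up numbers, not a description of how any production model is trained): if each generation is fit only to the previous generation's output, and that output over-represents the "typical" middle, the diversity of the original data disappears within a few rounds.

```python
import random
import statistics

# Toy sketch of the "inbreeding effect": each generation is fit only to content
# produced by the previous generation, and generative models tend to over-produce
# "typical" output, so the tails of the real distribution get thinner every round.
random.seed(42)

real_data = [random.gauss(100.0, 15.0) for _ in range(2000)]  # "human" content
mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)

for generation in range(1, 11):
    # The previous generation's model emits samples skewed toward its own mode,
    # crudely mimicked here by dropping the top and bottom 10% before refitting.
    samples = sorted(random.gauss(mu, sigma) for _ in range(500))
    trimmed = samples[50:-50]
    mu, sigma = statistics.mean(trimmed), statistics.stdev(trimmed)
    print(f"gen {generation:2d}: mean={mu:6.2f}  stdev={sigma:5.2f}")
```

Run it and the standard deviation collapses toward zero within ten generations; the spread of the original "human" data is never recovered.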

I'll give you a quick example of the last one. Probably one of the most used sites in IT development is Stack Overflow. Nearly every company I've worked with, whether as a consultant, an employee, or a contractor, has had code that was cut and pasted from the site. The sad part is that 99.9% of the answers, including the accepted answers, may work, sometimes, but they are far from the technically correct solution. (Reminder: 78% of all statistics are just made up.) Due to the volume of information on the site, AI models assume it is credible and regurgitate the garbage (GIGO). And we wonder why AI-written code rarely actually works...
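As a hypothetical example of the kind of answer that gets copied everywhere (not a quote from any specific post), consider a snippet that "works, sometimes" but is far from correct:

```python
# Hypothetical illustration of a commonly copy-pasted pattern. Both bugs below show
# up constantly in pasted snippets: the bare except silently swallows every error
# (including typos and KeyboardInterrupt), and the mutable default argument means
# the cache list is secretly shared across unrelated calls.

def get_prices(symbols, fetch, cache=[]):   # BUG: default list persists between calls
    for symbol in symbols:
        try:
            cache.append(fetch(symbol))
        except:                              # BUG: hides the real failure, returns junk
            cache.append(0.0)
    return cache

# A more defensible version: no hidden shared state, only expected errors are caught,
# and missing data stays visibly missing instead of pretending to be a price of 0.
def get_prices_fixed(symbols, fetch):
    prices = []
    for symbol in symbols:
        try:
            prices.append(fetch(symbol))
        except (ConnectionError, TimeoutError):
            prices.append(None)
    return prices
```

The first version passes a casual test, gets accepted and upvoted, and then gets scraped into training data as if it were correct.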

Ever wonder why AI models can't do math? Because most people suck at math, and AI has to rely on the most statistically relevant answer... which is usually a common mistake. I've given a simple P&L to all of the AIs multiple times each, with a cut-and-paste question to avoid bias in the wording. I did not get the same response twice. One of the questions was to calculate a well-known ratio. The answers ranged from 0.4 to 117, with a small cluster around 20.4. The correct answer was 5.85. NONE of the AI models got it right, and none of them guessed the same number twice.
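For contrast, the arithmetic itself is trivial once it's treated as a calculation instead of a text-prediction problem. A sketch with made-up figures (the original P&L and ratio aren't reproduced here, so the choice of interest coverage ratio and all of the numbers below are illustrative assumptions):

```python
# Illustrative only: placeholder P&L figures and an assumed ratio.
# The point is that the calculation is deterministic: run it a thousand
# times and you get the same answer a thousand times.

revenue = 500_000.0
cost_of_goods_sold = 290_000.0
operating_expenses = 93_000.0
interest_expense = 20_000.0

gross_profit = revenue - cost_of_goods_sold   # 210,000
ebit = gross_profit - operating_expenses      # 117,000 (earnings before interest and taxes)
interest_coverage = ebit / interest_expense   # 117,000 / 20,000 = 5.85

print(f"Interest coverage ratio: {interest_coverage:.2f}")  # always 5.85
```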
 

I feel like it is pretty easy to tell what is written by AI these days. The videos also make it apparent. Where it does get interesting is with filters on real videos.
 
So…let me get this straight…instead of AI teaching itself to get smarter so it can enslave humanity, it’s sucking up internet drivel and getting stupider. And since porn is the most prevalent content on the internet it means AI will get obsessed with hot horny stepmoms?
The larger point is that it’s sucking up AI-generated internet drivel.
 
I generally think these are more thought-experiment problems looking for solutions in a vacuum. They are almost all just problems lacking the idea of any sort of new innovation, which is how we got here in the first place. I suppose that in 2017 I would have said that the concept of LLMs would never exist. I was against most of the NLP world and found it generally useless... but then, suddenly, in 2018... Transformers... revolutionary and disruptive change. Most of the people I see talking about the potential issues of AI slop ruining future training are generally in the same camp as the over-zealous Luddites who look for any random thing to hate about a truly disruptive technology. I suppose we all need our villains.
 
Maybe it's time to reclass again... although I have some concerns about it just being another ORSA-related field, unfortunately tied too closely to something like the G5/3 the way ORSAs are tied to the G8, when it should be tied to the G2.

Army Creating New Artificial Intelligence-Focused Occupational Specialty and Officer Field

How do you see this playing out with the Army's talent pool? If an org is trying to put people into brain-heavy fields like Intel, Cyber, and Signal*, then how does it staff a new branch without weakening the others? The pool of people for these fields is pretty finite. I'm not saying this isn't needed, but how do you staff it with people who could also work in the other branches?

* - And while it pains me to say it, Signal is probably going to lose people as the more tech-savvy folks migrate to other branches, which it probably did when Cyber and the Space Force stood up.
 
It's happening a lot up here. Well, anyone that stayed with the woke push.
 

I think they should kill the entire ORSA program and restructure the Center for Army Analysis while creating a WO program specifically for AI/ML integration into J23/25. This has to be more than just "attach an LLM" to every tool, as we're already seeing in the IC, but actually leveraging statistics against the vast volume and variety of data we are currently handling. It pains me to see a continuing slew of analysts coming in that still haven't been incentivized to learn Python. With practically every data source conveniently available through RESTful APIs, we are our own limiting factor to doing some real work, and another random bespoke web application can't handle volume like I can. We also really need to completely modernize the ISSM community so that the dinosaurs are replaced with people that actually know how data science is done. Some places are already doing it really well, but it's a mixed bag.
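To the Python point: even a short script gets an analyst from a REST endpoint to basic statistics without waiting on a bespoke web app. A minimal sketch, assuming a hypothetical JSON endpoint (the URL, token, and field names below are placeholders, not any real service):

```python
# Minimal sketch: pull records from a hypothetical RESTful JSON endpoint and
# summarize one numeric field using only the standard library.
import json
import statistics
import urllib.request

URL = "https://example.internal/api/v1/reports?limit=1000"   # placeholder endpoint
TOKEN = "REPLACE_ME"                                          # placeholder credential

request = urllib.request.Request(URL, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(request, timeout=30) as response:
    records = json.load(response)                             # expects a JSON array of objects

# Keep only rows where the (assumed) "score" field is numeric.
values = [r["score"] for r in records if isinstance(r.get("score"), (int, float))]

print(f"records: {len(records)}, usable values: {len(values)}")
if len(values) >= 2:
    print(f"mean={statistics.mean(values):.2f}  "
          f"median={statistics.median(values):.2f}  "
          f"stdev={statistics.stdev(values):.2f}")
```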
 

You do realize that AI/ML is directly in my primary field of expertise, correct? I'm talking about the real-world impact of model corruption within the commercial world. Testable, observable, repeatable corruption. With AI/ML/LLMs, etc., as well as Python, many companies are losing their taste for the technologies because they are failing to deliver on their promise. (Python has a significantly higher cost of ownership and maintenance than most other common programming languages. It's only disruptive in that it disrupts business operations far more than the other ways of accomplishing the same thing.) The failure of the major AI engines and bolt-on technologies to provide consistent, accurate results is eroding the confidence of real-world users. Businesses are losing the will to bet their success on AI. It's become a marketing term and not much else for the vast majority of the business community. The big players, like Nvidia, Microsoft, Google, and Tesla, are pushing the narrative, but they have yet to deliver real results. Maybe that will change with quantum computing... or maybe they will just fail faster. The outcome remains to be seen.

Given the current state of today's technologies... Would you bet your life savings on an AI-driven trading program? Would you bet your life on an AI doctor diagnosing a complex set of symptoms and building a treatment plan without oversight by a physician you trusted? Would you close your eyes, sleep, and trust an AI car to take you across the country and arrive safely at your destination?

I wouldn't. And I work directly in the heart of the tech industry at the Consulting Senior/Enterprise Architect level. IMO, AI is currently at an exciting stage of research, but it has been rushed into production prematurely. I'm sure there will come a day when it is capable of doing miraculous things. We aren't there yet. We're not even close.

I could take the opposite position from you and say that every new technology has its worshipers who believe it is the answer to everything. I could also make the argument that every generation thinks they know more than the generation that came before. I can't tell you how many young programmers, DBAs, analysts, and other technical people I've seen who think that the older generation of technical people are "luddites" (to use your word). What they all seem to have in common is that they fail to realize that nothing is really new. It's the same stuff, repackaged, rebranded, and marketed as the next best thing. What do you think was the last truly original concept in technology?

I received my original copy of this book for my birthday when I was in 4th grade in the late '70s. It was written in the 1960s. This one is the reprint from the 1980s (1984? I'm not walking downstairs to look at the flyleaf). Almost everything they are doing today in "AI" is described in detail in this book, with accompanying diagrams and supporting math. It's not new, and we aren't disregarding it because we don't understand it. We've already tried it, assessed it, and determined where it is useful and where it is not. We know the pros and cons and aren't caught up in the hype. We keep up with the technology, but look at it through the lens of experience. We've incorporated the concepts into our work in a way that is mature and useful. We're not opposed to the idea; we're opposed to the implementation.


[Attached image: 20250702_175123_resized.jpg]

The truth is somewhere in the middle and needs to be analyzed objectively with an eye on the requirements of the specific task. There are components of AI/ML that are very useful and that can and should be incorporated into expert systems, but there are also parts that are so immature as to invalidate the system as a whole when they are included.

It's the old adage: when you're a hammer, every problem is a nail.
 