I've been saying since the very first public models came out that they were going to become useless pretty quickly. As more AI-generated content is created and published to the internet (especially under the guise of a person), AI is going to be subject to the inbreeding effect; researchers call it model collapse. The Habsburgs of the modern world are the AI models. They'll have to find a way to distinguish definitively between content created by other AI models and content created by people, and then further distinguish the credibility of the information provided by people, which will itself bias the responses. Until they solve this problem, small mistakes will be repeated often, magnified, and eventually taken as the truth by the models.
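The feedback loop is easy to sketch. Here's a toy Python simulation (the corpus size, sample size, and 1% error rate are all arbitrary assumptions, not measurements) of a training pool that keeps absorbing its own model's output with nothing to filter the AI text back out:

```python
import random

def train_generation(corpus, sample_size=1_000, error_rate=0.01):
    """One 'model generation': sample from the corpus, occasionally
    introduce a small mistake, then publish the output back into it."""
    output = []
    for _ in range(sample_size):
        fact = random.choice(corpus)
        if random.random() < error_rate:
            fact = "wrong"              # a small mistake slips in
        output.append(fact)
    return corpus + output              # nothing filters AI text back out

corpus = ["right"] * 10_000             # generation 0: human-written content
for gen in range(1, 11):
    corpus = train_generation(corpus)
    share = corpus.count("wrong") / len(corpus)
    print(f"generation {gen}: {share:.2%} of the corpus is wrong")
```

The share of wrong "facts" only ever goes up, because once a mistake is published it gets sampled, repeated, and republished just like the real content around it.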
I'll give you a quick example of that last point. Probably one of the most used sites in IT development is Stack Overflow. Nearly every company I've worked with, whether as a consultant, an employee, or a contractor, has had code cut and pasted from that site. The sad part is that 99.9% of the answers, including the accepted answers, may work, sometimes, but they are far from the technically correct solution. (Reminder: 78% of all statistics are just made up.) Due to the sheer volume of information on the site, AI models treat it as credible and regurgitate the garbage (GIGO: garbage in, garbage out). And we wonder why AI-written code rarely actually works...
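The author doesn't name a specific answer, but here's a minimal sketch of the "works, sometimes" pattern (the function names and values are mine, not from any particular Stack Overflow post): a snippet that passes the original poster's one test case and fails quietly on others.

```python
import math

# The kind of snippet that gets copied around: it "works" for the
# values the poster happened to test, and fails quietly on others.
def prices_match_naive(a, b):
    return a == b                     # exact float comparison

# The technically correct version tolerates floating-point
# representation error instead of demanding exact equality.
def prices_match(a, b):
    return math.isclose(a, b, rel_tol=1e-9)

print(prices_match_naive(0.1 + 0.2, 0.3))  # False: the copied answer fails
print(prices_match(0.1 + 0.2, 0.3))        # True
```

Multiply that by millions of upvoted-but-subtly-wrong snippets and you have the training diet the models are eating.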
Ever wonder why AI models can't do math? Because most people suck at math, and AI has to rely on the most statistically likely answer, which is usually a common mistake. I've given a simple P&L to all of the AIs multiple times each, with a cut-and-pasted question to avoid bias in the wording. I did not get the same response twice. One of the questions was to calculate a well-known ratio. The answers ranged from 0.4 to 117, with a small cluster around 20.4. The correct answer was 5.85. NONE of the AI models got it right, and none of them guessed the same number twice.
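For contrast, here's what a deterministic calculation looks like. The specific ratio and P&L aren't quoted above, so everything below is an assumption: interest coverage (EBIT over interest expense) is one well-known P&L ratio, and the figures are invented purely to land on the 5.85 from the test.

```python
# Hypothetical P&L lines, invented to reproduce the 5.85 above;
# the actual statement and ratio from the test are not given.
ebit = 585_000              # earnings before interest and taxes
interest_expense = 100_000

# Arithmetic is deterministic: the same inputs give the same answer
# every single run, which token-by-token sampling does not guarantee.
interest_coverage = ebit / interest_expense
print(f"{interest_coverage:.2f}")   # 5.85, every time, for every reader
```

Run that a thousand times and you get 5.85 a thousand times. A model that predicts the most statistically likely next token can't make that promise, which is exactly what the scattered answers show.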