The article below gives another of the many reasons why Generative AI requires other methods -- such as the 70 years of AI and computer science -- to test, evaluate, and correct anything and everything that it "generates".
As the explanation below says, it does not "UNDERSTAND" what it is doing. It just finds and reproduces patterns that it finds in its huge volume of data. Giving it more data gives it more patterns to choose from, but it does nothing to help it understand any of them.
This method enables it to surpass human abilities on IQ tests, law exams, medical exams, etc. -- for the simple reason that the answers to those exams can be found somewhere on the WWW. In other words, Generative AI does a superb job of CHEATING on exams. But it is hopelessly clueless at solving problems whose solution depends on understanding the structure and the goal of the problem.
For similar reasons, the article mentions that self-driving cars fail in complex environments, such as busy streets in city traffic. The number and kinds of situations are far more varied and complex than anything they have been trained on. Carnegie Mellon University is involved in much of the testing of self-driving cars because Pittsburgh has the most complex and varied driving patterns. It has more bridges than any other city in the world. It also has three major rivers, many hills and valleys, steep winding roads, complex intersections, tunnels, foot traffic, and combinations of any or all of the above.
Drivers who test self-driving cars in Pittsburgh say that they can't go for twenty minutes without having to grab the steering wheel to prevent an accident. (By the way, I learned to drive in Pittsburgh. Then I went to MIT and Harvard, where the Boston patterns are based on 300-year-old cow paths.)
John
________________________________________________
AI-Generated Code Has A Staggeringly Stupid Flaw
It simply doesn’t work.
https://medium.com/predict/ai-generated-code-has-a-staggeringly-stupid-flaw….
. .
So, what is the problem with AI-generated code?
Well, one of the internet’s favourite developers, Jason Thor Hall of Pirates Software
fame, described it best in a recent short. He said, “We have talked to people who’re using
AI-generated code, and they are like, hey, it would take me about an hour to produce this
code and like 15 minutes to debug. And then they are like, oh, the AI could produce it in
like 1 minute, and then it would take me like 3 hours to debug it. And they are like,
yeah, but it produced it really fast.”
In other words, even though AI can write code way faster than a human programmer, it does
such a poor job that making the code useful actually makes it far less efficient than
getting a qualified human to just do the job in the first place.
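Taken at face value, the arithmetic in that anecdote is easy to check. The sketch below (plain Python, with the minute figures lifted straight from the quote rather than measured) just totals the two workflows:

    # Back-of-the-envelope totals for the anecdote above, in minutes.
    human_write, human_debug = 60, 15   # "about an hour" + "like 15 minutes"
    ai_write, ai_debug = 1, 180         # "like 1 minute" + "like 3 hours"

    print("Human-written total:", human_write + human_debug)  # 75 minutes
    print("AI-assisted total:", ai_write + ai_debug)          # 181 minutes

So the "fast" path costs roughly two and a half times as much programmer time, end to end.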
. . .
Well, AI doesn’t actually understand what it is doing. These generative AI models are basically overdeveloped predictive-text programs. They use statistics based on a stupidly large pool of data to figure out what the next character or word is. No AI actually ‘knows’ how to code. It isn’t cognitively trying to solve the problem; instead, it finds an output that matches the statistics of the data it has been trained on. As a result, it constantly gets things massively wrong, because the AI isn’t actually trying to solve the problem you think it is. Even when the coding problem you are asking the AI to solve is well represented in its training data, it can still fail to generate a usable solution, simply because it doesn’t actually understand the laws and rules of the coding language. The issue gets even worse when you ask it to solve a problem it has never seen before: the statistical models it uses simply can’t be extrapolated that far, so the AI produces absolute nonsense.
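To make the "predictive text" point concrete, here is a minimal sketch of next-word prediction using a toy bigram model (my own illustration, not code from the article; real systems use transformer networks over subword tokens, but the principle of picking a statistically likely continuation is the same):

    from collections import Counter, defaultdict
    import random

    # Tiny "training corpus"; a real model would use terabytes of text.
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        # Pick the next word purely from observed frequencies --
        # no understanding of cats, dogs, or grammar is involved.
        counts = follows[prev]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate a short continuation starting from "the".
    word, output = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

The model can only recombine sequences it has already seen; ask it about anything outside its data and it has nothing sensible to say, which is exactly the extrapolation failure described above.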
This isn’t just a problem with AI-generated code but with every AI product, such as self-driving cars. Moreover, this isn’t a problem that can be easily solved. You can’t
just shove more training data into these AIs, and we are starting to hit a point of
diminishing returns when it comes to AI training (read more here). So, what is the
solution?
Well, when we treat AI as it actually is, a statistical model, we can have tremendous
success. For example, AI structural designs, such as those in the Czinger hypercar, are
incredibly efficient and effective. But it falls apart when we treat AI as a replacement
for human workers. Despite its name, AI isn’t intelligent, and we shouldn’t treat it as
such. [End]