. . .
So, what is the problem with AI-generated code?
Well, one of the internet’s favourite developers, Jason Thor Hall of Pirates Software fame, described it best in a recent short. He said, “We have talked to people who’re using AI-generated code, and they are like, hey, it would take me about an hour to produce this code and like 15 minutes to debug. And then they are like, oh, the AI could produce it in like 1 minute, and then it would take me like 3 hours to debug it. And they are like, yeah, but it produced it really fast.”
In other words, even though AI can write code far faster than a human programmer, it does such a poor job that fixing the output takes longer than having a qualified human write it correctly in the first place.
. . .
Well, AI doesn’t actually understand what it is doing. Generative AI models are essentially over-developed predictive-text programs: they use statistics derived from a stupidly large pool of data to figure out what the next character or word should be. No AI actually ‘knows’ how to code. It isn’t cognitively trying to solve the problem; instead, it finds an output that matches the statistics of its training data. This is why it gets things massively wrong so often: the AI isn’t actually trying to solve the problem you think it is. Even when the coding problem you ask the AI to solve is well represented in its training data, it can still fail to generate a usable solution simply because it doesn’t actually understand the rules of the programming language. The issue gets even worse when you ask it to solve a coding problem it has never seen before, as its statistical models simply can’t extrapolate, causing the AI to produce absolute nonsense.
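To see what “statistics, not understanding” means in practice, here is a minimal toy sketch of the predictive-text idea: a bigram counter that always emits the statistically most common next word. This is a deliberately crude illustration, not a real language model, and all names and the training sentence are invented for the example:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in the training
# text, then always emit the statistically most likely next word.
training_text = "the cat sat on the mat the cat ate the fish"

follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None."""
    if word not in follow_counts:
        # Outside its training data, the model has nothing to say --
        # it cannot extrapolate, only replay statistics it has seen.
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- the most common follower of 'the'
print(predict_next("dog"))  # None -- never seen in training
```

The model never “knows” grammar or meaning; it only replays frequencies. Real LLMs are vastly more sophisticated, but the underlying principle, predicting likely continuations from training statistics, is the same.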
This isn’t just a problem with AI-generated code but with every AI product, such as self-driving cars. Nor is it a problem that can be easily solved. You can’t just shove more training data into these AIs; we are already hitting a point of diminishing returns in AI training (read more here). So, what is the solution?
Well, when we treat AI as what it actually is, a statistical model, we can have tremendous success. For example, AI-generated structural designs, such as those in the Czinger hypercar, are incredibly efficient and effective. But the approach falls apart when we treat AI as a replacement for human workers. Despite its name, AI isn’t intelligent, and we shouldn’t treat it as such. [End]