For months, I have been criticizing LLM technology for ignoring the 60+ years of
developments in AI and computer science.
But finally, LLMs can call a subroutine to do elementary arithmetic. That might not
sound like much, but it opens the door to EVERYTHING. It means that LLMs can now invoke a
subroutine that can do anything and everything that any computer program has been able to
do for over 70 years.
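To make that concrete, here is a minimal Python sketch of the general pattern: the model
emits a structured request for a tool, and the host program executes the corresponding
subroutine and hands back an exact answer. The JSON format, the 'calculator' tool, and the
stubbed model call are my own illustrations; nothing here is based on OpenAI's unpublished
Q* internals.

    # A minimal sketch of the "LLM calls a subroutine" pattern.  Not OpenAI's
    # (unpublished) Q* mechanism; the tool name, JSON format, and stubbed model
    # call are illustrative assumptions.
    import json
    import operator

    # The conventional subroutine: exact arithmetic, which an LLM cannot
    # guarantee by token prediction alone.
    OPS = {"add": operator.add, "sub": operator.sub,
           "mul": operator.mul, "div": operator.truediv}

    def calculator(op, a, b):
        """Ordinary 70-year-old computing: apply an exact arithmetic operation."""
        return OPS[op](a, b)

    def fake_llm(prompt):
        """Stand-in for a real model call.  A real LLM would decide on its own
        to emit a structured tool request like this instead of guessing digits."""
        return json.dumps({"tool": "calculator",
                           "args": {"op": "mul", "a": 1234, "b": 5678}})

    def answer(prompt):
        """Host loop: pass the prompt to the model, run any subroutine it requests."""
        request = json.loads(fake_llm(prompt))
        if request.get("tool") == "calculator":
            return calculator(**request["args"])
        raise ValueError("unknown tool requested")

    print(answer("What is 1234 times 5678?"))   # 7006652, computed by the subroutine

The point is architectural: the model decides when to hand the work to a subroutine,
rather than a conventional program deciding when to call the model.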
Previous applications could combine LLMs with other software by putting a conventional
program in charge and calling LLM-based systems as subroutines. That is still possible with
Q* systems. But the option of allowing LLMs themselves to call external subroutines
provides greater flexibility. See below for excerpts from
https://www.digitaltrends.com/computing/what-is-project-q/
However, there are still some things left to criticize and more work to be done before
humans become obsolete.
John
___________________________
What is Project Q*?
Before moving forward, it should be noted that all the details about Project Q*,
including its existence, come from fresh reports following the drama around
Altman’s firing. Reporters at Reuters said on November 22 that they had been given the
information by “two people familiar with the matter,” providing a peek behind the curtain
of what was happening internally in the weeks leading up to the firing.
According to the article, Project Q* was a new model that excelled in learning and
performing mathematics. It was reportedly still only at the level of solving grade-school
mathematics, but as a starting point, it looked promising to the researchers involved as a
demonstration of previously unseen intelligence.
Seems harmless enough, right? Well, not so fast. The existence of Q* was reportedly scary
enough to prompt several staff researchers to write a letter to the board to raise the
alarm about the project, claiming it could “threaten humanity.”
On the other hand, other attempts at explaining Q* aren’t quite as novel, and they
certainly aren’t so earth-shattering. The chief AI scientist at Meta, Yann LeCun, tweeted that Q*
has to do with replacing “auto-regressive token prediction with planning” as a way of
improving LLM (large language model) reliability. LeCun says all of OpenAI’s competitors
have been working on it, and that OpenAI made a specific hire to address this problem.
[Note by JFS: "auto-regressive token prediction" is jargon for what LLMs do by
themselves. Planning is an example of GOFAI (Good Old-Fashioned AI). The Q* breakthrough
allows LLMs to call GOFAI subroutines. That might not sound like much, but it's the
critical innovation that enables integration of old and new AI methods.]
One of the main challenges to improve LLM reliability is to replace Auto-Regressive token
prediction with planning. Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is
working on that and some have already published… — Yann LeCun (@ylecun) November 24, 2023
[JFS: The verb 'replace' is inaccurate. The original methods for using LLMs are
still available. A better term is 'integrate'.]
LeCun’s point doesn’t seem to be that such a development isn’t important, but that it’s
not some unknown development that other AI researchers aren’t already discussing.
Then again, in the replies to this tweet, LeCun is dismissive of Altman, saying he has a
“long history of self-delusion” and suggesting that the reports around Q* don’t convince him
that a significant advancement in the problem of planning in learned models has been made.
[JFS: In one sense, that's true, since integration was possible with the older
methods. But the Q* options enable a smoother and more flexible integration of LLMs with
the methods of GOFAI and other branches of computer science.]
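[JFS: To make that concrete, here is a rough Python sketch of one way such an integration
can look. The planner is ordinary breadth-first search, a textbook GOFAI method; the
symbolic problem stands in for what an LLM might produce when asked to formalize a request
stated in natural language. Everything in the sketch is my own illustration, not anything
OpenAI has published about Q*.]

    # Illustrative only: one way "an LLM calling a GOFAI subroutine" can look.
    # The LLM's role (assumed, not shown) is to translate a natural-language
    # request into the symbolic problem below; the planner is plain breadth-first
    # search over a state graph.
    from collections import deque

    def plan(start, goal, edges):
        """Classical state-space search: return a shortest sequence of states."""
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in edges.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return []   # no plan found

    # Hypothetical LLM output for "get from the office to the airport":
    problem = {
        "start": "office",
        "goal": "airport",
        "edges": {"office": ["parking", "lobby"], "lobby": ["street"],
                  "parking": ["highway"], "street": ["highway"],
                  "highway": ["airport"]},
    }
    print(plan(problem["start"], problem["goal"], problem["edges"]))
    # ['office', 'parking', 'highway', 'airport'] -- a guaranteed-correct plan

The division of labor is the point: the LLM handles the informal language, and the
guaranteed-correct reasoning is delegated to a subroutine that computer science has
understood for decades.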