Andras,
Did you look at the slides I cited for our system from 2010? That system could run on a
laptop with an attached drive that would fit in your pocket. When run on a larger server,
its speed would scale linearly with the number of CPUs in the server.
AK: the devil is in the acquisition of rules and representations. MuZero can learn
these, but not without very significant hardware investment (especially in environments
where self-play makes no sense) so selling NVIDIA stock appears premature.
But they are using LLMs to acquire rules and representations. That is NOT what we do.
Please reread the cogmem.pdf slides cited below. That system does NOT use LLMs to acquire
rules and representations. It is much, much more efficient to acquire rules and
representations by the methods discussed in those slides (and further citations for more
detail). Then look at the three examples starting at slide 44.
There is no LLM-based system available today that could do those three applications. They
require precise symbolic methods. LLM-based methods are of ZERO value for those
applications.
A hybrid system that combines LLMs with symbolic reasoning provides the best of both
worlds. And it does so with just a tiny fraction of the Nvidia chips -- or even with
zero Nvidia chips. It can take advantage of a reasonable amount of LLM technology, but
the most advanced and complicated reasoning methods are done much better, faster, and more
precisely WITHOUT using LLMs.
I am not saying that a reasonable amount of Nvidia chips would be useless. But I am
saying that 200,000 chips is a terrible waste of hardware, electricity, and cooling water.
When you have symbolic AI to do the precise reasoning, just a modest amount of Nvidia
chips can provide enough power for translating languages (natural, symbolic,
diagrammatic, and multidimensional perceptual).
In short, use the Nvidia chips for what they do best: translating languages of any kind.
Then use the symbolic reasoning for what it does best: precise symbolic reasoning. For
that, a laptop can outperform Elon Musk's behemoth.
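To make the division of labor concrete, here is a minimal, purely illustrative sketch of that hybrid idea. It is NOT the Permion or VivoMind implementation: the LLM stage is stubbed out with a lookup table, and the symbolic stage is a tiny forward-chaining rule engine that tags every derived fact with the source it came from, so each answer carries a precise citation rather than a possible hallucination. All names (llm_translate, prove, the example facts and rules) are hypothetical.

```python
# Hypothetical sketch: LLM translates language, symbolic engine reasons.

def llm_translate(question):
    """Stand-in for an LLM that maps natural language to a symbolic goal.
    A real system would call a model here; this stub uses a lookup table."""
    table = {"Is Socrates mortal?": ("mortal", "socrates")}
    return table[question]

# Ground facts: (predicate, argument) pairs tagged with a source citation.
facts = {("human", "socrates"): "source: Plato, Apology"}

# Rules: if any premise predicate holds of an argument, conclude the
# consequence predicate of that argument, citing premise and rule.
rules = [(("human",), "mortal", "rule: all humans are mortal")]

def prove(goal):
    """Forward-chain to a fixed point; return (truth, citation trail)."""
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion, rule_src in rules:
            for (pred, arg), src in list(derived.items()):
                if pred in premises and (conclusion, arg) not in derived:
                    # Every derived fact carries its full provenance.
                    derived[(conclusion, arg)] = f"{src}; {rule_src}"
                    changed = True
    if goal in derived:
        return True, derived[goal]
    return False, None

goal = llm_translate("Is Socrates mortal?")
ok, citation = prove(goal)
print(ok, citation)
```

The point of the sketch is only that the expensive statistical component does translation at the boundary, while the cheap symbolic component does the reasoning and can always say where its answer came from.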
John
----------------------------------------
From: "Andras Kornai" <kornai(a)ilab.sztaki.hu>
John,
I am completely on board with the idea that a symbol-manipulation system can be both more
reliable and less hardware-intensive, by orders of magnitude. But as we have all learned in
GOFAI, the devil is in the acquisition of rules and representations. MuZero can learn
these, but not without very significant hardware investment (especially in environments
where self-play makes no sense) so selling NVIDIA stock appears premature.
Andras
On Feb 22, 2025, at 8:54 PM, John F Sowa
<sowa(a)bestweb.net> wrote:
Andras,
I agree that Elon's new system is a big
improvement over earlier systems of its kind. But note what you said below:
AK: Yes, they all need big iron, AI is still in the
"make it work" stage. Yes, they still hallucinate (and this will not be easy to
get rid of, as humans do too).
That is the point of the talk that Arun and I will
present on Wednesday: Our Permion system is a hybrid of LLM technology with symbolic AI.
And it is a MAJOR improvement over "big iron". It detects and ELIMINATES
hallucinations, and it produces reliable results that have precise citations of sources.
With that huge amount of big iron, Elon's system
still generates false citations of its sources. That means it's impossible to use it
to detect the source of accidents, disasters, crimes, hackers, or brilliant achievements.
If and when it produces a brilliant answer to a question, it cannot tell you what sources
it used or how and why it combined information from those sources to produce its answers.
Permion can do that with a tiny fraction of the amount of iron. (But it can use more, if
available.)
Humans can tell you where they got their info, and
they can answer your questions about their method of reasoning to derive those answers. In
that regard, our old VivoMind system from 2000 to 2010 could do reasoning with the
precision that Elon's system CANNOT produce today. And even if he could double his
200,000 Nvidia chips, Elon still could not
guarantee the precision that VivoMind produced in
2010.
For a summary of the old VivoMind system with examples
of what it could do, see
https://jfsowa.com/talks/cogmem.pdf .
Our new Permion Inc. system is a major upgrade of the
VivoMind system from 2000 to 2010. You can skip the first 44 slides, which show how the
VivoMind Cognitive Memory system works. The slides from 45 to 64 show three applications
that no LLM-based system can do today. That system could run on a laptop, but it scales
linearly in performance with the speed and number of CPUs available.
With the addition of LLMs, the symbolic power of
Permion can do everything that VivoMind could do and do it better and faster. But it can
also do the kinds of things that big iron systems do with a tiny fraction of the amount of
iron. If more iron is available, it can use it.
My recommendation: Sell any Nvidia stock you (or
anybody else) may own.
John
> From: "Andras Kornai" <kornai(a)ilab.sztaki.hu>
John,
[without condoning Musk's practices in the larger
world] I think this is missing the point, which is catching up to the state of the art
from zero in less than two years. Compare this to the European Union, which is still
incapable of fielding a SOTA system (Mistral, in spite of its laudable goals, is not quite
there yet, still playing catch-up). Yes, they all need big iron, AI is still in the
"make it work" stage. Yes, they still hallucinate (and this will not be easy to
get rid of, as humans do too). But clearly xAI has organized a large enough group of
bespoke engineers and given them enough hardware to do this, whereas the EU is structurally
incapable of doing so, spending all its energy on wordsmithing resolution after
resolution.
The EU is vastly better resourced than Musk. But it is
a captive of a smooth-talking bureaucracy (I specifically blame CAIRNE, formerly known as
CLAIRE).
Andras
> On Feb 21, 2025, at 11:20 PM, John F Sowa
<sowa(a)bestweb.net> wrote:
>
> Elon has a new version:
>
> But it is based on the old idea of ever more
computing power: 200,000 Nvidia chips and a new data center in Memphis, TN. And it still
suffers from the same old problems of other GPT systems:
>
> "However, some limitations emerged during
testing. Karpathy noted that the model sometimes fabricates citations and struggles with
certain types of humor and ethical reasoning tasks. These challenges are common across
current AI systems and highlight the ongoing difficulties in developing truly human-like
artificial intelligence."
>