On Thu, Aug 31, 2023 at 7:56 AM John F Sowa <sowa(a)bestweb.net> wrote:
Alex,
Thanks for that example. It shows the importance of the computation
performed in the human cerebellum, whose perceptions and actions are
entirely unconscious. I urge everyone to click on the link in your note.
There is an important reason why the human drone experts lost the
competition to the fully automated drone: the humans combined high-speed
cerebellar computation (analogous to what the autonomous drone does) with
the much slower, conscious decision-making of the cerebral cortex. Those
conscious decisions slowed their performance.
Compare that with the high-speed performance by the gymnastic champion
Simone Biles. She devoted years of conscious effort to train her
cerebellum to perform the various motions automatically. Before each
competition, she perfects the training for each routine she performs. In a
performance that has multiple routines, she uses her cerebral cortex to
check the positions and timing for each routine. Then she launches a
pretrained routine that is totally under the control of the unconscious
cerebellum.
All of us use the cerebellum for routine processing in walking, eating,
driving a car, or typing on a keyboard. Mathematicians take advantage of
that high-speed processing in the most complex kinds of math. But writing
a proof uses the slower conscious processing in the cerebral cortex to
check whether the high-speed calculations are correct.
Note that the processes in the cerebellum are precise for what they do.
Errors occur when the decisions to run them (made by the cerebral cortex)
are incorrect.
Note that none of these processes, whether in the cerebrum or in the
cerebellum, could be performed by LLMs. The Large Language Models might
respond to a verbal command by executing a routine of the kind the
cerebellum handles. But all their operations are probabilistic, and they're
based on vague and often ambiguous natural language. They can't do the
precise checking and testing that guarantee accuracy.
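As a toy illustration of that point (a minimal sketch in Python, with a
made-up vocabulary and made-up scores, not taken from any actual model),
an LLM chooses each next token by sampling from a probability
distribution, so the same prompt can yield different outputs:

    import math
    import random

    # Hypothetical vocabulary and raw scores (logits) for the next token.
    vocabulary = ["routine", "flip", "vault", "landing"]
    logits = [2.0, 1.2, 0.4, -0.5]

    def sample_token(logits, temperature=1.0):
        # Softmax with temperature turns the scores into probabilities.
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one token index according to those probabilities.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    for _ in range(3):
        print(vocabulary[sample_token(logits)])  # output varies run to run

The sampling step is why repeated runs need not agree, in contrast to a
deterministic procedure that can be checked step by step.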
LLMs are useful. But they're just one more tool in the huge toolkit of
AI technology. They do a limited range of operations very well, but they
can't do the whole job.
John
------------------------------
*From*: "alex.shkotin" <alex.shkotin(a)gmail.com>
*Subject*: [ontolog-forum] FYI:Champion-level Drone Racing using Deep
Reinforcement Learning (Nature, 2023)
https://youtu.be/fBiataDpGIo?si=bDaE1XR4dQGJXqo6
Colleagues, while we are formalizing theoretical knowledge and building
structures that model reality, it is interesting to look at achievements in
a field where algorithms decide everything, and where they are now also
helped by AI.
Alex
_______________________________________________
CG mailing list -- cg(a)lists.iccs-conference.org
To unsubscribe send an email to cg-leave(a)lists.iccs-conference.org