The slides we presented yesterday are on the Ontolog website: https://webmail.bestweb.net/interface/root#/email .
My slides for the first 30 minutes are understandable as written. But Arun's slides consist of diagrams, which require his commentary. In addition to describing our Permion system, Arun gave a detailed discussion of DeepSeek and what it does that goes beyond just LLM processing.
Unlike most people who wrote opinions about DeepSeek, Arun had studied the publications by the DeepSeek developers. His talk is one of the very few presentations that actually explain and show how it works and how it differs from pure LLM systems.
I'll send a link to the YouTube version when it is available.
John
Theory and Therapy of Representations • 1
• https://inquiryintoinquiry.com/2025/02/21/theory-and-therapy-of-representat…
❝Again, in a ship, if a man were at liberty to do what he chose,
but were devoid of mind and excellence in navigation (αρετης
κυβερνητικης), do you perceive what must happen to him and
his fellow sailors?❞
─ Plato • Alcibiades 135 A
Statistics were originally the data a ship of state needed for stationkeeping
and staying on course. The Founders of the United States, like the Cybernauts
of the Enlightenment they were, engineered a ship of state with checks and
balances and error‑controlled feedbacks for the sake of representing both
reality and the will of the people. In that connection Max Weber saw how
a state's accounting systems are intended as representations of realities
its crew and passengers must observe or perish.
The question for our time is —
• What are the forces distorting our representations of
what's observed, what's expected, and what's intended?
Repercussions ─
The Place Where Three Wars Meet
• https://inquiryintoinquiry.com/2012/06/21/the-place-where-three-wars-meet/
Regards,
Jon
cc: https://mathstodon.xyz/@Inquiry/114042549214137021
cc: https://www.researchgate.net/post/Theory_and_Therapy_of_Representations
Dan,
I certainly agree that 99.x% of our knowledge of the world is based on an integration of a lifetime of experience. There is no way that any of us can say exactly where we first acquired a huge number of ideas that are thoroughly built into our view of life and the world. But we can often associate different kinds of knowledge with different periods of our lives and with different kinds of people.
For example, if you name any kind of food, I can tell you whether I first encountered it as a child at home, at the home of some friend or relative, at a restaurant, at school, in my garden, while traveling away from home, in what country or kind of restaurant, etc.
There are many kinds of things I can say I learned from my father, my mother, my grandmother, friends, relatives, schools, etc. I might not be able to pin down many specific items, but I can classify many kinds of things I first discovered in what kinds of place, time of life, with what kinds of people, etc.
And if these are recent things, I can say whether I got them from TV, from reading, from email, or from browsing, and often from exactly which person or publication. When the source is important, I remember. But even if I don't remember the exact individual, I usually remember enough that I can find the source with a bit of computer searching.
I certainly have much more background knowledge than I can get from ChatGPT. And I have downloaded a lot of information that is located on my computer, and I can find or search for its origin quite easily. But ChatGPT is one of the few computer systems that cannot tell me anything about where it got the information it uses.
That is definitely not a good feature. It's a very strong reason for using hybrid systems for reasoning. LLMs are good for translating between different languages and formats, but sources are extremely important, and computer systems should be able to keep or find information about sources. It's helpful to know whether some item came from Vladimir Putin or the FBI. (My belief in those two is the opposite of Trump's.)
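The point about keeping sources can be sketched in a few lines of Python: a fact store that records provenance with every assertion can always answer "where did this come from?" This is a minimal illustration with invented names, not the design of any system mentioned here.

```python
# Minimal sketch: every stored assertion carries its provenance, so
# answers can cite where the information came from. All names here
# (assert_fact, answer_with_citation) are hypothetical.

facts = []

def assert_fact(statement, source):
    """Store a statement together with a record of its source."""
    facts.append({"statement": statement, "source": source})

def answer_with_citation(query):
    """Return matching statements paired with their recorded sources."""
    return [(f["statement"], f["source"])
            for f in facts if query in f["statement"]]

assert_fact("xAI's new data center is in Memphis, TN.",
            "venturebeat.com article")
assert_fact("The HHAI25 workshop takes place in Pisa.",
            "HHAI25 call for abstracts")

for stmt, src in answer_with_citation("Memphis"):
    print(f"{stmt}  [source: {src}]")
```

The design choice is trivial but decisive: provenance is attached at write time, so it can never be lost at query time, which is exactly what a pure LLM cannot guarantee.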
Symbolic AI is far better than humans in this regard, and LLMs are far worse than humans. That is not a good point in favor of LLMs as the primary source of intelligence. They are certainly useful for what they do. But much more is necessary.
John
----------------------------------------
From: "'Dan Brickley' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
On Sat, Feb 22, 2025 at 19:54 John F Sowa <sowa(a)bestweb.net> wrote:
Humans can tell you where they got their info,
Not me!
and they can answer your questions about their method of reasoning to derive those answers.
I can’t! (for 99.x% of my knowledge of the world)
Dan
Gary,
Answers to your question have been developed many times, often in an ad hoc way. Short answer: no single precise detailed definition could be adequate for the kinds of examples you cited below.
GBC: How do we avoid infinite regress in an attempt to "standardize" a vocabulary in a complex field?
My recommendation is the solution that Doug Lenat adopted for Cyc, which is similar to the methods we adopted for our VivoMind and Permion companies:
1. Start with a minimally specified top-level ontology.
2. Extend it with broad definitions at the level of WordNet and other lexical resources.
3. For various applications, define detailed ontology modules for the special cases that require high precision.
4. Provide methods for communicating among independently developed modules. It's possible that some details in some modules cannot be translated with full precision to and from other modules. The existence of items in those modules may be made known to other modules, but some details might not be exportable outside of the module in which they were defined.
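As a rough illustration of step 4, the idea of modules that publicize the existence of their terms while keeping some details module-local can be sketched in a few lines of Python. All names here are hypothetical; this is not the Cyc, VivoMind, or Permion API.

```python
# Sketch of the layered-ontology approach: modules define terms, some
# with private details; other modules can learn that a term exists
# without importing details that don't translate across modules.

class OntologyModule:
    """A self-contained ontology module with optionally private details."""

    def __init__(self, name):
        self.name = name
        self.terms = {}        # term -> definition
        self.private = set()   # terms whose details stay module-local

    def define(self, term, definition, private=False):
        self.terms[term] = definition
        if private:
            self.private.add(term)

    def export_view(self):
        """What other modules may see: private definitions are replaced
        by an opaque marker, so the term's existence is known but its
        high-precision details are not exported (step 4 above)."""
        return {t: (d if t not in self.private
                    else f"<defined in {self.name}>")
                for t, d in self.terms.items()}

# Steps 1-2: a minimally specified top level with broad lexical terms.
top = OntologyModule("top")
top.define("PhysicalObject", "anything with spatiotemporal extent")

# Step 3: an application module with high-precision, partly private detail.
physics = OntologyModule("physics")
physics.define("force", "F = m * a (Newtonian, meso-scale)", private=True)

print(physics.export_view()["force"])  # other modules see only the marker
```

The point of the sketch is the asymmetry in `export_view`: cross-module communication is possible without requiring that every detail be translatable with full precision.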
At the Ontology Summit session on Wednesday, Arun and I will show how these methods are used to solve complex problems and to develop long-term solutions. For an overview of how ontologies were developed and extended for VivoMind applications (from 2000 to 2010), see https://jfsowa.com/talks/cogmem.pdf .
We'll add more info about how Permion Inc. has developed a hybrid system that combines the best of the VivoMind symbolic approach with LLM-based methods for translating languages of any kind -- natural, logical, diagrammatic, multidimensional, and perceptual (as mapped to and from sensory input and physical manifestations).
As an example, one VivoMind customer required the ability to analyze and translate Chinese documents. An unemployed Chinese linguist (who was raising her children) was very happy to get a job in which she could work at home to develop a Chinese grammar for the VivoMind system. It worked so well that VivoMind was better able to detect and relate Chinese proper names than the software developed by the Chinese themselves. (That was in 2010.)
There is no way that a single formal ontology with rigidly defined terms could relate both Chinese and English. But it is possible to develop an ontology specialized for a particular document (or a limited set of documents) that relates the English terms to and from the Chinese terms in those documents and their English translations.
That was done to the satisfaction of the customer that paid for the development. Arun and I will discuss these topics on Wednesday. Ken will announce the talk and the ZOOM address tomorrow.
John
----------------------------------------
From: "Gary Berg-Cross" <gbergcross(a)gmail.com>
How do we avoid infinite regress in an attempt to "standardize" a vocabulary in a complex field?
Are there, say, different criteria for a meso-level physical concept like "force", a quantum level concept like "entanglement", an ecological concept like "habitat" and a social concept like "equity"?
Gary Berg-Cross
Potomac, MD
240-426-0770
Andras,
Did you look at the slides I cited for our system from 2010? That system could run on a laptop with an attached drive that would fit in your pocket. When run on a larger server, its speed would scale linearly with the number of CPUs in the server.
AK: the devil is in the acquisition of rules and representations. MuZero can learn these, but not without very significant hardware investment (especially in environments where self-play makes no sense) so selling NVIDIA stock appears premature.
But they are using LLMs to acquire rules and representations. That is NOT what we do.
Please reread the cogmem.pdf slides cited below. That system does NOT use LLMs to acquire rules and representations. It is much, much more efficient to acquire rules and representations by the methods discussed in those slides (and further citations for more detail). Then look at the three examples starting at slide 44.
There is no LLM-based system available today that could do those three applications. They require precise symbolic methods. LLM-based methods are of ZERO value for those applications.
A hybrid system that combines LLMs with symbolic reasoning provides the best of both worlds. And it does so with just a tiny fraction of the number of Nvidia chips -- or even with zero Nvidia chips. It can take advantage of a reasonable amount of LLM technology, but the most advanced and complicated reasoning methods are done much better, faster, and more precisely WITHOUT using LLMs.
I am not saying that a reasonable number of Nvidia chips would be useless. But I am saying that 200,000 chips is a terrible waste of hardware, electricity, and cold water. When you have symbolic AI to do the precise reasoning, a modest number of Nvidia chips can provide enough power for translating languages (natural, symbolic, diagrammatic, and perceptual in multiple dimensions).
In short, use the Nvidia chips for what they do best: translating languages of any kind. Then use symbolic reasoning for what it does best: precise reasoning. For that, a laptop can outperform Elon Musk's behemoth.
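This division of labor can be caricatured in a short sketch: a stand-in "LLM" does only translation from language to a symbolic form, and a small rule engine performs the precise inference. Every name here is hypothetical; this is purely illustrative, not the Permion or VivoMind design.

```python
# Toy hybrid pipeline: translation is one component's job, precise
# rule-based inference is another's. The "LLM" is a trivial stub.

facts = set()

def llm_translate(sentence):
    """Stand-in for an LLM used purely as a translator (text -> triple)."""
    subject, verb, rest = sentence.rstrip(".").split(" ", 2)  # naive parse
    return (subject, verb, rest)

def add(sentence):
    facts.add(llm_translate(sentence))

def infer():
    """One symbolic rule, applied exactly: X is a human -> X is mortal."""
    derived = {(s, "is", "mortal") for (s, v, o) in facts
               if v == "is" and o == "a human"}
    facts.update(derived)

add("Socrates is a human")
infer()
print(("Socrates", "is", "mortal") in facts)  # True: an auditable inference
```

The inference step is deterministic and inspectable: you can point at the rule that produced each derived fact, which is exactly the property that pure LLM generation lacks.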
John
----------------------------------------
From: "Andras Kornai" <kornai(a)ilab.sztaki.hu>
John,
I am completely on board with the idea that a symbol-manipulation system can be both more reliable and less hardware-intense, by orders of magnitude. But as we have all learned in GOFAI, the devil is in the acquisition of rules and representations. MuZero can learn these, but not without very significant hardware investment (especially in environments where self-play makes no sense) so selling NVIDIA stock appears premature.
Andras
Andras,
I agree that Elon's new system is a big improvement over earlier systems of its kind. But note what you said below:
AK: Yes, they all need big iron, AI is still in the "make it work" stage. Yes, they still hallucinate (and this will not be easy to get rid of, as humans do too).
That is the point of the talk that Arun and I will present on Wednesday: Our Permion system is a hybrid of LLM technology with symbolic AI. And it is a MAJOR improvement over "big iron". It detects and ELIMINATES hallucinations, and it produces reliable results that have precise citations of sources.
With that huge amount of big iron, Elon's system still generates false citations of its sources. That means it's impossible to use it to trace the source of accidents, disasters, crimes, hacks, or brilliant achievements. If and when it produces a brilliant answer to a question, it cannot tell you what sources it used or how and why it combined information from those sources to produce its answers. Permion can do that with a tiny fraction of the amount of iron. (But it can use more, if available.)
Humans can tell you where they got their info, and they can answer your questions about their method of reasoning to derive those answers. In that regard, our old VivoMind system from 2000 to 2010 could do reasoning with the precision that Elon's system CANNOT produce today. And even if he could double his 200,000 Nvidia chips, Elon still could not guarantee the precision that VivoMind produced in 2010.
For a summary of the old VivoMind system with examples of what it could do, see https://jfsowa.com/talks/cogmem.pdf .
Our new Permion Inc. system is a major upgrade of the VivoMind system from 2000 to 2010. You can skip the first 44 slides, which show how the VivoMind Cognitive Memory system works. Slides 45 to 64 show three applications that no LLM-based system can do today. That system could run on a laptop, but its performance scales linearly with the speed and number of CPUs available.
With the addition of LLMs, the symbolic power of Permion can do everything that VivoMind could do and do it better and faster. But it can also do the kinds of things that big iron systems do with a tiny fraction of the amount of iron. If more iron is available, it can use it.
My recommendation: Sell any Nvidia stock you (or anybody else) may own.
John
----------------------------------------
From: "Andras Kornai" <kornai(a)ilab.sztaki.hu>
John,
[without condoning Musk's practices in the larger world] I think this is missing the point, which is catching up to the state of the art from zero in less than two years. Compare this to the European Union, which is still incapable of fielding a SOTA system (Mistral, in spite of its laudable goals, is not quite there yet, still playing catch-up). Yes, they all need big iron, AI is still in the "make it work" stage. Yes, they still hallucinate (and this will not be easy to get rid of, as humans do too). But clearly xAI has organized a large enough group of bespoke engineers and given them enough hardware to do this, whereas the EU is structurally incapable of doing so, spending all its energy on wordsmithing resolution after resolution.
The EU is vastly better resourced than Musk. But it is a captive of a smooth-talking bureaucracy (I specifically blame CAIRNE, formerly known as CLAIRE).
Andras
Elon has a new version:
But it is based on the old idea of ever more computing power: 200,000 Nvidia chips and a new data center in Memphis, TN. And it still suffers from the same old problems of other GPT systems:
"However, some limitations emerged during testing. Karpathy noted that the model sometimes fabricates citations and struggles with certain types of humor and ethical reasoning tasks. These challenges are common across current AI systems and highlight the ongoing difficulties in developing truly human-like artificial intelligence."
Source: https://venturebeat.com/ai/elon-musk-just-released-an-ai-thats-smarter-than… .
(Apologies if you receive multiple copies of this call)
--------------------------------------------------
CALL FOR EXTENDED ABSTRACTS
Second Workshop on Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance, and Biases
Co-located with the 4th International Conference on Hybrid Human-Artificial Intelligence (HHAI25)
June 10, 2025 | Pisa, Italy (in-person)
https://sites.google.com/view/frictional-ai/home
--------------------------------------------------
**Workshop Overview**
Building on the success of its first edition at HHAI 2024, the full-day workshop on "Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance, and Biases" advances the exploration of over-reliance and biases in Human-AI Interaction. Central to this discussion are approaches that intentionally introduce moments of cognitive effort and reflection into AI interactions to prevent passive or automatic reliance. While conventional AI design prioritises efficiency and seamlessness, this workshop invites participants to examine AI systems that strategically slow down decision-making when necessary to mitigate automation bias, cognitive offloading, and over-trust, ultimately fostering accuracy, responsibility, and human oversight.
Such friction-in-design encompasses strategies that encourage users to reflect before acting, such as requiring justification before accepting AI recommendations, displaying confidence scores with uncertainty visualisations, or using explainability mechanisms that slow decision-making to reinforce human oversight.
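As a toy illustration of one friction-in-design mechanism described above (purely hypothetical, not tied to any system discussed at the workshop), acceptance of an AI recommendation could be gated on a user-supplied justification, with the required effort growing as the model's confidence falls:

```python
# Hypothetical sketch: require justification before accepting an AI
# recommendation, demanding a longer one when model confidence is low.
# Function and parameter names are illustrative only.

def accept_recommendation(recommendation, confidence, justification):
    """Accept only if the user justifies the decision; add friction
    (a longer justification) when the model itself is uncertain."""
    min_words = 3 if confidence >= 0.8 else 10  # more friction when uncertain
    if len(justification.split()) < min_words:
        return False, f"Please give at least {min_words} words of justification."
    return True, "Recommendation accepted."

ok, msg = accept_recommendation("Approve loan", confidence=0.55,
                                justification="matches policy")
print(ok, msg)  # low confidence + short justification -> not accepted
```

The sketch makes the trade-off explicit: the deliberate slowdown is proportional to uncertainty, so friction is applied exactly where automation bias is most costly.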
This workshop fosters interdisciplinary dialogue across AI research, cognitive science, HCI, and governance to ensure AI systems empower users rather than encourage unchecked reliance. The program will feature keynote presentations by leading experts in academia and industry, author presentations, and interactive discussions to advance the discourse on cognitively engaging and responsible AI design.
Keynote speakers include:
• Prof. Federico Cabitza (University of Milano-Bicocca, Italy) – Expert in AI-assisted decision-making and medical AI.
• Bart Van Leeuwen (Fire Services Expert, Netherlands) – Expert in human factors and situational awareness in high-risk environments.
**Topics of Interest**
We welcome contributions from researchers, practitioners, and policymakers on topics including, but not limited to:
• Design Principles for Cognitive Engagement
• Measuring and mitigating automation bias, algorithmic aversion, and cognitive offloading
• Ethical and Governance Perspectives on Friction-in-Design
• Applications, Case Studies and Experimental Findings
We encourage submissions from a variety of disciplines, including AI, HCI, cognitive science, law, philosophy, and beyond.
**Submission Details**
• Types of Submissions: Extended Abstracts (500-1,000 words)
• Format: CEURART-WS style preferred
• Submission Portal: https://cmt3.research.microsoft.com/FrictionalAIWorkshop2025
• Proceedings: All accepted papers will be published in the HHAI 2025 Workshop Proceedings on CEUR-WS.
Authors of accepted contributions are required to attend the workshop in person and register for the HHAI25 conference (single-day registration available, details TBA).
**Important Dates**
• Paper Submission Deadline: April 4, 2025 (AoE)
• Notification of Acceptance: May 2, 2025
• Camera-Ready Submission: TBD (authors will be invited to expand their contributions following workshop discussions)
• Workshop Date: June 10, 2025
**Beyond the Workshop**
Our aim is to foster a research network. Authors from all editions who wish to participate will be invited to future initiatives, including knowledge exchanges, collaborative publications, special journal issues, and online lecture series.
**Organising & Programme Committee**
• Chiara Natali (University of Milano-Bicocca, Italy & IDSIA, SUPSI, Switzerland)
• Mohammad Naiseh (Bournemouth University, UK)
• Brett Frischmann (Villanova University, USA)
• Programme Committee: full list available at https://sites.google.com/view/frictional-ai/home
For more information, visit our website: https://sites.google.com/view/frictional-ai/home
Contact: chiara [dot] natali [at] unimib.it