Alex,

No, that third method is NOT what I was saying.   

ALTHOUGH their third method (below) may use precise methods, which could include ontologies and databases as input, their FINAL process uses LLM-based methods to combine the information.  (See Figure 4 below, which I copied from their publication.)

When absolute precision is required, the final reasoning process MUST be absolutely precise.   That means precise methods of logic, mathematics, and computer science must be the final step.  Probabilistic methods CANNOT guarantee precision.

Our Permion.ai company does use LLM-based methods for many purposes.  But when absolute precision is necessary, we use mathematics and mathematical logic (i.e. FOL, Common Logic, and metalanguage extensions).

Wolfram also uses LLMs for communication with humans in English, but ALL computation is done by mathematical methods, which include mathematical (formal) logic.  Kingsley has also added LLM methods for communication in English.  But his system uses precise methods of logic and computer science for computation whenever precision is essential.

For examples of precise reasoning by our old VivoMind company (prior to 2010), see https://jfsowa.com/talks/cogmem.pdf .  Please look at the examples in the final section of those slides.  The results computed by those systems (from 2000 to 2010) were far more precise and reliable than anything computed by LLMs today.

I am not denying that systems based on LLMs may produce reliable results.  But to do so, they must use formal methods of mathematics, logic, and computer science at the final stage of reasoning, evaluation, and testing.
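To make that pattern concrete, here is a minimal sketch in Python.  It is not our actual software, and ask_llm is a hypothetical stand-in for any LLM call.  The point is the division of labor: the LLM is only allowed to propose an answer, and a deterministic checker either verifies it exactly or discards it.

    def ask_llm(question: str) -> str:
        """Hypothetical stand-in for an LLM call; returns a candidate answer as text."""
        return "42"  # placeholder for whatever the model proposes

    def verify_sum(a: int, b: int, candidate: str) -> bool:
        """Exact, deterministic check: accept the candidate only if it equals a + b."""
        try:
            return int(candidate) == a + b
        except ValueError:
            return False

    def reliable_sum(a: int, b: int) -> int:
        candidate = ask_llm(f"What is {a} + {b}?")
        if verify_sum(a, b, candidate):
            return int(candidate)   # accepted: verified by exact arithmetic
        return a + b                # rejected: fall back to exact computation

    print(reliable_sum(19, 23))     # prints 42 whether or not the LLM was right

The guarantee comes from the checker, not from the model.  The same division of labor applies when the checker is a theorem prover, a type checker, or a database query instead of simple arithmetic.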

John
 


From: "Alex Shkotin" <alex.shkotin@gmail.com>
Sent: 6/7/24 3:53 AM

John,


Please! And briefly.


If I want a very reliable LLM, I have to train it myself.


JFS: "That article is interesting.  But without an independent method of testing and verification, Figure 4 is DIAMETRICALLY OPPOSED to the methods we have been discussing and recommending in Ontolog Forum for the past 20 or more years."

But this green box is all about your point.


One interesting point from a talk on the Internet is that huge language models (from ChatGPT up to now) already use ALL of the world's available knowledge, and it is still not enough to make them good. But we do not have any more to give them🙂


Alex


Thu, Jun 6, 2024 at 22:13, John F Sowa <sowa@bestweb.net>:
Alex,

Thanks for the reference to that article.   But the trends it discusses (from Dec 2023) are based on the assumption that all reasoning is performed by LLM-based methods.   It assumes that any additional knowledge is somehow integrated with or added to data stored in LLMs.  Figure 4 from that article illustrates the methods the authors discuss:

Note that the results they produce come from LLMs that have been modified by adding something new.   That article is interesting.  But without an independent method of testing and verification, Figure 4 is DIAMETRICALLY OPPOSED to the methods we have been discussing and recommending in Ontolog Forum for the past 20 or more years.