Pascal,

I just read your paper (cited below).  I agree that LLM technology is good for finding important and valuable information.  But as you know, there are serious issues in evaluating that information to avoid irrelevant, erroneous, or even hallucinated data.  I didn't see much attention devoted to evaluation and testing.

As I often mention, our old VivoMind company was doing high-speed knowledge extraction, analysis, evaluation, and processing on large volumes of data over 20 years ago.  For a description of that system with some examples of large applications, see https://jfsowa.com/talks/cogmem.pdf .  The systems described there are just a small sample of the applications, since our customers did not want their data or methods publicized.

I also noticed that you are using OWL for your ontology.  We used a high-speed version of Prolog, which is much richer, more powerful, and faster than OWL.  OWL implements only a tiny subset of the logic that Tim Berners-Lee had proposed for the Semantic Web.
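
To illustrate the difference, here is a toy example (not the VivoMind system, whose details are not public): a Horn rule whose body joins several relations on shared variables is routine in Prolog, but such non-tree-shaped patterns cannot be expressed in OWL, even with the property chains of OWL 2.

    % Toy example only.  A conflict-of-interest rule that joins
    % four atoms on shared variables -- a pattern that lies
    % outside OWL 2 DL, but is ordinary Prolog.

    % Sample facts, as might be extracted from documents.
    works_for(alice, acme).
    works_for(bob, acme).
    reviews(alice, proposal1).
    authored(bob, proposal1).

    % A reviewer must not work for the same organization as an
    % author of the document under review.
    conflict_of_interest(Reviewer, Doc) :-
        reviews(Reviewer, Doc),
        authored(Author, Doc),
        works_for(Reviewer, Org),
        works_for(Author, Org).

    % ?- conflict_of_interest(R, D).
    %    R = alice, D = proposal1.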

Some of our customers were among the sponsors of the IKRIS project, funded from 2004 to 2006 to support a much larger and more powerful version of what Tim BL had proposed.  For an overview of IKRIS with links to some of the original publications, see https://jfsowa.com/ikl .

The IKL technology does not replace LLMs, but it is valuable for evaluating the results they generate: detecting errors and avoiding irrelevant, erroneous, or even hallucinated data.  When processing high volumes of data at high speed, human checking is not possible.  High-quality computer checking is necessary to eliminate 99% or more of the bad or even dangerous data.
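
As a toy sketch of the kind of computer checking I mean (the predicates are invented for illustration; this is not IKL or any proprietary system), logic rules can triage each assertion an LLM extracts: accept what the background knowledge confirms, reject what it contradicts, and flag everything else for a human.

    % Hypothetical sketch; predicate names are invented.
    % Background knowledge and integrity constraints.
    city(berlin).  city(paris).
    country(germany).  country(france).
    located_in(berlin, germany).

    % An extracted fact is contradicted if a constraint fires.
    contradicted(located_in(X, Y)) :-
        located_in(X, Z), Z \= Y.          % a city is in one country
    contradicted(city(X)) :- country(X).   % nothing is both

    % Triage: accept, reject, or defer to a human.
    check(Fact, accept) :- call(Fact), !.            % already known
    check(Fact, reject) :- contradicted(Fact), !.    % provably wrong
    check(_,    human_review).                       % rules can't decide

    % ?- check(located_in(berlin, france), V).   V = reject
    % ?- check(located_in(paris, madrid), V).    V = human_review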

Human checking would be required only for the tiny percentage of data for which the computational methods are uncertain.  For a more recent talk, see https://jfsowa.com/talks/eswc.pdf .

John

From: "'Pascal Hitzler' via ontolog-forum" <ontolog-forum@googlegroups.com>

Given the currently ongoing ISWC 2024 conference and all the discussions around the neurosymbolic topic, here is a link to our position paper (with Cogan Shimizu): https://kastle-lab.github.io/assets/publications/2024-LLMs4KGOE.pdf

The developments are really exciting!

Pascal.