Mike,
I agree with your comments below. As I keep repeating, LLMs are an extremely valuable ADDITION to the AI toolkit. They have a wide variety of applications, especially for translating languages, both natural and artificial. But they are an addition, not a replacement. Since "ontology" is the focus of this forum, I would emphasize the role of ontology in evaluating, testing, revising, and enhancing the output generated by LLMs.
For reasoning, LLMs are an excellent method for abduction (guessing). But by themselves, they cannot do deduction, testing, and evaluation. They can find and apply certain patterns of deduction, and if their source data is limited to a single set of consistent statements, the results are usually correct.
But "consistent" and "usually" are problematical. That is why we need methods that control how the results LLMs generate are tested, evaluated, and used. Kingsley does that. Wolfram does that. Our Permion.ai company does that. In fact, you do that when you work with LLM-based software and make your own decisions about what to use or ignore.
There are many more options and combinations to explore. But it's important to remember that somebody or something must test and evaluate what to do with the LLM output. GOFAI (Good Old-Fashioned AI) is not obsolete.
John
________________________________________
From: "Mike Bergman" <mike@mkbergman.com>
Hi All,
In the interest of fairness and to provide an alternative viewpoint, I prompted ChatGPT 4o (as of today) with the inverse question. I am not personally endorsing the practice, and I further believe that any LLM used to support an academic (or other) manuscript should be disclosed, along with how it was used, even if the publisher allows it.
Best, Mike