Igor,
I'm glad that we agree on the value of Prolog. Prolog failed to achieve much
usage in the US largely because of a prominent AI author, who wrote a couple of
books and was considered an authority. His comment about Prolog: "We tried that
with Microplanner, and it was inefficient."
Since it's not nice to say anything bad about the dead, I won't mention his name.
But his comment was based on profound ignorance: Microplanner was a research project,
written by one person for a PhD dissertation. It did not support the full functionality
of Prolog. And it was written in LISP, which is fine for AI applications but not
efficient enough for high-performance systems.
For the IBM system that beat the world champion in Jeopardy, the program that analyzed the
English questions and answers was written in Prolog. That program was written by Michael
McCord, who was one of the four co-authors of the book Knowledge Systems and Prolog. My
"Prolog to Prolog" was another one of the four.
By the way, the IBM developers had tried to use some software based on the Semantic Web
stack. But it was too slow and too difficult to update. They brought back McCord, who
had retired from IBM a few years earlier. His Prolog implementation was faster, had more
functionality, and was easier to update.
For our VivoMind company, we used Prolog to support applications that processed
English and other natural languages. The semantic representation was based on conceptual
graphs, which are based on Peirce's existential graphs, extended (with a few minor
features) to represent the full ISO standard for Common Logic.
The users communicated with applications in English and in diagrams. See the examples in
https://jfsowa.com/talks/cogmem.pdf . The VivoMind system could analyze English (and
other NLs) as well as computer languages. For the application in legacy reengineering, it
could compare the programming-language code, the English comments in that code, the
documentation about the programs, and various memos, commentary, and publications, and
detect errors and inconsistencies among them.
Our new Permion.ai company has a more general foundation that can also support current
versions of LLMs. That enables detailed analysis, evaluation, and correction of output
generated by the LLMs. Detecting errors is very important. Correcting errors is even
better.
And by the way, John McCarthy, who originally designed LISP, finally admitted that Prolog
was better for advanced AI applications. My colleague Arun Majumdar influenced that
change of mind by showing him VivoMind applications and their implementation.
John
----------------------------------------
From: "Igor Toujilov" via ontolog-forum
<ontolog-forum(a)googlegroups.com>
John,
A typical scenario of consuming an ontology by end-users is the following:
- Download the ontology;
- Load it into a visual tool;
- Run a reasoner on it and see the results in the visual tool.
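To make the loading step of that scenario concrete, here is a minimal sketch in Python using only the standard library. It skips the download and visual tool, and simply parses a small inline OWL (RDF/XML) fragment to list the declared classes, which is the first thing any tool in this pipeline must do. The ontology fragment and all names in it are hypothetical, not from any project mentioned in this thread.

```python
# Minimal sketch of loading an OWL ontology (RDF/XML) and listing its
# classes, using only the Python standard library. The OWL fragment
# below is a hypothetical example, not a real ontology.
import xml.etree.ElementTree as ET

OWL_SAMPLE = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="http://example.org/onto#Gene"/>
  <owl:Class rdf:about="http://example.org/onto#Protein"/>
</rdf:RDF>
"""

# ElementTree uses {namespace}localname syntax for qualified names.
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
OWL = "{http://www.w3.org/2002/07/owl#}"

def list_classes(rdf_xml):
    """Return the IRIs of all owl:Class declarations in an RDF/XML string."""
    root = ET.fromstring(rdf_xml)
    return [el.attrib[RDF + "about"] for el in root.iter(OWL + "Class")]

print(list_classes(OWL_SAMPLE))
```

A real end-user tool would fetch the OWL file over HTTP and hand the parsed axioms to a reasoner; this sketch only shows how little machinery the parsing step itself requires.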
SWI-Prolog is an excellent system with web-server components and OWL support. I used it in
2005 when I was working at UCL on the Cancergrid project, on ontologies in cancer
bioinformatics. That was a big international project in which I collaborated with the
National Cancer Institute (NCI), the universities of Oxford, Cambridge, and Manchester,
UCLA, and others.
I created an ontology web-server based on SWI-Prolog. It loaded ontologies, e.g. NCI
Thesaurus, from OWL files into the Prolog runtime environment and then exposed the
ontology through the web. The server accepted queries in Prolog and I suggested using it
as a production server for the project. However, after some testing, my suggestion was not
accepted. The reason for this decision: the software engineers who tested the server were
not proficient enough in Prolog to write the queries.
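To give a flavor of what such queries look like, and why they can be a hurdle for engineers unfamiliar with the paradigm, here is a hypothetical sketch in Python of pattern-matching over triples, imitating SWI-Prolog's rdf(S, P, O) goal, with None playing the role of an unbound Prolog variable. The triples and class names are illustrative, not taken from the NCI Thesaurus.

```python
# Hypothetical triple store imitating SWI-Prolog's rdf(S, P, O) queries.
# None plays the role of an unbound Prolog variable. The data below is
# an invented example, not part of any real ontology.
TRIPLES = [
    ("Melanoma", "subClassOf", "SkinNeoplasm"),
    ("SkinNeoplasm", "subClassOf", "Neoplasm"),
    ("Melanoma", "label", "Malignant melanoma"),
]

def rdf(s=None, p=None, o=None):
    """Yield every stored triple matching the pattern; None matches anything."""
    for ts, tp, to in TRIPLES:
        if s in (None, ts) and p in (None, tp) and o in (None, to):
            yield ts, tp, to

def superclasses(cls):
    """Transitively follow subClassOf links, as a recursive Prolog rule would."""
    for _, _, parent in rdf(cls, "subClassOf", None):
        yield parent
        yield from superclasses(parent)

print(list(superclasses("Melanoma")))
```

In Prolog itself the recursive rule is two lines (a base clause and a recursive clause over rdf/3), which is precisely the conciseness that makes it attractive for ontology servers but unfamiliar to engineers trained only on imperative languages.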
Igor