Michael,

The word 'understand' is a vague word with no formal definition of any kind.  In any specific example, it's essential to replace it with a statement of the operations that have been performed.

MDB:  ChatGPT ... can understand Turtle but it seems to understand natural language definitions better.

Short summary:  LLMs process patterns of symbols.  Their most reliable applications map strings of symbols (in natural or artificial languages) to strings of symbols in other languages.  Those mappings are not perfect, but they are the most reliable because they do the least amount of processing.  But they do nothing that remotely resembles what humans do when they "understand" the source and target languages.

The more steps in the processing, the more unreliable the results.  Translation is the most reliable because the source and target patterns are very closely related.  LLMs can also be used to find information when a pattern in a question has a more distant relation to a pattern in the material being searched.

That kind of search is more reliable the closer the search pattern is to the pattern that is found.  But LLMs don't do any evaluation.  If a search pattern has a closer match to bad data than good data, the bad data will be retrieved.
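
The retrieval failure described above can be illustrated with a toy similarity search.  This is a minimal sketch, not any real LLM's internals; the passages and vectors below are invented for illustration.  The point is only that the closest match wins, with no evaluation of whether the matched data is good or bad.

```python
# Toy sketch of similarity-based retrieval: it returns whichever stored
# passage is closest to the query vector, with no notion of truth.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: the "bad" passage happens to sit closer
# to the query than the "good" one.
store = {
    "good but distant fact": [1.0, 0.0, 0.2],
    "bad but close-sounding claim": [0.9, 0.4, 0.1],
}
query = [0.95, 0.35, 0.1]

best = max(store, key=lambda k: cosine(query, store[k]))
print(best)  # the closer match is retrieved, regardless of quality
```

No amount of tuning the similarity function changes this: evaluation of the retrieved content has to come from somewhere outside the matching step.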

The most spectacular, but also the most unreliable, applications of LLMs search for a pattern that does some kind of transformation and then apply that transformation to some source data to produce target data.  People often call these transformations "reasoning".  But that kind of reasoning should be called "guessing", "hypothesis", or "abduction".

Humans who understand what LLMs do can often find them very useful, because they are sufficiently knowledgeable about the subject that (1) they recognize bad guesses and ignore them; and (2) they do further tests and checks to evaluate the answers before using them.

There is much more to say about the details.  But never, ever use the words 'reasoning' or 'understanding' for what LLMs do.  However, it may be permissible to use the word 'reasoning' for a hybrid system that uses symbolic methods for deduction and evaluation of the results that LLMs find or generate.
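
That hybrid pattern, an unreliable generator whose output is filtered by symbolic deduction and evaluation, can be sketched in miniature.  Everything here is a hypothetical stand-in: the "generator" is just a function that proposes some right and some wrong candidates, and the checker uses exact arithmetic in place of a real symbolic reasoner.

```python
# Toy sketch of the hybrid pattern: an unreliable generator proposes
# candidate answers (a stand-in for an LLM), and a symbolic checker
# deterministically accepts or rejects each one.
def guess_factors(n):
    """Stand-in for an LLM: propose candidate factor pairs, some wrong."""
    return [(2, n // 2), (3, n // 3), (5, n // 5)]

def check(n, pair):
    """Symbolic evaluation: exact arithmetic, no guessing."""
    a, b = pair
    return a * b == n

def hybrid_factor(n):
    """Keep only the candidates that survive symbolic checking."""
    return [p for p in guess_factors(n) if check(n, p)]

print(hybrid_factor(12))  # -> [(2, 6), (3, 4)]
```

The guesses may be wildly wrong; the overall system is only as trustworthy as the checker that sits between the guesses and the user.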

And most of all, it's best to reject, discard, or ignore any publication that claims LLMs are approaching a human level of understanding or reasoning.  I have never seen any article about artificial general intelligence (AGI) that has any insight whatsoever into human intelligence.

John
 
PS:  Penrose wrote some excellent books on physics.  When he wanders outside his area of expertise, his ideas may be interesting -- or not.

From: "Michael DeBellis" <mdebellissf@gmail.com>
Sent: 9/26/24 11:41 AM

As we know, there is a large and subtle discussion around Penrose's thesis. I am not in it. I am sure that there should be a forum for this topic. And your question
"Why do we have to accept the "indisputable validity" of these statements that lie outside the scope of P?"
is for RP, not for me. Sorry.

Alex, no need to apologize. Although I'm starting to wonder how subtle the discussion really is. I'm starting to suspect this is a case of the emperor having no clothes (ironic, given the title of one of Penrose's books). Specifically, 95% of the people who read Penrose's book don't have the capability to understand Gödel at all; of the 5% who can at least somewhat grasp Gödel (most of us), even fewer are real experts; and the real experts (like Paolo Mancosu, the guy at Berkeley who taught a class I audited a long time ago on Gödel, Turing, etc.) aren't interested enough to point out the obvious errors.

For example, D. Hilbert wrote the axiomatic theory of Euclid's geometry. Do we have a formalization? No

I don't quite understand that. Isn't an "axiomatic theory" a formalization? 

But our robots are waiting for them.

One of the ironies of this whole discussion, IMO, is that in some circumstances (LLMs) it is now easier to communicate with software agents using natural language than formal language. I still need to do a lot more work on this, but so far that is what I'm finding in my work with ChatGPT: it says it can understand Turtle, but it seems to understand natural language definitions better. One of the things I plan to work on is a simple generator that produces basic NL descriptions of classes, properties, and axioms. It shouldn't be hard. If something like this exists, I would appreciate a pointer.
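
A generator of that kind could start as little more than templates keyed by predicate.  The sketch below is a hypothetical minimal version: it assumes the Turtle has already been parsed into (subject, predicate, object) tuples (e.g. with rdflib), and the templates and example triples are invented for illustration.

```python
# Minimal sketch of an ontology-to-NL verbalizer: one English template
# per recognized predicate; unrecognized triples are skipped.
TEMPLATES = {
    "rdfs:subClassOf": "{s} is a kind of {o}.",
    "rdfs:domain": "The property {s} applies to instances of {o}.",
    "rdfs:range": "The value of {s} must be a {o}.",
    "owl:disjointWith": "Nothing can be both a {s} and a {o}.",
}

def verbalize(triples):
    """Render each recognized (subject, predicate, object) triple
    as an English sentence."""
    sentences = []
    for s, p, o in triples:
        template = TEMPLATES.get(p)
        if template:
            sentences.append(template.format(s=s, o=o))
    return sentences

triples = [
    ("Dog", "rdfs:subClassOf", "Mammal"),
    ("hasOwner", "rdfs:domain", "Pet"),
    ("hasOwner", "rdfs:range", "Person"),
]
for line in verbalize(triples):
    print(line)
```

Real axioms (restrictions, unions, cardinalities) would need recursive handling rather than flat templates, but for plain class and property declarations a template table like this goes a long way.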

How many theories outside math are formalized?

One of the philosophers in the Vienna Circle, I think it was Carnap, thought that we SHOULD formalize other disciplines of science and attempted to do that for physics. I've always thought one of the reasons he failed was that creating such a complex model required some kind of tool. For a while I even thought it might be possible to do with OWL. I actually tried, but it was soon apparent that 1) I don't know physics well enough, and 2) even if I did (here I'm in violent agreement with John Sowa), OWL wasn't powerful enough for this kind of model.

Michael

On Wed, Sep 25, 2024 at 1:11 AM Alex Shkotin <alex.shkotin@gmail.com> wrote:

Michael, 


As we know, there is a large and subtle discussion around Penrose's thesis. I am not in it. I am sure that there should be a forum for this topic. And your question

"Why do we have to accept the "indisputable validity" of these statements that lie outside the scope of P?"

is for RP, not for me. Sorry.


And any discussion outside some theory (and it should definitely point to a concrete theory) is just mind-to-mind games.

In linguistics there are a lot of competing theories. For example, D. Everett has one theory and N. Chomsky has another.

Outside the scope of any theory, we are just doing mental gymnastics. Why not!


But the challenge is to formalize one or another existing theory 🎯


For example, D. Hilbert wrote the axiomatic theory of Euclid's geometry. Do we have a formalization? No 😂


How many theories outside math are formalized? 0.


But our robots are waiting for them.


By the way, the topic of truth values is one of the subtler ones in mathematical logic 👍


Everyone who has created a formal ontology has formalized some theoretical knowledge. From what theory? Where is that theory expressed? How can the formalization be justified by that theory?

We can always verbalize a formalization and then must find a justification in that particular theory, or ask the experts.


Theory first, robots second 🏋️


Alex