Alex,

The issues you mention happen to be topics that are included in that article I'm writing.  I'll add some excerpts from other sections to Section6.pdf and send the update to these lists. 

Alex> pictures that are recorded in our memory and are more or less accessible depending on access to visual memory.

Certainly.  And not just visual memory: memory of all perceptions, internal and external, is fundamental to what Peirce called the phaneron.  All those perceptions are continuous, while language consists of a string of discrete words that represent concepts people consider significant.  Diagrams constructed as patterns of concepts and relations serve as the intermediate stage between continuous perceptions and discrete strings of words.

The mappings go both ways:  Perceptions -> images -> mental diagrams -> languages (spoken, signed, and artificial).  And languages -> mental diagrams -> images -> actions in and on the continuous world (or some local part of it).  The mental diagrams (and representations in one, two, three, or more dimensions) are an essential stage in those mappings.

C. S. Peirce recognized the importance of diagrams, and he had plans to extend them to "stereoscopic moving images".  At the language end, nodes of the diagrams can be mapped to and from discrete words or concepts.  At the image end, the nodes of diagrams can be moved and mapped to the parts of an image, either static or dynamic, which they represent.

Alex> A broad interpretation of the term diagram is possible. This is somewhat reminiscent of systems engineering. Consider "system thinking" vs "diagrammatic thinking". 

Of course.  The only change I would make is to replace "vs" with "and".  Systems thinking is, and must be, carried out in terms of diagrams that relate one-dimensional specifications (in words or other kinds of symbols) to the three-dimensional moving systems that engineers design and build.

Alex> It is important that a mind is able to store and operate with visual images - this is cooler than diagrams.

I'm not sure about the temperature.  But human memory (and probably the mental imagery of other animals) can include imagery from perception as well as imagery from imagination.  (In another note, I'll tell you the story about how Yojo the cat dreamt that there were monsters under the bed.)

Alex> LLM for me is an engineering invention around which there is a lot of noise, because it unexpectedly turned out to be capable of simulating many mental activities. 

Yes.  I'm glad that you used the critical word "many".  The crucial addition is "but not all."   I believe that LLMs are valuable for what they do.  But as discrete patterns that support a limited set of operations, they are limited in what they can do. 

Alex> Interesting topics include visual thinking and movie thinking.

Yes.  That's what Peirce wrote in 1911, when he mentioned "stereoscopic moving images."  In December of that year, he introduced an extension of his existential graphs called Delta graphs.   Unfortunately, he had an accident before he finished writing his MS about them.  But what he did write seems to be along the lines we have been discussing.

John
 


From: "Alex Shkotin" <alex.shkotin@gmail.com>

John,


There is an important difference between a diagram and a visual representation (pictures that are recorded in our memory and are more or less accessible depending on access to visual memory): 


you need to be able to read diagrams.


A broad interpretation of the term diagram is possible. This is somewhat reminiscent of systems engineering. Consider "system thinking" vs "diagrammatic thinking".

It is important that a mind is able to store and operate with visual images - this is cooler than diagrams.


LLM for me is an engineering invention around which there is a lot of noise, because it unexpectedly turned out to be capable of simulating many mental activities.

I'm slowly learning how LLM works. There are a lot of surprises there.

As far as I know, there are GPTs that can build diagrams and even accept them as input. After all, the first ANN layer is more likely intended for an image than for text.


Interesting topics include visual thinking and movie thinking.


Alex


Tue, Sep 26, 2023 at 23:35, John F Sowa <sowa@bestweb.net>:

Alex,

The only relevant item in that reference is a publication that is cited before the paywall:  https://arxiv.org/pdf/2309.06979.pdf

What they prove is that you can train a system of LLMs to simulate a Turing machine.  But that proves nothing.  Almost every AI system designed in the past 60 years can be "trained" to simulate a Turing machine.
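To make that claim concrete, here is what "simulating a Turing machine" amounts to: a table-driven read-write-move loop.  The sketch below is my own hypothetical illustration (not code from the cited paper); the transition table increments a binary number, and any system that can emulate such a table-driven loop qualifies as Turing-complete.

```python
# A minimal one-tape Turing machine simulator (hypothetical illustration).
# The transition table maps (state, symbol) -> (write, move, next_state).

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Increment a binary number: scan to the rightmost digit, then carry left.
INC = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "R", "halt"),    # 0 + carry -> 1, done
    ("carry", "_"): ("1", "R", "halt"),    # overflow: write a new leading 1
}

print(run_tm("1011", INC))   # 1011 + 1 = 1100
```

The point is how little machinery this requires: a lookup table and a loop.  Showing that LLMs can emulate it says nothing about what kind of thinking they do.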

Every LLM that is trained on natural language data is limited to the kind of "thinking" that is done in natural languages.  As I pointed out in Section6.pdf (and many other publications), NL thinking is subject to all the ambiguities and limitations of NL speaking.  In human communication, NLs must be supplemented by context, shared background knowledge, and gestures that indicate or point to non-linguistic information.

The great leap of science by the Egyptians, Stone-hengers, Babylonians, Chinese, Indians, Greeks, Mayans, etc., was to increase the precision and accuracy of their thinking by going beyond what can be stated in ordinary languages.  And guess what their magic happens to be?   It's DIAGRAMS!!!!

Translating thoughts from diagrams to words is a great leap in communication.  But it cannot replace the precision and generality of the original thinking expressed in the original diagrams.

As I said, you cannot design the great architectures of ancient times, the complex machinery of today, or any of the great scientific innovations of the past 500 years without geometrical diagrams that are far more complex than anything you can state in humanly readable natural language.

I admit that it is possible to translate any geometrical design or any bit pattern in a digital computer into a specification that uses the words and syntax of a natural language.  But what you get is an immense amount of verbiage that no human could read and understand.
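The verbiage explosion is easy to demonstrate.  The sketch below (my own hypothetical illustration, not any standard tool) spells out a bit pattern in grammatical English; the word count grows linearly with the number of bits, so even a few kilobytes of data would yield tens of thousands of sentences that no human would read.

```python
# Hypothetical illustration: translate a bit pattern into grammatical
# English.  Every bit is faithfully described, and the result is
# unreadable verbiage whose length grows linearly with the pattern.

def describe_bits(bits):
    """Return an English 'specification' of a bit string."""
    sentences = [
        f"The bit at position {i} is {'one' if b == '1' else 'zero'}."
        for i, b in enumerate(bits)
    ]
    return " ".join(sentences)

pattern = "10110010"                      # a single byte
text = describe_bits(pattern)
print(text)
print(f"{len(pattern)} bits -> {len(text.split())} words")
```

Eight bits already cost 56 words; the translation is faithful, but nothing about it is readable or thinkable in the way the original pattern (or a diagram of it) is.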

That is the most important message that we can get across in the forthcoming mini-summit.   LLMs trained on NL input cannot go beyond NL thinking, and they cannot do any thinking that can go beyond thoughts expressible in NLs.   To test that statement, show somebody (anybody you know) a picture, have them describe it, and have somebody else draw or explain what they heard, and have a fourth person compare the original to the explanation.  (By the way, my previous sentence would be much clearer if I had included a drawing.)

John