Alex,

Re Blob:  Generalized existential graphs can contain anything in any format inside any area.  There is no limit in theory (for proving theorems or analyzing options), but there are limits imposed by any implementation (digital, analog, or neural) that uses whatever technology is available.  A blob is just one content type among an open-ended variety.
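As a rough illustration (just a sketch in Python, not part of any actual GEG implementation -- the names and types below are only hypothetical), an area of a generalized existential graph could be modeled as a container whose contents may be symbols, blobs, or nested areas:

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Blob:
        media_type: str   # e.g. "image/png", "audio/wav" -- any format at all
        data: bytes       # raw binary content; the graph imposes no interpretation

    @dataclass
    class Symbol:
        name: str         # an ordinary symbolic node (name, relation, concept, ...)

    @dataclass
    class Area:
        # An area may contain symbols, blobs, or further nested areas without limit
        # in theory; any practical limit comes from the implementation, not the notation.
        contents: List[Union["Area", Symbol, Blob]] = field(default_factory=list)

    # Example: an area holding a symbolic node and an image blob side by side.
    area = Area(contents=[Symbol("Cat"), Blob("image/png", b"\x89PNG...")])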

Last week, I sent an excerpt from the article I'm writing.  It was called Excerpts234.pdf, dated Nov 17.

I am now sending a larger version, Excerpt234.pdf, attached below.  The new version includes larger excerpts, and I include a list of references with URLs on the last page.  I have referred to some of them before, and I may mention others in my notes.  So those references may explain (or confuse) many of the issues we have been discussing.

In particular, the oldest reference is Chapter 7 (cs7.pdf), Limits of Conceptualization.  It discusses issues about forming concepts, which are a prerequisite for any kind of language and for many kinds of diagrams.  If I were writing a new version today, I would add many more items, but even that chapter has quite a few complexities to consider.

One reviewer, who wrote a favorable review of the book, said that he was surprised that Chapter 7 seems to refute everything in the previous chapters.  But that is not what I meant.  cs7.pdf summarizes the many topics that could not be handled by the AI methods of 1984.  And in fact, they still cannot be handled by the latest AI methods today.  LLMs can't even begin to handle them.  But I'm adding more to Section 7 of my current article to explain why.

Another interesting article, from 2007, is Language games, a foundation for semantics and ontology.  It goes into many of the limitations of language of any kind, and it discusses some of the ways of getting around the problems.  Those are the kinds of methods that require technologies other than LLMs.  You can't solve problems created by language with systems that are based on (or limited by) language.

Another example is Two paradigms are better than one, and multiple paradigms are even better (by Majumdar & Sowa).  The basic idea is that humans have an open-ended variety of methods of thinking and reasoning.  The technologies that Arun and I (and others) have been developing use an open-ended variety of methods -- the more the better.  Systems based on a single technology, such as LLMs, cannot have the open-ended flexibility of human intelligence.

That is the basis for all the criticisms I have been making about LLMs.  I am not saying that LLMs are bad; I'm just discussing their limitation to a single paradigm.  That paradigm is very powerful for what it does.  But by itself, it cannot begin to compete with human intelligence.

Other references in that list deal with related issues.  Altogether, they show a huge range of issues that require methods other than LLMs.  The 60+ years of AI and computer science are not obsolete.  Those methods can do many kinds of operations that current LLMs (combined with artificial NNs) cannot begin to do.

John
 


From: "Alex Shkotin" <alex.shkotin@gmail.com>

John,


Thank you. This is the usual way to think about reality: complex, chaotic, infinite, etc.


And let me add that keeping any perception-like information (pictures, movies, sounds, etc.) in a computer is known as a blob [1].

So my preliminary supposition is that keeping GEGs in the computer is not a problem; the processing is the problem.

To put it as simply as a programmer would, let me propose just adding a blob data type to CL.


For me it is wrong to read notes about theory. I need to read the theory.


By the way, in reality it is interesting to follow what kind of perception-like information robots operate with.


Alex


[1] https://en.wikipedia.org/wiki/Binary_large_object