Since I suggested that anybody who is trying to define anything should check the definitions in a good dictionary, I decided to take my own advice. See the attached defs.htm for definitions of the words 'diagram' and 'structure' in the American Heritage Dictionary and the Merriam Webster Dictionary. In general, I have found the American Heritage definitions and etymologies very good. They are usually clearer and more precise than the definitions in other dictionaries. But it's always useful to get a second or third opinion.
An important distinction: A structure is a pattern in an entity of some kind. A diagram is a pattern that somebody draws or imagines as a representation or explanation of a pattern that somebody observed or found in some structure.
Therefore, a diagram is the kind of pattern that a human, animal, or computer would be likely to use to support reasoning or computation about a pattern of any kind.
John
Cf: Sign Relations, Triadic Relations, Relation Theory • Discussion 6
http://inquiryintoinquiry.com/2022/03/01/sign-relations-triadic-relations-r…
Re: FB | Charles S. Peirce Society
https://www.facebook.com/groups/peircesociety/posts/2551077815028195/
::: Alain Létourneau
https://www.facebook.com/groups/peircesociety/posts/2551077815028195?commen…
All,
Alain Létourneau asks if I have any thoughts
on Peirce's Rhetoric. I venture the following.
Classically speaking, rhetoric (as distinguished from dialectic)
treats forms of argument which “consider the audience” — which
take the condition of the addressee into account. But that is
just what Peirce's semiotic does in extending our theories of
signs from dyadic to triadic sign relations.
We often begin our approach to Peirce's semiotics by saying he puts the
interpreter back into the relation of signs to their objects. But even
Aristotle had already done that much. Peirce's innovation was to apply
the pragmatic maxim, clarifying the characters of interpreters in terms
of their effects — their interpretants — in the flow of semiosis.
Some reading —
Awbrey, J.L., and Awbrey, S.M. (1995),
“Interpretation as Action • The Risk of Inquiry”,
Inquiry : Critical Thinking Across the Disciplines 15(1), 40–52.
https://www.academia.edu/57812482/Interpretation_as_Action_The_Risk_of_Inqu…
Regards,
Jon
Logical Graphs • First Impressions 1
• https://inquiryintoinquiry.com/2023/08/24/logical-graphs-first-impressions/
Introduction • Moving Pictures of Thought —
A “logical graph” is a graph-theoretic structure in one
of the systems of graphical syntax Charles Sanders Peirce
developed for logic.
In numerous papers on “qualitative logic”, “entitative graphs”,
and “existential graphs”, Peirce developed several versions of
a graphical formalism, or a graph-theoretic formal language,
designed to be interpreted for logic.
In the century since Peirce initiated this line of development,
a variety of formal systems have branched out from what is abstractly
the same formal base of graph-theoretic structures. This article
examines the common basis of these formal systems from a bird's eye
view, focusing on the aspects of form shared by the entire family of
algebras, calculi, or languages, however they happen to be viewed in
a given application.
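For readers who want something executable, here is a minimal Python sketch of the existential-graph conventions described above: the blank sheet is true, juxtaposition of graphs is conjunction, and a cut (oval) is negation. The representation and function names are illustrative choices, not a standard notation.

def cut(*graphs):
    """A cut (oval) enclosing zero or more juxtaposed subgraphs."""
    return ("cut", graphs)

def evaluate(graphs, env):
    """Evaluate a juxtaposition of graphs as a conjunction."""
    result = True
    for g in graphs:
        if isinstance(g, tuple) and g[0] == "cut":
            result = result and not evaluate(g[1], env)   # cut = negation
        else:                           # a bare letter names a proposition
            result = result and env[g]
    return result

# "If p then q" is drawn as a cut around p together with a cut around q:
#   (cut p (cut q))  ==  not(p and not q)  ==  p implies q
implication = [cut("p", cut("q"))]
print(evaluate(implication, {"p": True,  "q": False}))   # False
print(evaluate(implication, {"p": True,  "q": True}))    # True
print(evaluate(implication, {"p": False, "q": False}))   # True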
Regards,
Jon
cc: https://independent.academia.edu/JonAwbrey
cc: https://mathstodon.xyz/@Inquiry/110945139629369891
Alex,
I very strongly agree with your comment below: The diagrams are fundamental, and the words are secondary. Whenever there is any dispute -- start with the diagrams. Formalisms, such as mathematical notations, always have a more direct mapping to diagrams than to words. Euclidean geometry is the best example. But any book that uses algebraic notations can always map the algebra more clearly and precisely to a diagram than to any words in any natural language.
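To make that mapping concrete, here is a small sketch using Python's standard ast module: an algebraic expression parses directly into a tree diagram, with no natural-language gloss in between. The tree-drawing helper is my own illustration.

import ast

def draw(node, indent=""):
    """Print an abstract syntax tree as an indented tree diagram."""
    if isinstance(node, ast.Name):
        print(indent + f"Name '{node.id}'")
        return
    if isinstance(node, ast.Constant):
        print(indent + f"Constant {node.value}")
        return
    print(indent + type(node).__name__)
    for child in ast.iter_child_nodes(node):
        draw(child, indent + "    ")

draw(ast.parse("(a + b) * c", mode="eval").body)
# Output -- the expression as a diagram:
# BinOp                (the multiplication)
#     BinOp            (the addition)
#         Name 'a'
#         Add
#         Name 'b'
#     Mult
#     Name 'c'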
Re engineering diagrams: Anybody who can't read the engineering diagram can't understand a precise explanation written in their native language. Any simple explanation that they can understand is guaranteed to be an oversimplification. But if the engineering diagram is carefully explained to them, then they can and do understand the subject.
I know that point very well -- because I've done it. I also know that people who claim they understand a simple explanation but cannot understand the diagram don't know what they're talking about. If you ask them some simple questions about how the thing works, their answers are hopelessly confused. I know that because I've met such people.
If you doubt that point, try that exercise with people who claim that they understand the simple explanation.
The mapping to diagrams is especially important for robots. Every action by a robot has a direct mapping to and from some kind of diagram. But the explanation in a natural language is more complex, harder to read, and more prone to misreading and misunderstanding.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
Very briefly about formal definitions. The formal definition should be compared with an engineering drawing.
Everyone uses various devices, but few people can and should be able to read engineering drawings.
The construction of formal definitions is important, for example, because they can be transferred to robots.
Alex
The subject line comes from an article in the New York Times (excerpts below). Data from the James Webb telescope is raising serious difficulties with long-held assumptions about the evolution of the universe and the things in it.
This raises yet another objection to the idea of a universal formal ontology of everything. But it adds further support for the idea of an open-ended collection of specialized ontologies for any particular topic or system that anybody may be working on or with.
The overall framework of everything may be more like a dictionary or encyclopedia written by humans for humans (and also computers). Wikipedia is a good example. The editors of Wikipedia post warning notes about articles that need more or better references. But the best articles are far more reliable than anything that can be derived from LLMs -- and they have reliable citations, not the phony citations that the LLMs generate (or hallucinate).
This is one more reason for abandoning the project of creating a universal ontology of everything. Science and engineering have made excellent progress without them. The task of determining what should replace them is a very important issue for Ontolog Forum.
John
_________________________
The Story of Our Universe May Be Starting to Unravel
Sept. 2, 2023
By Adam Frank and Marcelo Gleiser
www.nytimes.com/2023/09/02/opinion/cosmology-crisis-webb-telescope.html
Not long after the James Webb Space Telescope began beaming back from outer space its stunning images of planets and nebulae last year, astronomers, though dazzled, had to admit that something was amiss. Eight months later, based in part on what the telescope has revealed, it’s beginning to look as if we may need to rethink key features of the origin and development of the universe.
According to the standard model, which is the basis for essentially all research in the field, there is a fixed and precise sequence of events that followed the Big Bang: First, the force of gravity pulled together denser regions in the cooling cosmic gas, which grew to become stars and black holes; then, the force of gravity pulled together the stars into galaxies.
The Webb data, though, revealed that some very large galaxies formed really fast, in too short a time, at least according to the standard model. This was no minor discrepancy.
It was not, unfortunately, an isolated incident. There have been other recent occasions in which the evidence behind science’s basic understanding of the universe has been found to be alarmingly inconsistent.
Take the matter of how fast the universe is expanding. This is a foundational fact in cosmological science — the so-called Hubble constant — yet scientists have not been able to settle on a number. There are two main ways to calculate it: One involves measurements of the early universe (such as the sort that the Webb is providing); the other involves measurements of nearby stars in the modern universe. Despite decades of effort, these two methods continue to yield different answers. . . .
Physicists and astronomers are starting to get the sense that something may be really wrong. It's not just that some of us believe we might have to rethink the standard model of cosmology; we might also have to change the way we think about some of the most basic features of our universe — a conceptual revolution. . . .
The standard model today holds that “normal” matter — the stuff that makes up people and planets and everything else we can see — constitutes only about 4 percent of the universe. The rest is invisible stuff called dark matter and dark energy (roughly 27 percent and 68 percent).
Cosmic inflation is an example of yet another exotic adjustment made to the standard model. Devised in 1981 to resolve paradoxes arising from an older version of the Big Bang, the theory holds that the early universe expanded exponentially fast for a fraction of a second after the Big Bang. This theory solves certain problems but creates others. Notably, according to most versions of the theory, rather than there being one universe, ours is just one universe in a multiverse — an infinite number of universes, the others of which may be forever unobservable to us not just in practice but also in principle.
Cosmology is not like other sciences. The universe is everything there is; there’s only one and we can’t look at it from the outside. You can’t put it in a box on a table and run controlled experiments on it. Because it is all-encompassing, cosmology forces scientists to tackle questions about the very environment in which science operates: the nature of time, the nature of space, the nature of lawlike regularity, the role of the observers doing the observations.
Alex, Gary, Dan B.
Before writing any detailed comments, I want to emphasize three points: (1) Major software systems survive in one form or another for 40 years or more. Few, if any, precise definitions from the early days remain unchanged for more than a tiny fraction of that time. As an example, IBM developed the first airline reservation system for American Airlines in the 1960s to run on the IBM 7094. An updated version of that became IBM's airline reservation system running on System/360. The ontology and terminology of that system became the industry-wide basis for all reservations for hotels, cars, and any kind of services that travelers might need. The ontology and choice of word definitions that IBM adopted in collaboration with American Airlines have become the universal worldwide standard. The formal definitions change with every update, but the choice of words and their translations from English to other languages do not change.
(2) The researchers and programmers working on the details of any system may understand the formal details, but the top-level managers, the great majority of the users, and the investors who supply the money will never see or understand the details of those definitions. They will interpret the terminology according to the way those words are used in everyday life. If the formal definitions diverge too far from common usage, the result will be confusion and repeated errors.
(3) Any attempt to decree an official, precise definition for all terms will guarantee that whatever system uses those terms exactly as defined will become obsolete within a few years. Please note that every product -- from a refrigerator to a programming language -- gets a new manual with new definitions of key terms for every update.
IBM used the term 'functionally stabilized' for any hardware or software system whose terminology would never change. That term was a synonym for "obsolete". IBM would continue to sell those obsolete systems to customers who could not afford to update their systems to accommodate the new products. Microsoft, for example, only recently stopped producing and delivering updates for Windows 95 (which was introduced in 1995).
Alex> Is there a chance to have one worldwide dictionary for every science and technology?
You can define it, if you like, but it is guaranteed to become obsolete with the first new discovery in science or new development in engineering. And even if you define it, 99.999% of the people in the world would never use more than a tiny percentage of the words as defined.
Alex> AI is first of all summa technologiae, each with its own glossary.
There is no universal glossary of AI. New terms are constantly being defined by people who never read or understood similar terms that had been defined and published before. AI terminology changes very rapidly because many AI people never read anything that is more than five years old.
Alex> Why is the theory of directed graphs with composition of arrows called category theory?
For historical reasons. Mathematicians, unlike AI people, cite publications of any date and make updates compatible with the original definitions.
Alex> Why did the DBMS guys call their company Oracle?
Because it answered questions, like an oracle. There are many horror stories about compatibility in DB systems, but they developed in different ways than AI, for different reasons -- mostly bad ones, such as preserving incompatibility. Preserving incompatibility was also one of the worst motives behind Windows 95. But that is another story.
Dan> In general, ML-AI terminology is a mess. Eg Labelled/unlabelled data, unsupervised/supervised learning, giving way (thankfully) to the otherwise wordy “self-supervised”. And the word “inference” is used in ways that might make some ontolog-forum readers splutter their coffee.
That's a good answer to Alex's questions.
Gary> One may leverage results from prior efforts with best practices but often we don't have the vision or time or temperament to do this.
That's a good explanation for the points by Alex and Dan.
In summary, most people who need to know something about AI technology (users and funding agencies, for example) will not know or remember the details of a formal definition. Even if they read the definition, it will be easier to understand and remember if the words are used in ways that are consistent with common usage -- as codified in common dictionaries.
An example of a bad choice is the term 'foundation model'. Both words are commonly used, but that combination does not give any hint of what the term means. But the terms 'functional pattern' and 'structural pattern' use common words that give an approximate idea of the meaning. That makes them easier to learn, easier to remember, and easier to use by everybody -- programmers, managers, funding agencies, and intelligent outsiders who want to know what is happening.
John
Alex.
I read the web page you cited. What Google calls "foundation models" I would call "mappings based on specialized ontologies". They include three kinds: (1) text to image, (2) text to code, and (3) speech to text.
I believe they are making a serious mistake by using English text in their foundation. The article I'm writing, which puts Peirce's diagrammatic reasoning at the center, is more general, flexible, and powerful. It also avoids a huge number of complex issues that differ from one natural language to another -- even worse, the words differ from one kind of application to another, even in the same language.
Thanks for citing that article. I am now finishing the final Section 7 of my article, and this method by Google gives me a clear target to shoot at. I'm actually glad to see that Google is making that mistake -- because it makes it easier to compete with them.
That diagram by Gartner puts foundation models at the top of the hype cycle. That means they are about to plunge into the trough of disillusionment. I would enjoy giving them a little push.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
I am talking about this part of Gartner's picture you gave in the attachment.
[image.png]
It was unknown to me that guys from AI technology have their own ideas for the term "foundation models" [1] (just an example).
Alex
[1] https://ai.google/discover/foundation-models/
Alex,
Your observations about existential graphs are a good starting point for several topics.
Re Jon Awbrey: I've known him for many years. He's developing a system that begins with EGs and connects with many mathematical issues. But I've been relating a much broader range of Peirce's theories to the full range of issues in the latest developments of AI and cognitive science.
Re Bourbaki: They started from a totally different direction, and they discovered a version of "squashed" existential graphs. They define variables by starting with a linear formula with existential quantifiers. Then they draw arcs above the line to connect each quantifier with the place in each function or relation where a variable would appear. Next, they choose a letter as the name of each arc and insert that name at each endpoint of the arc. Finally, they erase the arcs to get a more familiar formula.
To map their squashed EGs to Peirce's notation: (1) convert each formula to a version with just the operators for AND, NOT, and EXISTS; (2) erase all the AND operators and assume that the blank regions represent AND; (3) replace each NOT operator with a shaded region; (4) pull the squashed EGs apart to full two-dimensional graphs with shaded ovals for negation; (5) if some of the arc lines cross, move to 3D to avoid any crossing.
And voila: You now have an existential graph. The Bourbaki demonstrated that all of mathematics can be specified by EGs.
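Here is a rough Python sketch of that construction, under my own naming: in the linear formula, a bound variable is a name repeated in several places; in the graph, it is a single anonymous node with lines (here, shared object references) running to each argument place.

class Node:
    """A line of identity: an anonymous, existentially quantified thing."""

# The linear formula  "exists x: P(x) and Q(x)"  becomes a graph in which
# the two argument places share one nameless node; the letter 'x' itself
# has been erased, exactly as the arcs replace the letters above.
x = Node()
graph = [("P", x), ("Q", x)]         # juxtaposition reads as conjunction

print(graph[0][1] is graph[1][1])    # True: one line of identity, no name

# Renaming the bound variable in the linear formula (x -> y) changes the
# text but not the graph, since the graph has no name to rename.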
But please read the following article: "The Ignorance of Bourbaki" by Adrian Mathias, https://www.dpmms.cam.ac.uk/~ardm/bourbaki.pdf
Individually, the members of the Bourbaki were brilliant mathematicians, and the books they produced contain a great many important insights and mathematical results. But their goal was mistaken, and their method had some serious flaws. The article is only 12 pages long, and it is well worth reading.
And by the way, note the huge number of mathematical theories they related. That's only a finite number, but there is no limit to the number that could be developed -- which implies infinity.
Just look at Wolfram's Mathematica for the huge number of theories that have been implemented in computable forms that can be used for practical applications. Unlike LLMs, those theories are very precise, and they don't make stupid mistakes. Nobody calls them AI. They call them mathematics.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
A few more thoughts.
It is very interesting to compare your approach with this [1] project of Jon Awbrey, since you have the same root: Peirce's EGs.
By the way, even such a formalist as N. Bourbaki, in order to avoid variables bound by quantifiers, turned a formative construction into a graph. In this graph, occurrences of quantifier variables are replaced by the sign □ and are directly connected to their quantifier by an edge. This saved N. Bourbaki from writing an algorithm for binding a quantifier variable to its own quantifier.
Alex
[1] https://inquiryintoinquiry.com/2023/08/24/logical-graphs-first-impressions/
Alex,
Thanks for that example. It shows the importance of the high-speed computation performed in the human cerebellum, whose perceptions and actions are totally unconscious. I urge everyone to click on the link in your note.
There is an important reason why the human drone experts lost in the competition with the fully automated drone: the humans used a combination of high-speed cerebellar computation (as the unmanned drone does) with the much slower (and conscious) decision making in the cerebral cortex. Those conscious decisions slowed their performance.
Compare that with the high-speed performance by the gymnastic champion Simone Biles. She devoted years of conscious effort to train her cerebellum to perform the various motions automatically. Before each competition, she perfects the training for each routine she performs. In a performance that has multiple routines, she uses her cerebral cortex to check the positions and timing for each routine. Then she launches a pretrained routine that is totally under the control of the unconscious cerebellum.
All of us use the cerebellum for routine processing in walking, eating, driving a car, or typing on a keyboard. Mathematicians take advantage of that high-speed processing in the most complex kinds of math. But writing a proof uses the slower conscious processing in the cerebral cortex to check whether the high-speed calculations are correct.
Note that the processes in the cerebellum are precise for what they do. The errors can occur when the decisions for running them (made by the cerebral cortex) are not correct.
Note that none of these processes, either by the cerebrum or by the cerebellum, could be performed by the LLMs. The Large Language Models might respond to a verbal command to execute a routine by the cerebellum. But all their operations are probabilistic, and they're based on vague and often ambiguous natural language. They can't do the precise checking and testing that guarantee accuracy.
LLMs are useful. But they're just one more tool in the huge toolkit of AI technology. They do a limited range of operations very well, but they can't do the whole job.
John
----------------------------------------
From: "alex.shkotin" <alex.shkotin(a)gmail.com>
Subject: [ontolog-forum] FYI:Champion-level Drone Racing using Deep Reinforcement Learning (Nature, 2023)
https://youtu.be/fBiataDpGIo?si=bDaE1XR4dQGJXqo6
Colleagues, while we are formalizing theoretical knowledge and building structures that model reality, it is interesting to look at achievements in a field where algorithms decide everything, though they are also helped by AI.
Alex
Alex and Michael DB,
To Alex: I agree with what you wrote, but with three important qualifications: (1) Every node in a diagram represents a concept. (2) Every linear notation for mathematics is a special case of some diagram; in some cases, the linearization is a one-to-one mapping; but in other cases, it loses some of the information, or it encodes that information in a more obscure way. Euclidean geometry is the most obvious example, but other kinds of geometry are even stronger reasons for multi-dimensional diagrams. (3) The tensors that represent LLMs are special cases of diagrams with special-case operations; for full generality, they must be supplemented with more general diagrams and operations on them.
And by the way, the title of my first book, Conceptual Structures, emphasizes the point that diagrams represent structures, and every structure can be represented by a diagram. Linear notations are just one-dimensional diagrams. Mapping a multi-dimensional structure into a one-dimensional line adds a huge amount of complexity. As just one example: direct connections by lines must be replaced by special symbols called names. And those names create a huge amount of complexity when they are constantly being renamed.
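A hedged sketch of that last point: two linear formulas that differ only in their bound-variable names are one and the same diagram. De Bruijn indices, a standard nameless notation, erase the names and play the role of direct connections; the tuple format below is my own illustration.

def debruijn(term, bound=()):
    """Replace bound-variable names with positional indices."""
    kind = term[0]
    if kind == "var":                     # ("var", name)
        return ("var", bound.index(term[1]))
    if kind == "lam":                     # ("lam", name, body)
        return ("lam", debruijn(term[2], (term[1],) + bound))
    if kind == "app":                     # ("app", function, argument)
        return ("app", debruijn(term[1], bound), debruijn(term[2], bound))

# lambda x: lambda y: x(y)   versus   lambda a: lambda b: a(b)
t1 = ("lam", "x", ("lam", "y", ("app", ("var", "x"), ("var", "y"))))
t2 = ("lam", "a", ("lam", "b", ("app", ("var", "a"), ("var", "b"))))
print(debruijn(t1) == debruijn(t2))   # True: same diagram, names erased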
To Michael: Since you agree with me, I agree with you.
Re consciousness: The fact that the cerebellum has over 4 times as many neurons as the much larger cerebral cortex is important. Even more important: (1) those neurons are essential for high-speed mathematical computation and reasoning; (2) they are also essential for all complex methods of performance in music, gymnastics, art, architecture, and complex design of machinery of any kind; and (3) nothing in the cerebellum is conscious.
Just look at the fantastic gymnastics by Simone Biles. She required years of dedicated *conscious* training to learn those moves, but the details of the high-speed performance are outside of any conscious control. It would be impossible to think in words about each of those details at the speed at which they were performed. Each performance was initiated and controlled by conscious decisions, but the speed is too fast for any conscious control. She was conscious of the performance, but not of every detail computed by her cerebellum.
That is a very important distinction: the computation in the cerebellum is not conscious. And no definition of consciousness would have the slightest value for understanding what the cerebellum computes or how.
But since you mention Searle, I'm not surprised at his response about panpsychism. I remember another story about a dinner party he attended, where the guests were sitting outside while the food was being prepared. At one point, Searle jumped up and, in a loud voice that frightened the neighbors, their children, and their dogs, proclaimed a denunciation of "Derrida and the other inhabitants of Frogistan."
John
_______________________________________
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
I look forward to reading your article, as the presentation so far is more or less sketchy. Diagrams are a wonderful tool, but thinking in concepts is what science and technology, and thinking in general, rely on.
And creating, researching and using structures is also very important.
A formula is an amazing way to capture the definition of a process, for example
h = gt^2/2
where h is the height, g is the acceleration due to gravity, and t is the time of falling from the Leaning Tower of Pisa.
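A quick numeric check of that formula in Python, inverted to give the fall time t = sqrt(2h/g); the tower height used here (about 56 m) is an approximation quoted only for illustration.

from math import sqrt

g = 9.8     # acceleration due to gravity, m/s^2
h = 56.0    # approximate height of the Leaning Tower of Pisa, m

t = sqrt(2 * h / g)
print(f"t = {t:.2f} s")                    # about 3.4 s
print(f"h check = {g * t**2 / 2:.1f} m")   # recovers 56.0 m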
Alex
__________________________________________
From: "Michael DeBellis" <mdebellissf(a)gmail.com>
Subject: [ontolog-forum] Re: On the concept of consciousness
I was going to write a reply to this... actually I did anyway, but it's shorter because John Sowa already said what I was going to say. No one really has a clue, and virtually all the discussions I've ever seen on this end up going nowhere. IMO there are some questions that are amenable to scientific analysis and some (given our current knowledge) that aren't, and consciousness is one of those that currently aren't. You have extremes such as a paper I saw years ago by some leading neuroscientists that talked in depth about consciousness and defined it as the opposite of being asleep or in a coma. And on the other extreme, people like Christof Koch, who believe in panpsychism, the view that everything in the universe is conscious.
Many years ago I sat in on a Philosophy of Mind lecture series led by John Searle at Berkeley. One of my favorite classes was a guest lecture by Koch. Searle started out by lauding him as one of the most brilliant minds ever (and at the start of his talk I could see why: Koch really knows his neuroscience). Then Koch started getting into his panpsychism philosophy, and you could just see the color draining from Searle's face, and Searle finally said something like "Wait, you are serious?! I thought you were talking about panpsychism as an example of a clearly wrong theory!" And it got more entertaining from there.
I don't agree with Patricia Churchland much, but there is a book called "This Idea Must Die!" where she talks about the Neural Correlate of Consciousness (NCC) as an idea that must die. Her reasoning was that there are so many concepts we don't yet have coherent, falsifiable models of, such as the language faculty and episodic memory, and that whatever consciousness is, we probably all agree it is closely tied to memory and language; so until we at least have decent theories of such more basic (but still barely understood) concepts, it is pointless to postulate theories about consciousness. I mean, it can be fun, but not something I expect to see any serious science on.
Michael