Lars, Doug, List,
There is a huge difference between a reasoning system and a decision system. Given a set of axioms and raw data, a reasoning system derives conclusions. It does not make value judgments about any of them, and it does not take any actions based on them.
But every living system from bacteria on up must make decisions about which of many sources of information must be considered in taking action. I agreed with Mihai Nadin that the sources of knowledge are distributed among all components of the brain, but I should have added "brain and body". Every part of the body generates signals of pain and pleasure of varying strength. And the most brilliant or pleasurable thoughts must be deferred when a pain signal arrives from a finger touching a hot stove.
In any animal, there are an immense number of signals coming from every part of the brain and body. There must be something that decides which one(s) to consider immediately and which ones may be deferred.
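The arbitration John describes can be sketched, purely as a toy illustration (not a model of any neural mechanism; the signal names and priority numbers below are invented for the example), as a priority queue:

```python
import heapq

# A toy sketch of the arbitration problem only -- not a claim about how
# any brain works. Signal names and priority numbers are invented.

def arbitrate(signals):
    """Pick the most urgent signal (lowest priority number) to act on now;
    everything else is deferred."""
    heap = list(signals)          # copy so the caller's list survives
    heapq.heapify(heap)
    _, name = heapq.heappop(heap)
    return name

signals = [(5, "finish reading a book"),
           (3, "eat lunch"),
           (0, "pain: hand on hot stove")]
# The pain signal preempts everything else, no matter how pleasant
# the deferred alternatives are.
```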
The central executive is not my idea. But I have done a fair amount of study of all the branches of the cognitive sciences, and I have learned important ideas from comparing the different ways they deal with common problems.
I'm not asking anybody to believe me. But I am asking everybody to consider the wide range of insights that come from the different branches of all six: Philosophy, psychology, linguistics, artificial intelligence, neuroscience, and anthropology. Please look at the references. And if you don't like the references I cited, look for more.
As for the central executive, please let me know of any other mechanism that can decide whether it's better to (a) read a book, (b) take a nap, (c) eat lunch, or (d) duck and cover.
John
----------------------------------------
From: "Dr. Lars Ludwig" <mail(a)lars-ludwig.com>
Doug, John,
I am just reading this while catching up: I think it is noteworthy that in modern (autopoietic) systems theory (Humberto Maturana, esp. Niklas Luhmann), any systems (not only societal ones) basically operate and evolve without a central executive. Systemic intelligence is thus independent of any central control instance, which is sometimes understood as a weakness of modern societies. The memory system, as the central conscious reproductive (intelligence) system of humans, is also not centrally controlled in any meaningful way I could think of. (I have written about and explained the functioning of the memory system and its central importance for any technology in my thesis on "extended artificial memory", which is basically a general autopoietic theory of all memory sub-systems.) Thus, theoretically, I don't yet get John's point. I guess these are relics of pre-systemic sequential/hierarchical operational thinking (that is, classic information science) not yet touched by the paradoxical problem of closed cycles of (control /) system operations.
Lars
John F Sowa <sowa(a)bestweb.net> hat am 11.04.2024 02:44 CEST geschrieben:
Doug,
The central executive controls all the processes that are controllable by the human ego. But the term 'executive' should be considered the equivalent of what the chief executive officer (CEO) of a business does in managing a corporation. There are intermediaries at various points.
Baddeley & Hitch wrote their initial article in 1974. They wrote it in response to George Miller's "The Magical Number Seven, Plus or Minus Two." They realized that there was much more to short-term memory than just words and phonemes. They called Miller's storage "the phonological loop", and they added a visuo-spatial scratchpad for short-term imagery and feelings. And they continued to revise and extend their hypotheses for another 20 or 30 years. Other neuroscientists, who are specialists in different aspects, have been working on related issues.
The idea is an important one that the Generative AI gang has not yet latched onto. But some AI people are starting to take notice, and I believe that they are on the right track. In summary, there is more to come. See the references I cited, and do whatever googling and searching you like.
John
----------------------------------------
From: "doug foxvog" <doug(a)foxvog.org>
John,
Baddeley & Hitch's "central executive" (CE) is described as an attentional
controlling system. I have just briefly glanced at it, but it seems that
the point is coordinating and accessing memory through an episodic buffer,
phonological loop, and visuo-spatial "sketchpad". The hypothesized CE
deals with information, language, memory, imagery, & spatial awareness.
That covers a lot, and i assume it would also cover conscious actions and
processes.
But i don't see it covering neurohormone production or things like
heartrate. Lower level processes like basal signaling between neurons
would have no need of a central executive, as they are just basal
processes.
It's the word "all" in "all processes" that indicates to me that the claim
is excessive.
FWIW, i note that sharks also have brains -- as do "higher" orders of
invertebrates.
-- doug f
> On Wed, April 10, 2024 18:38, John F Sowa wrote:
> Doug,
>
> The central executive was proposed by the neuroscientists Baddeley &
> Hitch, not by AI researchers. There is nothing "machine-like" in the
> idea, by itself. Without something like it, there is no way to explain
> how a huge tangle of neurons could act together and coordinate their
> efforts to support a common effort.
>
> It reminds me of a neighboring town (to my residence in Croton on Hudson,
> NY), which was doing some major developments without hiring a general
> contractor. They thought that their local town employees could schedule
> all the processes. It turned out to be a total disaster. All the
> subcontractors did their tasks in a random order, each one interfering
> with some of the others, and causing a major mess. There were lawsuits
> back and forth, and the town management was found guilty and had losses
> that were many times greater than the cost of hiring a general contractor.
>
> It is certainly true that there is a huge amount of computation going on
> in the brain that is below conscious awareness. Most of that is done by
> the cerebellum (little brain), which is physically much smaller than the
> cerebral cortex. But it contains over four times the number of neurons.
> In effect, the cerebellum behaves like a GPU (Graphics Processing Unit)
> which is a superfast, highly specialized processor for all the perception
> and action that takes place without conscious awareness.
>
> For example, when you're walking down the street talking on your cell
> phone, the cerebellum is monitoring your vision, muscles, and strides --
> until you step off the curb and get run over by a bus. That's why you need
> a central controller to monitor and coordinate all the processes.
>
> Sharks and dolphins are about the same size and they eat the same kind of
> prey. Sharks have a huge cerebellum and a small lump for a cerebellum.
> Dolphins have a huge cerebral cortex and a huge cerebellum. They are as
> agile as sharks, but they can plan, communicate, and coordinate their
> activities. When the food is plentiful, they can both eat their fill.
> But when it's scarce, the dolphins are much more successful.
>
> Please look at the citations in my previous note and the attached
> Section7.pdf. The cycle of abduction, deduction, testing, and induction
> depends on a central executive that is responsible for planning,
> coordinating, and integrating those steps of conscious feeling, thinking,
> reasoning, and acting. With a central executive, an AI system would be
> more intelligent. But much, much more R & D would be required before
> anything could be called "Artificial General Intelligence" (AGI). That's
> why I have very little faith in anything called AGI.
>
> John
>
> ----------------------------------------
> From: "doug foxvog" <doug(a)foxvog.org>
> Subject: Re: [ontolog-forum] The central executive
>
> On Wed, April 10, 2024 14:07, John F Sowa wrote:
>> In today's ZOOM meeting, I objected to the term 'neuro-symbolic hybrid'
>> of artificial neural networks (ANNs) with symbols. Hybrids simply relate
>> two (sometimes more) distinctly different things. But all the processes
>> in the mind and brain are integrated, and they all operate continuously
>> in different parts of the brain, which are all monitored and controlled
>> by a central executive. ...
>
> This seems to me to be modeling the body as a machine and not an accurate
> description.
>
> There are a wide variety of processes in the mind and brain -- many
> processes in the brain occur independently without being integrated either
> with each other or with the mind. I am excluding standard cellular level
> processes that go on in every cell and the processes of the circulatory
> system in the brain. Every neuron regularly chemically interacts with
> adjacent neurons & passes electrical signals along its surface.
>
> As far as i understand, much that goes on in the brain we are unaware of,
> neurohormone production, for example. Sensory input processing does not
> seem to be integrated with a number of other processes. I have seen no
> evidence of a central executive in the brain that monitors and controls
> all the other processes. I'm not sure how such a central executive could
> have evolved.
>
> --
Bobbin,
I changed the title for this topic. Before discussing any issues of modeling anything, it's important to start with an example. I suggest that we make this topic an Ontolog project:
You send us a specification of whatever holonic structures you would like to represent. Instead of using OWL, I suggest that we use Controlled English for two subsets of Common Logic: (1) A type hierarchy specified by Aristotle's syllogisms, and (2) Full first-order logic for the constraints.
Then anybody who prefers OWL can map the controlled English to OWL (if they can). If they can't, that would show why you had difficulty in mapping your problems to OWL. I also recommend controlled natural languages (CNLs) based on any other natural languages people may prefer. But we can stick with CE for the Ontolog discussion.
For a tutorial on Controlled English (CE) and its mapping to logic, see Patterns of Logic and Ontology: https://jfsowa.com/talks/patolog1.pdf . Those are slides that I used for the first day of a 5-day short course that I taught in 2019. Patolog1 should be sufficient for an intro to CE. If anybody needs more examples, see Patolog2, 3, 4, or 5.
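As a rough illustration of what the type-hierarchy subset amounts to (the sentence pattern and function names below are invented for this sketch and are not the actual syntax of any CE tool), sentences of the form "Every X is a Y" can be closed under Aristotle's Barbara syllogism:

```python
# Hypothetical CE sentence pattern "Every X is a Y" (an illustration only),
# with the type hierarchy closed under the Barbara syllogism:
#   Every X is a Y; every Y is a Z; therefore every X is a Z.

def parse(sentence):
    """'Every dog is a mammal' -> ('dog', 'mammal')."""
    words = sentence.lower().rstrip(".").split()
    assert words[0] == "every" and words[2] == "is"
    return words[1], words[-1]

def transitive_closure(pairs):
    """Close a set of subtype pairs under transitivity (Barbara)."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

axioms = ["Every dog is a mammal", "Every mammal is an animal"]
hierarchy = transitive_closure(parse(s) for s in axioms)
# The entailed pair ('dog', 'animal') is now in the hierarchy.
```

The full-FOL constraints would of course go beyond this subset; the point is only that the syllogistic core of a type hierarchy is mechanically simple.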
And by the way, these slides are closely related to my book Knowledge Representation, which was published in 2000, but they were updated with another 20 years of publications and collaboration with Arun Majumdar and other colleagues in our VivoMind company. I discuss some VivoMind examples in various slides, especially Patolog4 and 5.
If you still have any of the notes that Pat Hayes sent you, I suggest that you package all of them in one PDF together with whatever holonic specifications you are working on. Then send them to us as just one file. It's better to keep it all together instead of putting them in a collection of a lot of little files.
And by the way, if anybody has trouble with CE, just look at the examples and make your best guess. That is how we designed the controlled English for VivoMind and for our new Permion company. It's very forgiving. It never says "Error". It just comes back with an English echo and the question "Is this what you mean?" If not, you can revise it until you're happy with the echo.
My personal opinion about the SW stack: It's legacy software that nobody should ever need to learn or use. All knowledge representations should be specified in CNLs supplemented with diagrams. The only people who would have to use other notations should be those who spent years in designing and/or learning them. Today we have the technology for supporting that policy.
John
----------------------------------------
From: "Bobbin Teegarden" <teegs(a)earthlink.net>
John, a question re your suggested hierarchies below:
I have been frustrated by owl’s lack of composition. Even UML has composition (and aggregation, a special case of composition). I have been working with holons and holonic structures (for contexts and such) and need compositions. Pat Hayes once told me that there were lots of composition/mereology ontologies out there, just pick one. Frustrating.
Your suggestions solicited. How would I model complex holonic structures on OWL?
Thanks, Bobbin
Ravi and Alex,
It's true that the idea itself is not unreasonable. What is amazing is that it's about some of the most common things in everybody's life: sun, water, and clouds. But nobody noticed.
Source: https://www.pnas.org/doi/full/10.1073/pnas.2312751120
Note the last line of the paragraph labeled "significance": "Such a photomolecular evaporation process could be happening widely in nature. It may significantly impact the earth’s water cycle, climate change, and has potential clean water and energy technology applications."
Alex: The effect itself is most likely insignificant, otherwise it would have been discovered earlier.
No. The effects are well known and very important. Scientists at NCAR (National Center for Atmospheric Research) have been studying clouds and evaporation since the 1950s, and at every stage they bought the fastest available computers to do the simulations. Since then, many different groups around the world have been computing the world's weather on a daily basis, and those computations take those effects into account.
The fact that evaporation rates in clouds are faster than predicted by heat transfer alone is significant and had long been unexplained. This study is the first discovery of a mechanism that may explain the discrepancy.
Note that the authors call it a "hypothesis". The implications are extremely important, and I'm sure that scientists around the world are making plans to replicate the results with a wide variety of methods for bouncing photons off liquid water.
Importance for ontology: A detailed ontology of everything would be extremely fragile. Even something as common as sun, water, and clouds can interact in unknown ways that may be revised at any time. The top level is most useful for classification, not detailed reasoning or computation.
Summary: A TLO is most useful for common terminology that is widely shared for communication among independent working groups. Different groups that share info may interpret the details and define them by very different specialized ontologies.
The most complex reasoning is done at the detailed levels, not the upper levels. For the TLO, a simple hierarchy is sufficient. That is why OWL can be widely used -- the hierarchy is the most important part. The details may be computed by many different methods. Decidability is irrelevant.
John
----------------------------------------
From: "Ravi Sharma" <drravisharma(a)gmail.com>
Alex
This physics demonstration is not in that class of path-breaking discovery. Yes, its applications may turn out to be path-breaking, especially if this could also become a massive source of hydrogen, on which I have been working for the past 20 years (for energy generated by H2).
One more parameter from my last mail, which is very important, is the intensity of the light, or the number of photons at the frequency found to be effective.
Regards
Thanks, Ravi
On Fri, May 3, 2024 at 11:05 PM Alex Shkotin <alex.shkotin(a)gmail.com> wrote:
John,
The discovery of a new phenomenon (if it is confirmed) is a holiday for all physicists, but especially for theorists, because they need to explain it. As Landau said, “Theorists are bored without experimenters.”
Here's their work: "Solar-driven evaporation rates using porous absorbers have been reported to exceed the theoretical thermal evaporation limit, but the mechanism of this phenomenon remains unclear." https://www.pnas.org/doi/full/10.1073/pnas.2312751120
This is how Physics lives.
A new theorem (description of the phenomenon) will appear in the framework of the theory, and then a proof (based on physical laws).
Well, for now we have a hypothesis.
The effect itself is most likely insignificant, otherwise it would have been discovered earlier.
The very phenomenon of photons separating water molecules does not seem revolutionary. After all, photons are energy. I think Ravi writes about this.
Appendix [1] provides an example of the theorem (in the last line) and its proof from the framework of the theory of undirected graphs. Next will be the framework of the theory of Statics.
Let me point out that in the last column we name mental actions with knowledge -- a priori, union, summation -- in addition to abduction, deduction, and induction.
Alex
[1] https://www.researchgate.net/publication/374265191_Theory_framework_-_knowl…
Interpretive Duality in Logical Graphs • 1
• https://inquiryintoinquiry.com/2024/04/22/interpretive-duality-in-logical-g…
All,
The duality between Entitative and Existential interpretations
of logical graphs is a good example of a mathematical symmetry,
in this case a symmetry of order two. Symmetries of this and
higher orders give us conceptual handles on excess complexity
in the manifold of sensuous impressions, making it well worth
the effort to seek them out and grasp them where we find them.
Both Peirce and Spencer Brown understood the significance of
the mathematical unity underlying the dual interpretation of
logical graphs. Peirce began with the Entitative option and
later switched to the Existential choice while Spencer Brown
exercised the Entitative option in his “Laws of Form”.
In that vein, here's a Rosetta Stone to give us a grounding in
the relationship between boolean functions and our two readings
of logical graphs.
Boolean Functions on Two Variables
• https://inquiryintoinquiry.com/wp-content/uploads/2020/11/boolean-functions…
Regards,
Jon
cc: https://www.academia.edu/community/5REb1n
Mathematical Duality in Logical Graphs • 1
• https://inquiryintoinquiry.com/2024/05/03/mathematical-duality-in-logical-g…
“All other sciences without exception depend upon
the principles of mathematics; and mathematics
borrows nothing from them but hints.”
— C.S. Peirce • “Logic of Number”
“A principal intention of this essay is to separate
what are known as algebras of logic from the subject
of logic, and to re‑align them with mathematics.”
— G. Spencer Brown • “Laws of Form”
All,
The duality between entitative and existential interpretations
of logical graphs tells us something important about the relation
between logic and mathematics. It tells us the mathematical forms
giving structure to reasoning are deeper and more abstract at once
than their logical interpretations.
A formal duality points to a more encompassing unity, founding a
calculus of forms whose expressions can be read in alternate ways
by switching the meanings assigned to a pair of primitive terms.
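A minimal sketch of that switch of readings (the encoding of graphs as nested Python tuples is an assumption made purely for this example): juxtaposed nodes combine with AND under the existential reading and OR under the entitative reading, with enclosure as negation in both.

```python
# A logical graph encoded as nested data (a convention invented for this
# sketch): a string is a variable, a tuple is an enclosure (a cut), and
# nodes placed side by side in a sequence are a juxtaposition.

def eval_seq(nodes, env, combine, empty):
    """Evaluate a juxtaposition of nodes under one interpretation."""
    result = empty
    for node in nodes:
        result = combine(result, eval_node(node, env, combine, empty))
    return result

def eval_node(node, env, combine, empty):
    """A variable looks itself up; an enclosure negates its contents."""
    if isinstance(node, str):
        return env[node]
    return not eval_seq(node, env, combine, empty)

def existential(graph, env):
    """Existential reading: juxtaposition = AND, empty sheet = True."""
    return eval_seq(graph, env, lambda x, y: x and y, True)

def entitative(graph, env):
    """Entitative reading: juxtaposition = OR, empty sheet = False."""
    return eval_seq(graph, env, lambda x, y: x or y, False)

# One formal object, the graph (A)(B):
graph = [("A",), ("B",)]
```

Under the existential reading, (A)(B) is ¬A ∧ ¬B; under the entitative reading it is ¬A ∨ ¬B -- the same graph denotes two dual boolean functions.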
Spencer Brown's mathematical approach to “Laws of Form” and the
whole of Peirce's work on the mathematics of logic shows both
thinkers were deeply aware of this principle.
Peirce explored a variety of dualities in logic which he treated
on analogy with the dualities in projective geometry. This gave
rise to formal systems where the initial constants, and thus their
geometric and graph‑theoretic representations, had no uniquely
fixed meanings but could be given dual interpretations in logic.
It was in this context that Peirce's systems of logical graphs developed,
issuing in dual interpretations of the same formal axioms which Peirce
referred to as “entitative graphs” and “existential graphs”, respectively.
He developed only the existential interpretation to any great extent, since
the extension from propositional to relational calculus appeared more natural
in that case, but whether there is any logical or mathematical reason for
the symmetry to break at that point is a good question for further research.
Resources —
Duality Indicating Unity
• https://inquiryintoinquiry.com/2013/01/31/duality-indicating-unity-1/
C.S. Peirce • Logic of Number
• https://inquiryintoinquiry.com/2012/09/01/c-s-peirce-logic-of-number-ms-229/
C.S. Peirce • Syllabus • Selection 1
• https://inquiryintoinquiry.com/2014/08/24/c-s-peirce-syllabus-selection-1/
References —
• Peirce, C.S., [Logic of Number — Le Fevre] (MS 229), in Carolyn Eisele
(ed., 1976), The New Elements of Mathematics by Charles S. Peirce,
vol. 2, 592–595.
• Spencer Brown, G. (1969), Laws of Form, George Allen and Unwin, London, UK.
Regards,
Jon
cc: https://www.academia.edu/community/LbAn0D
A surprising phenomenon occurs all around us, everywhere, and nobody noticed it until some researchers took very precise measurements. It shows that our theories of seemingly simple matters can be incomplete in fundamental ways.
Implications for ontology and applications: Always expect the unexpected, even in the simplest, most familiar subjects. Never expect any ontology to be finished and accurate in all its details.
Reference and excerpts below.
John
--------------------------
How light can vaporize water without the need for heat
https://news.mit.edu/2024/how-light-can-vaporize-water-without-heat-0423
Surprising “photomolecular effect” discovered by MIT researchers could affect calculations of climate change and may lead to improved desalination and drying processes.
It’s the most fundamental of processes — the evaporation of water from the surfaces of oceans and lakes, the burning off of fog in the morning sun, and the drying of briny ponds that leaves solid salt behind. Evaporation is all around us, and humans have been observing it and making use of it for as long as we have existed. And yet, it turns out, we’ve been missing a major part of the picture all along.
In a series of painstakingly precise experiments, a team of researchers at MIT has demonstrated that heat isn’t alone in causing water to evaporate. Light, striking the water’s surface where air and water meet, can break water molecules away and float them into the air, causing evaporation in the absence of any source of heat.
The astonishing new discovery could have a wide range of significant implications. It could help explain mysterious measurements over the years of how sunlight affects clouds, and therefore affect calculations of the effects of climate change on cloud cover and precipitation. It could also lead to new ways of designing industrial processes such as solar-powered desalination or drying of materials.
In today's ontology summit discussion, I made the point that a reliable, dependable, trustworthy AI system must have a central executive that monitors and evaluates all developments and makes the final decisions about what actions to take.
The central executive in the human brain is located in the frontal lobes, and it depends on information and processing in all other parts of the cerebral cortex, cerebellum, brain stem, and the connections among them. It doesn't have all the knowledge of everything, but it has access to whatever critical knowledge is required when it must make a decision about what actions to take. In reference to the researchers who originally proposed it, a more specific name is the Baddeley-Hitch central executive. Check Wikipedia for more info. (Technical articles in Wikipedia get far more thorough evaluation (by humans!) than anything generated by LLMs.)
In any large organization, somebody is in charge: a business has a CEO, every department has a manager, every school has a principal, and every classroom has a teacher. The person in charge must have guidelines (AKA goals about what to do and ethical principles about how to do it). The central executive in an AI system must have similar goals and ethical principles.
In the discussion, somebody said that any such thing must be unbiased. But that is a meaningless statement. Anything that has background knowledge and goals will be biased toward using that knowledge to accomplish those goals. A better term is ethical. An ethical AI system would be fair and honest. It would avoid harming people, destroying property, or damaging the environment. And it would obey all laws, rules, and regulations.
Another person suggested that the set of information accessed by a generative AI system should be limited to "safe" information that will not generate bad results. But that would severely limit what the system can do. And it cannot prevent some combination of "safe" actions from causing unintended damage in circumstances that are different from anything it had been trained for.
In March I finished an article I had been discussing for several months. I won't release a full copy, because it has not yet appeared in print. However, the attached Section7.pdf summarizes the issues, and the last page has some references FYI. It discusses the central executive and its role in humans and in AI systems.
Summary: LLMs are used for two purposes: (1) supporting a natural language interface to a complex AI system; and (2) generating new hypotheses, suggestions, proposals, or educated guesses. The output for #1 is normally safe, but checking would be useful to ensure safety. The output for #2 would be used in the initial stage of abduction, which would be followed by deduction for checking and evaluation prior to any further use. But people who are knowledgeable about the technology may ask to see the output and do their own checking.
John
Operator Variables in Logical Graphs • 1
• https://inquiryintoinquiry.com/2024/04/06/operator-variables-in-logical-gra…
All,
In lieu of a field study requirement for my bachelor's degree I spent
two years in various state and university libraries reading everything
I could find by and about Peirce, poring most memorably through reels
of microfilmed Peirce manuscripts Michigan State had at the time, all
in trying to track down some hint of a clue to a puzzling passage in
Peirce's “Simplest Mathematics”, most acutely coming to a head with
that bizarre line of type at CP 4.306, which the editors of Peirce's
“Collected Papers”, no doubt compromised by the typographer's reluctance
to cut new symbols, transmogrified into a script more cryptic than even
the manuscript's original hieroglyphic.
I found one key to the mystery in Peirce's use of “operator variables”,
which he and his students Christine Ladd–Franklin and O.H. Mitchell
explored in depth. I will shortly discuss that theme as it affects
logical graphs but it may be useful to give a shorter and sweeter
explanation of how the basic idea typically arises in common
logical practice.
Consider De Morgan's rules:
• ¬(A ∧ B) = ¬A ∨ ¬B
• ¬(A ∨ B) = ¬A ∧ ¬B
The common form exhibited by the two rules could be captured in a single
formula by taking “o₁” and “o₂” as variable names ranging over a family
of logical operators, then asking what substitutions for o₁ and o₂ would
satisfy the following equation.
• ¬(A o₁ B) = ¬A o₂ ¬B
We already know two solutions to this “operator equation”, namely,
(o₁, o₂) = (∧, ∨) and (o₁, o₂) = (∨, ∧). Wouldn't it be just
like Peirce to ask if there are others?
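Peirce's question can be answered by brute force (a sketch; encoding each operator as a truth table is the only assumption made here): enumerate all 16 binary boolean operators and test every pair. Each operator turns out to have exactly one De Morgan dual, so there are 16 solutions in all, with (∧, ∨) and (∨, ∧) among them.

```python
from itertools import product

# Encode a binary boolean operator as a dict from (a, b) to its output.
PAIRS = [(False, False), (False, True), (True, False), (True, True)]

def all_ops():
    """Yield all 16 binary boolean operators as truth tables."""
    for outputs in product([False, True], repeat=4):
        yield dict(zip(PAIRS, outputs))

def satisfies(o1, o2):
    """Does NOT(A o1 B) = (NOT A) o2 (NOT B) hold for all A, B?"""
    return all((not o1[(a, b)]) == o2[(not a, not b)] for (a, b) in PAIRS)

solutions = [(i, j)
             for i, o1 in enumerate(all_ops())
             for j, o2 in enumerate(all_ops())
             if satisfies(o1, o2)]
# Operator 1 is AND (outputs F,F,F,T) and operator 7 is OR (F,T,T,T),
# so the pairs (1, 7) and (7, 1) encode the two De Morgan rules above.
# Every o1 determines its o2 uniquely, giving 16 solutions in total.
```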
Having broached the subject of “logical operator variables”,
I will leave it for now in the same way Peirce himself did:
❝I shall not further enlarge upon this matter at this point,
although the conception mentioned opens a wide field; because
it cannot be set in its proper light without overstepping the
limits of dichotomic mathematics.❞ (Peirce, CP 4.306).
Further exploration of operator variables and operator invariants
treads on grounds traditionally known as second intentional logic
and “opens a wide field”, as Peirce says. For now, however, I will
tend to that corner of the field where our garden variety logical
graphs grow, observing the ways in which operative variations and
operative themes naturally develop on those grounds.
Regards,
Jon
cc: https://www.academia.edu/community/Lxn1Ww
cc: https://mathstodon.xyz/@Inquiry/112225263055943815
Dima,
Yes, they were in the same field as George Miller (psychology). But they also hung out with enough neuroscientists that some of the blood and guts rubbed off on them. Right now, the major research on the topic depends on neuroscience.
That is one among many reasons why I prefer to use the term 'Cognitive Science'. The subject is so complex that collaboration among the different fields is essential.
John
----------------------------------------
From: "Dima, Alden A. (Fed)' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
Hi John,
A certain large language model tells me that Alan Baddeley and Graham Hitch were psychologists and not neuroscientists.
Alden