Igor,
I'm glad that we agree on the value of Prolog. Prolog failed to achieve much usage in the US largely because of a prominent AI author, who wrote a couple of books and was considered an authority. His comment about Prolog: "We tried that with Microplanner, and it was inefficient."
Since it's not nice to say anything bad about the dead, I won't mention his name. But his comment was based on profound ignorance: Microplanner was a research project, written by one person for a PhD dissertation, and it did not support the full functionality of Prolog. It was written in LISP, which is fine for AI applications, but not efficient for high performance.
For the IBM system that beat the world champion in Jeopardy, the program that analyzed the English questions and answers was written in Prolog. That program was written by Michael McCord, one of the four co-authors of the book Knowledge Systems and Prolog. My "Prolog to Prolog" was another of the four contributions.
By the way, the IBM developers had tried to use some software based on the Semantic Web stack. But it was too slow and too difficult to update. They brought back McCord, who had retired from IBM a few years earlier. His Prolog implementation was faster, had more functionality, and was easier to update.
For our VivoMind company, we used Prolog to support applications that processed English and other natural languages. The semantic representation was conceptual graphs, which are based on Peirce's existential graphs, extended (with a few minor features) to represent the full ISO standard for Common Logic.
The users communicated with the applications in English and in diagrams. See the examples in https://jfsowa.com/talks/cogmem.pdf . The VivoMind system could analyze English (and other NLs) as well as computer languages. For the application in legacy reengineering, it was able to compare and detect errors and inconsistencies among programming languages, English comments in the code, documentation about the programs, and various memos, commentary, and publications.
Our new Permion.ai company has a more general foundation that can also support current versions of LLMs. That enables detailed analysis, evaluation, and correction of output generated by the LLMs. Detecting errors is very important. Correcting errors is even better.
And by the way, John McCarthy, who originally designed LISP, finally admitted that Prolog was better for advanced AI applications. My colleague Arun Majumdar influenced that decision after showing him VivoMind applications and their implementation.
John
----------------------------------------
From: "Igor Toujilov' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
John,
A typical scenario of consuming an ontology by end-users is the following:
- Download the ontology;
- Load it into a visual tool;
- Run a reasoner on it and see the results in the visual tool.
SWI-Prolog is an excellent system with web-server components and OWL support. I used it in 2005 when I was working at UCL for the Cancergrid project on ontologies in cancer bioinformatics. This was a big international project in which I collaborated with the National Cancer Institute (NCI), the universities of Oxford, Cambridge, and Manchester, UCLA, etc. I created an ontology web-server based on SWI-Prolog. It loaded ontologies, e.g. the NCI Thesaurus, from OWL files into the Prolog runtime environment and then exposed the ontology through the web. The server accepted queries in Prolog, and I suggested using it as a production server for the project. However, after some testing my suggestion was not accepted. The reason for this decision: the software engineers who tested the server were not proficient enough in Prolog to write the queries.
Igor
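A rough sketch of the kind of SWI-Prolog ontology server Igor describes, assuming only the standard semweb and http libraries; the file name, handler, and port are illustrative, not the actual CancerGrid code:

:- use_module(library(semweb/rdf_db)).
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).
:- use_module(library(http/http_parameters)).

% Load an OWL ontology (e.g. a local copy of the NCI Thesaurus) into the triple store.
load_ontology(File) :- rdf_load(File).

% One web handler: return the asserted superclasses of a class given by its IRI.
:- http_handler(root(superclasses), superclasses_handler, []).

superclasses_handler(Request) :-
    http_parameters(Request, [class(Class, [])]),
    findall(Super, rdf(Class, rdfs:subClassOf, Super), Supers),
    format('Content-type: text/plain~n~n'),
    forall(member(S, Supers), format('~w~n', [S])).

% Start the server, e.g. ?- start(8080).
start(Port) :- http_server(http_dispatch, [port(Port)]).

Once the triples are loaded, arbitrary Prolog queries over rdf/3 can be exposed in the same way; those are the queries the testers found difficult to write.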
Igor,
Before commenting on OWL + SWRL, I'd like to say a bit about my qualifications for evaluating hardware and software design. I admit that in my 30 years at IBM, I did quite a bit of research. But I also worked with the IBM large systems development groups, where I learned how computer systems are designed and developed from the circuit level to the mainframe level to the operating system level to the user application level.
First, I suggest a cartoon, which was drawn by one of my colleagues at IBM in 1974: The Adventures of Task-Force Tim, http://www.jfsowa.com/computer/tftim.htm . The topic is the failure of IBM's huge project called Future Systems (FS). It collapsed in 1974, and high-level management attempted to cover up the failure and salvage something from it.
On a more serious note, see the page on Computer Systems, in which I discuss some lessons learned from those days and from later developments in the computer industry: http://www.jfsowa.com/computer/ . The first topic is about issues related to FS, including my Memo 125, which was circulated by copying machines throughout IBM. The people who read it loved it (most of them) or hated it (the managers who were responsible). Fortunately, Carl Conti protected me by getting me transferred to the IBM Systems Research Institute, which was outside the chain of command of the people who caused FS and its downfall. That is where I did my research, writing, and teaching about AI, natural language processing, expert systems, and applications to computer software.
And by the way, I have been writing critical articles about bad system designs for many years -- even when it got me into a lot of trouble -- plus a lot of praise from people who had been too timid to admit publicly that the design was bad. They didn't have to criticize FS themselves. They just asked someone "Did you see Sowa's Memo 125?" Then, depending on the answer, they knew whether it was safe to add their own criticisms.
Since OWL + SWRL have been forced upon innocent victims for almost 20 years, there is a huge amount of legacy software built on top of them. That software is not going away. I am not suggesting that anybody should throw away programs that they have been using for many years. The effects of legacy software don't go away. Everything that follows has to have interfaces to it for many, many years.
But what I am saying is that OWL was a terrible blunder. It is the kind of mistake that very intelligent researchers who have little or no experience with practical software design tend to make. Please read my Memo 125. The kind of blunders that IBM managers made in 1974 comes from the same motivations as the mistakes made by the designers of OWL.
To answer your question, I'll just say that you or anybody else could write an ontology for addition in FOL that is much simpler, better, and faster than one in OWL + SWRL. In fact, you can Google many publications about how to define addition by axioms stated in FOL. It's simple.
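By way of illustration, here is a minimal sketch in SWI-Prolog, using Horn-clause (FOL) axioms over a successor encoding of the natural numbers; the encoding and predicate names are one arbitrary choice, not taken from any particular publication:

% Peano-style addition: 0, s(0), s(s(0)), ... stand for 0, 1, 2, ...
add(0, N, N).
add(s(M), N, s(K)) :- add(M, N, K).

% Summing a collection of numbers, the example Igor asked about:
sum_all([], 0).
sum_all([X|Xs], S) :- sum_all(Xs, S0), add(X, S0, S).

% ?- sum_all([s(0), s(s(0)), s(s(s(0)))], S).
% S = s(s(s(s(s(s(0))))))        i.e. 1 + 2 + 3 = 6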
John
----------------------------------------
From: "Igor Toujilov' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
John,
I hoped you would show us a computer-executable example of how to sum a set of numbers in an ontology, using a publicly available free tool based on FOL or CL.
Igor
On Fri, 17 May 2024 at 22:46, John F Sowa <sowa(a)bestweb.net> wrote:
Igor,
My recommendation is to replace OWL + SWRL with (1) a simple type hierarchy, equivalent to just the hierarchy part of OWL, which is the most widely used subset, and (2) full first-order logic (FOL) for the constraints.
The requirement of decidability makes the OWL notation (Turtle or whatever) more complex than it should be. That makes it harder to learn, harder to use, and less expressive. And it serves no useful purpose whatsoever. Nobody but a logician with advanced training could ever write a statement that is undecidable. The Cyc developers, for example, use a superset of FOL, but in 30 years of applications, nobody has ever written an ontology that contained any undecidable statements.
For the type hierarchy, I recommend the four sentence types of Aristotle's syllogisms. Full FOL can be expressed in a very readable and writable notation that uses logical operators with the following spelling: AND, OR, NOT, IF-THEN, SOME, EVERY. For a summary, see slides 25 to 37 of https://jfsowa.com/talks/patolog1.pdf . That's all you need to express (1) a type hierarchy equivalent to the OWL hierarchy, and (2) constraints in full FOL.
SWRL has some non-logical features, but they are unnecessary if you have full FOL.
This is a simple subset of Common Logic. You can use any standard dialect of CL for the internal representation, and you can use highly readable versions of Controlled English, Russian, or whatever for the user interface.
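To make the two pieces (the type hierarchy and the FOL constraints) concrete, here is a minimal sketch in SWI-Prolog; the predicates and example types are illustrative only, not part of any standard:

:- dynamic instance/2, owns/2.     % facts to be asserted by the application

% (1) A type hierarchy, equivalent to the hierarchy part of OWL:
subtype(dog, mammal).
subtype(mammal, animal).

isa(X, Y) :- subtype(X, Y).
isa(X, Z) :- subtype(X, Y), isa(Y, Z).     % transitive closure

% (2) A constraint stated with EVERY, IF-THEN, SOME:
% "EVERY dog x IF-THEN SOME person y (y owns x)."
% As a check over a fact base, a violation is a dog with no person who owns it:
violation(D) :- instance(D, dog), \+ (owns(P, D), instance(P, person)).

Queries such as ?- isa(dog, animal). or ?- violation(D). then do the reasoning directly.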
John
Neil,
Thanks for the reference.
Very short summary: a representation of one cubic millimeter of human brain tissue takes 1.4 petabytes. And that is just one millionth of the total volume of the brain.
However, each neuron is far more complex than the nodes of artificial NNs, which are just simple switches with no internal structure. Each neuron (in humans and other beasties) has the ability to store a very large amount of information, say X. But the size of X is still unknown. To get an estimate of how much computer storage would be necessary to simulate a human brain, multiply 1.4 petabytes by one million, by the number of neurons in that one cubic millimeter (57 thousand), by the number of synapses (150 million), and by some large number X. Even the first two factors alone come to 1.4 petabytes x 1,000,000 = 1.4 zettabytes, just for the raw map of a single whole brain.
However, it might not be necessary to simulate every feature of every neuron. It might be possible to recreate the function with some smaller amount of hardware. So you might divide that huge number by 10 or even by a million or even by a billion. But you are still left with a very, very, very big number.
Conclusion: Simulating the full power of the human brain is not possible with any computer hardware & software available today or in the foreseeable future. And nobody knows how many more years of research and development of hardware and software would be required to simulate the brain of a rat, let alone the brain of a human.
Some people claim that a simulation of the brain is not necessary. It might be possible to simulate the function of the brain by some other method. Yes, that might be possible. But just note the neuroscientists' favorite nematode, C. elegans. It has a total of just over 300 neurons. They have been trying to simulate its abilities for quite a few decades without success. And 300 is trivial compared to that huge number above.
See below for some excerpts from the article that Neil cited.
John
_____________________
One cubic millimetre of brain mapped in spectacular detail
https://www.nature.com/articles/d41586-024-01387-9
Researchers have mapped a tiny piece of the human brain in astonishing detail. The resulting cell atlas, which was described today in Science and is available online, reveals new patterns of connections between neurons, as well as cells that wrap around themselves to form knots, and pairs of neurons that are almost mirror images of each other.
The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons. It incorporates a colossal 1.4 petabytes of data. “It’s a little bit humbling,” says Viren Jain, a neuroscientist at Google in Mountain View, California, and a co-author of the paper. “How are we ever going to really come to terms with all this complexity?”
Jain’s team then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D. “I remember this moment, going into the map and looking at one individual synapse from this woman’s brain, and then zooming out into these other millions of pixels,” says Jain. “It felt sort of spiritual.”
When examining the model in detail, the researchers discovered unconventional neurons, including some that made up to 50 connections with each other. “In general, you would find a couple of connections at most between two neurons,” says Jain. Elsewhere, the model showed neurons with tendrils that formed knots around themselves. “Nobody had seen anything like this before,” Jain adds.
“This paper is really the tour de force creation of a human cortex data set,” says Hongkui Zeng, director of the Allen Institute for Brain Science in Seattle. The vast amount of data that has been made freely accessible will “allow the community to look deeper into the micro-circuitry in the human cortex”, she adds.
Gaining a deeper understanding of how the cortex works could offer clues about how to treat some psychiatric and neurodegenerative diseases. “This map provides unprecedented details that can unveil new rules of neural connections and help to decipher the inner working of the human brain,” says Yongsoo Kim, a neuroscientist at Pennsylvania State University in Hershey.
----------------------------------------
From: "Neil McNaughton" <neilmcn(a)oilit.com>
See recent Science/Nature announcement of a cubic mm of a brain imaged
https://www.nature.com/articles/d41586-024-01387-9
Lars,
That sentence is inaccurate: "The first wave of cognitive scientists from the 60s and 70s (esp. from the US) used concepts from information science in order to explain the workings of the brain."
At MIT, the work on machine translation began in 1950, six years before the first AI conference at Dartmouth in 1956. But the first wave of cognitive scientists in the US who also designed computational machines includes the mathematician, logician, philosopher, and cognitive scientist C. S. Peirce. He published an article on Logical Machines in Volume 1 of the American Journal of Psychology (1887). And he was a close friend of the psychologist William James, who said that he had learned more from Peirce than he could ever repay. For an overview of the issues, see https://irvine.georgetown.domains/papers/Irvine-SSA-Peirce-Computation-expa…
That 40-page article also cites two of my publications, a book and an article. My work on AI began with a course on AI by Marvin Minsky, who earned a PhD in mathematics at Princeton with a dissertation on neural networks in the 1950s. Therefore artificial neural nets are one of the oldest branches of AI. The first publications on logical operations computed by artificial neural networks were in the 1940s.
In 1968, I took two related courses, Minsky's course on AI at MIT, and a course on psycholinguistics by David McNeill at Harvard. I got permission from both of them to write two related papers about conceptual graphs (my name for a semantic representation for natural languages). The first paper for McNeill was about the psychological and linguistic issues about representing natural language semantics in conceptual graphs, and the second one for Minsky was about the computational methods for representing language and reasoning with conceptual graphs. I got an A on both papers.
After 16 years, those two papers became the starting point for my book Conceptual Structures (1984). And by the way, Minsky published a large bibliography of AI, in which he cited the 1887 article by Peirce as one of the early publications on AI.
As for the central executive, it is an important aspect of any AI system that claims to represent an intelligent agent. But I admit that AI programs that are used as subroutines to other technology need not have anything that resembles a central executive. Those programs may compute intelligent results, but they do not do anything that resembles an agent.
As for distributed functions, the research related to a central executive emphasizes the related functions that are distributed across all parts of the brain, the spinal cord, and everything connected to them. The frontal lobes are the part of the brain that makes the final decisions about which actions to perform and when.
There is much more to say (and cite) about all these issues.
John
----------------------------------------
From: "Dr. Lars Ludwig" <mail(a)lars-ludwig.com>
John,
The first wave of cognitive scientists from the 60s and 70s (esp. from the US) used concepts from information science in order to explain the workings of the brain (maybe that's the reason you find a liking in this). The second wave (inspired by progress in neuroscience) rejected these simplistic models by pointing to the tautological quality of such explanations (aka Homunculus models). The idea of a central executive in the brain is therefore an example of an outdated (rather weak) explanation pattern. Someone on the list pointed out that it would be better to use "executive functions" and think of those as manifold and distributed. That's one way. More modern theories of cognition (see Wolfgang Prinz) link action (something executive) closely to perception, which hints in the opposite direction. Thus, as a cognitive psychologist, I would strongly advise dropping this idea of a central executive, as it has no validity in the current cognitive sciences.
Lars
_________________________________________________________________
John F Sowa <sowa(a)bestweb.net> wrote on 06.05.2024 03:52 CEST:
Lars, List,
The Homunculus is a totally different concept proposed by philosophers. It has no relationship to anything that the psychologists and neuroscientists have been studying. The origin is an idea that goes back to the 1960s with George Miller and his hypothesis about short-term memory and the "Magic Number 7, plus or minus 2".
The psychologists Baddeley & Hitch wrote their initial article in 1974. They wrote in response to Miller's hypothesis. They realized that there is much more to short-term memory than just words and phonemes. They called Miller's storage "the phonological loop" and they added a "visuo-spatial scratchpad" for short-term memory of imagery and feelings. And they continued to revise and extend their research for another 20 or 30 years. Neuroscientists, who are specialists in different aspects, have been working on related issues. The consensus is not a single hypothesis, but a branch of research on issues related to conscious control of action by a central executive in the frontal lobes vs. subconscious control by the brainstem and the cerebellum.
For example, when you're walking down the street and talking on your cell phone, several different systems are controlling your actions: (1) the central executive is in charge of what you're doing on the phone in talking and pushing buttons; (2) the cerebellum is guiding your steps in walking and maintaining your balance; (3) the brain stem is maintaining your breathing, heartbeat, and other bodily functions; and (4) the nerves running down the spine and branching to all parts of your body are controlling every movement and monitoring any abnormalities, such as a burn, a scratch, or a more serious injury.
In Freud's terms, the central executive is the ego, and the lower-level systems are the id. Those ideas are much older, but they illustrate the kinds of issues involved. The more recent research relates the observational data to actual neural functions in specific regions of the brain. Since aspects of those functions can be traced back to the earliest bacteria, worms, and fish, there must be something fundamental about them. AI systems that do not support related functions do so at their peril.
In my notes and the articles I cite, there are many references to ongoing research. For more background, don't use those GPT-based things that summarize surface-level trivia. You can start with Wikipedia, which cites the original research. Then continue with more detailed studies in neuroscience.
John
I already have a subscription to the CG list, and there were some other people who asked to join.
What should I tell them about getting a subscription?
John Sowa
Excerpts from an interesting article.
John
_______________________________________________________
Why AI-generated videos feel hypnotic, fluid, and uncanny
The strengths and weaknesses of being an impartial number cruncher
https://ykulbashian.medium.com/why-ai-generated-videos-feel-hypnoti…
. . .
Watching them is like watching a fractal: entrancing both in itself, and as a style of presentation. Or perhaps it’s like hearing a story without a climax. As soon as you think you’ve got the set up figured out it subtly shifts and reveals a different story, which now anticipates its own payoff, and so on; like a run-on sentence. The videos are always on the cusp of a revelation, but they never cross over — and you don’t expect them to either. They are liminal, and prefer to stay that way.
The hypnotic effect comes from the sense that the video is going somewhere, and at every turn it makes a new promise, holding you in suspense. The viewer follows along, and feels they can’t let go until they see something delivered. Ultimately they realize it never will be and just enjoy the experience in the moment. And so it meanders from goal to goal, as undirected as scrolling on social media.
To be clear, the videos do have an underlying idea, but it’s revealed at the start and maintained consistently throughout. There is no arc of setup and delivery, just a constant pressure. This is why the content of the videos is chosen to be visually captivating, and is often presented in slow motion. Cute puppies playing in snow, boats in a coffee cup, mammoths striding down a plateau. Like the stock footage they are trained on, their visual appeal is their whole point. The message of the clips matches the medium.
Perhaps the best way to understand this phenomenon is to ask: what is the clip trying to say? What is its intent, its theme, its thesis? Why did the creator want us to see this, and specifically this? When you watch any video, you expect to quickly pick up the intent or purpose behind its creation. For AI-generated videos the answer shouldn’t be difficult to discover, since once a generative model has been trained, all specified content comes from the text prompt that it’s generated on. So whatever unique message a video has compared to other videos from the same model must come exclusively from that prompt. The text prompt should, ultimately, encapsulate the thesis… but it doesn’t. The prompt represents the content, but it doesn’t represent the purpose.
To understand what I mean, remember that any prompt you give an AI doesn’t come out of nowhere. You, as its author, have a history of motives and feelings, goals and intents, all trying to find expression through that prompt. They are your hidden purpose, not the concrete content you are literally asking for. None of that backstory makes it to the AI. The AI is being shortchanged, since it is not being given all the facts.
It is difficult to tell an AI something like “I want the viewer to feel the urge to help the poor” or “I want them to feel excited about exploring the world” or “I want them to be full of child-like wonder”. Often you yourself aren’t fully aware of what you are really trying to say — at least not enough to put it into English words.
So whenever an AI’s output misses the mark it is because, like an unintentionally evil genie, it is delivering what you asked for, but not what you wanted. The mistakes the AI makes reveal its alienation from the user’s hidden intent. It is frustratingly literal, and doesn’t suss when it should emphasize or downplay some part of the prompt, nor even when to add something missing that should rightly be there.
As a user you subsequently have to edit the prompt to try to shift it closer to what you had intended, but didn’t actually say. All the while, the AI is being pushed towards your vision from behind, prompt by prompt, rather than being pulled by a driving motive. And like a sandcastle that you have to keep pulling back into shape, any part of it that you neglect slowly crumbles.
. . .
When you merge multiple viewpoints together, the result won’t say anything specific, nor can it make a “point”. An AI generated video is like the combined sound of a dozen human voices speaking in unison through the cultural artifacts we have created. There is no single value or motive uniting the result. AI are exceptionally good at merging content but not intent, simply because intent shouldn’t be merged at all. “Intent” is always biased in favour of what it wants — it brooks no compromises. The only way to correct the blending of intents in AI art is for the AI to first learn the effect its choices have on the viewer, then use that to drive the art in the direction it wants to.
For now, generative AI is a blank slate of sorts, an impartial data cruncher. This has its appeal, to be sure. It lacks many of the annoyances of working with humans, particularly their stubbornness and partiality. It is a compliant and obsequious mimic of social artifacts. This also means its failures can be confusing, almost like the software is “mocking”¹ you. It imitates your representations without showing that it understands why you care about the message or the subject matter. You want the AI ‘artist’ to agree with your message, and to express that through the resulting creation. But it can’t agree with you, it can only ape what it’s seen. So it ends up generating “mock” human artifacts —that is, facsimiles that lack the driving voice of their originals. [End]
After I sent that recent note about Verses AI, I received an offline response about the following.
John
__________________________________
Wonder how this compares with Verses AI?
‘Leaked’ GPT2 Model Has Everyone Stunned.
On-Purpose leak?
https://medium.com/@ignacio.de.gregorio.noblejas/openais-leaked-gpt2-m…
. . . [Excerpts]:
But even though it still feels hard to believe that “gpt2-chatbot” has been trained through self-improvement, we have plenty of reasons to believe it’s the first successful implementation of what OpenAI has been working on for years: test-time computation.
The Arrival of test-time computation models
Over the years, several research papers by OpenAI have hinted at this idea of skewing models into ‘heavy inference’.
For example, back in 2021, they presented the notion of using ‘verifiers’ at inference to improve the model’s responses when working with Math.
The idea was to train an auxiliary model that would evaluate in real-time several responses the model gave, choosing the best one (which was then served to the user).
Combine this with some sort of tree search algorithm like the one used by AlphaGo (Google DeepMind’s Tree-of-Thought research for LLMs is one example), and you could eventually create an LLM that, before answering, explores the ‘realm of possible responses’, carefully filtering and selecting the best path toward the solution.
. . .
This idea, although presented by OpenAI back in 2021, has become pretty popular these days, with cross-effort research by Microsoft and Google applying it to train next-generation verifiers, and with Google even managing to create a model, AlphaCode, that executed this kind of architecture to great success, reaching the 85th percentile among competitive programmers, the best humans at it.
And why does this new generation of LLMs have so much potential?
Well, because they approach problem-solving in a very similar way to how humans do, through the exercise of deliberate and extensive thought to solve a given task.
Bottom line, think of ‘search+LLM’ models as AI systems that allocate a much higher degree of compute (akin to human thought) to the actual runtime of the model so that, instead of having to guess the correct solution immediately, they are, simply put, ‘given more time to think’.
But OpenAI has gone further.
. . .
Impossible not to Get Excited
Considering gpt2-chatbot’s insane performance, and keeping in mind OpenAI’s recent research and leaks, we might have a pretty nice idea by now of what on Earth this thing is.
What we know for sure is that we are soon going to be faced with a completely different beast, one that will take AI’s impact to the next level.
Have we finally reached the milestone for LLMs to go beyond human-level performance as we did with AlphaGo? Is the age of long inference, aka the conquest of System 2 thinking by AI, upon us?
Probably not. However, it’s hard not to feel highly optimistic for the insane developments we are about to witness over the following months.
In the meantime, I guess we will have to wait to get those answers. But not for long.
Verses AI is an AI system that does not use LLMs. Instead of telling the world how great their system is, the developers asked ChatGPT4 to answer three questions. To see the answers, go to https://www.linkedin.com/pulse/verses-ai-compared-openai-conversation-chatg… .
1. Describe some of the biggest problems with systems like ChatGPT and BARD.
2. There is a new type of Artificial Intelligence from VERSES.AI called Active Inference. This new form of AI does not use Big Internal Data like an LLM. Instead, it can access REAL TIME data about the world to update its internal World Model. Describe the advantages of Active Inference AI over an LLM like ChatGPT.
3. The Spatial Web creates a network of the Internet of Things (IoT) and this network of IoT can provide real-time data to the Active Inference AI acting like the perception system of a human being. Describe how Active Inference when combined with the Spatial Web could function as a nervous system for a company or an entire city.
That's a clever ploy: use your competitor's system to explain how your own system is better than theirs. One thing that this exercise does demonstrate: ChatGPT4 is too stupid to know that it is explaining how stupid it is. I wonder whether Verses AI would recognize an attempt to get it to praise ChatGPT4.
For more info about Verses AI, ask your favorite search engine.
John
Lars, List,
The Homunculus is a totally different concept proposed by philosophers. It has no relationship to anything that the psychologists and neuroscientists have been studying. The origin is an idea that goes back to the 1960s with George Miller and his hypothesis about short-term memory and the "Magic Number 7, plus or minus 2".
The psychologists Baddeley & Hitch wrote their initial article in 1974. They wrote in response to Miller's hypothesis. They realized that there is much more to short-term memory than just words and phonemes. They called Miller's storage "the phonological loop" and they added a "visuo-spatial scratchpad" for short-term memory of imagery and feelings. And they continued to revise and extend their research for another 20 or 30 years. Neuroscientists, who are specialists in different aspects, have been working on related issues. The consensus is not a single hypothesis, but a branch of research on issues related to conscious control of action by a central executive in the frontal lobes vs. subconscious control by the brainstem and the cerebellum.
For example, when you're walking down the street and talking on your cell phone, several different systems are controlling your actions: (1) the central executive is in charge of what you're doing on the phone in talking and pushing buttons; (2) the cerebellum is guiding your steps in walking and maintaining your balance; (3) the brain stem is maintaining your breathing, heartbeat, and other bodily functions; and (4) the nerves running down the spine and branching to all parts of your body are controlling every movement and monitoring any abnormalities, such as a burn, a scratch, or a more serious injury.
In Freud's terms, the central executive is the ego, and the lower-level systems are the id. Those ideas are much older, but they illustrate the kinds of issues involved. The more recent research relates the observational data to actual neural functions in specific regions of the brain. Since aspects of those functions can be traced back to the earliest bacteria, worms, and fish, there must be something fundamental about them. AI systems that do not support related functions do so at their peril.
In my notes and the articles I cite, there are many references to ongoing research. For more background, don't use those GPT-based things that summarize surface-level trivia. You can start with Wikipedia, which cites the original research. Then continue with more detailed studies in neuroscience.
John
----------------------------------------
From: "Dr. Lars Ludwig" <mail(a)lars-ludwig.com>
John,
If I remember correctly, what you propose here via a central executive was rejected in the cognitive sciences as the so-called "homunculus theory of cognition", meaning, in short, that the "decision making" of a system cannot be explained by an instance (a central executive) making decisions.
Lars
John F Sowa <sowa(a)bestweb.net> wrote on 05.05.2024 21:23 CEST:
Lars, Doug, List,
There is a huge difference between a reasoning system and a decision system. Given a set of axioms and raw data, a reasoning system derives conclusions. It does not make any value judgments about any of them, and it does not take any actions based on any conclusions.
But every living system from bacteria on up must make decisions about which of many sources of information must be considered in taking action. I agreed with Mihai Nadin that the sources of knowledge are distributed among all components of the brain, but I should have added "brain and body". Every part of the body generates signals of pain and pleasure of varying strength. And the most brilliant or pleasurable thoughts must be deferred when a finger touching a hot stove sends a pain signal.
In any animal, there are an immense number of signals coming from every part of the brain and body. There must be something that decides which one(s) to consider immediately and which ones may be deferred.
The central executive is not my idea. But I have done a fair amount of studying of all the branches of the cognitive sciences, and I have learned important ideas from comparing different ways they deal with common problems.
I'm not asking anybody to believe me. But I am asking everybody to consider the wide range of insights that come from the different branches of all six: Philosophy, psychology, linguistics, artificial intelligence, neuroscience, and anthropology. Please look at the references. And if you don't like the references I cited, look for more.
As for the central executive, please let me know of any other mechanism that can decide whether it's better to (a) read a book, (b) take a nap, (c) eat lunch, or (d) duck and cover.
John
----------------------------------------
From: "Dr. Lars Ludwig" <mail(a)lars-ludwig.com>
Doug, John,
I am just reading this, catching up: I think it is noteworthy that in modern (autopoietic) system theory (Humberto Maturana, esp. Niklas Luhmann) any (not only societal) systems basically operate and evolve without a central executive. Systemic intelligence is thus independent of any central control instance, which is sometimes understood as a weakness of modern societies. The memory system, as the central conscious reproductive (intelligence) system of humans, is also not centrally controlled in any meaningful way I could think of (I have written about and explained the functioning of the memory system and its central importance for any technology in my thesis on "extended artificial memory", which is basically a general autopoietic theory of all memory sub-systems). Thus, theoretically, I don't yet get John's point. I guess these are relics of pre-systemic sequential/hierarchical operational thinking (that is, classic information science) not yet touched by the paradoxical problem of closed cycles of (control /) system operations.
Lars
John F Sowa <sowa(a)bestweb.net> wrote on 11.04.2024 02:44 CEST:
Doug,
The central executive controls all the processes that are controllable by the human ego. But the term 'executive' should be considered the equivalent of what the chief executive officer (CEO) of a business does in managing a corporation. There are intermediaries at various points.
Baddeley & Hitch wrote their initial article in 1974. They wrote that in response to George Miller's "Magic Number 7, plus or minus 2." They realized that there was much more to short-term memory than just words and phonemes. They called Miller's storage "the phonological loop" and they added a visuo-spatial scratchpad for short-term imagery and feelings. And they continued to revise and extend their hypotheses for another 20 or 30 years. Other neuroscientists, who are specialists in different aspects, have been working on related issues.
The idea is an important one that the Generative AI gang has not yet latched onto. But some AI people are starting to take notice, and I believe that they are on the right track. In summary, there is more to come. See the references I cited, and do whatever googling and searching you like.
John
----------------------------------------
From: "doug foxvog" <doug(a)foxvog.org>
John,
Baddeley & Hitch's "central executive" (CE) is described as an attentional
controlling system. I have just briefly glanced at it, but it seems that
the point is coordinating and accessing memory through an episodic buffer,
phonological loop, and visuo-spatial "sketchpad". The hypothesized CE
deals with information, language, memory, imagery, & spatial awareness.
That covers a lot, and i assume it would also cover conscious actions and
processes.
But i don't see it covering neurohormone production or things like
heart rate. Lower level processes like basal signaling between neurons
would have no need of a central executive, as they are just basal
processes.
It's the word "all" in "all processes" that indicates to me that the claim
is excessive.
FWIW, i note that sharks also have brains -- as do "higher" orders of
invertebrates.
-- doug f
> On Wed, April 10, 2024 18:38, John F Sowa wrote:
> Doug,
>
> The central executive was proposed by the neuroscientists Baddeley &
> Hitch, not by AI researchers. There is nothing "machine-like" in the
> idea, by itself. Without something like it, there is no way to explain
> how a huge tangle of neurons could act together and coordinate their
> efforts to support a common effort.
>
> It reminds me of a neighboring town (to my residence in Croton on Hudson,
> NY), which was doing some major developments without hiring a general
> contractor. They thought that their local town employees could schedule
> all the processes. It turned out to be a total disaster. All the
> subcontractors did their tasks in a random order, each one interfering
> with some of the others, and causing a major mess. There were lawsuits
> back and forth, and the town management was found guilty and had losses
> that were many times greater than the cost of hiring a general contractor.
>
> It is certainly true that there is a huge amount of computation going on
> in the brain that is below conscious awareness. Most of that is done by
> the cerebellum (little brain), which is physically much smaller than the
> cerebral cortex. But it contains over four times the number of neurons.
> In effect, the cerebellum behaves like a GPU (Graphics Processing Unit)
> which is a superfast, highly specialized processor for all the perception
> and action that takes place without conscious awareness.
>
> For example, when you're walking down the street talking on your cell
> phone, the cerebellum is monitoring your vision, muscles, and strides --
> until you step off the curb and get run over by a bus. That's why you need
> a central controller to monitor and coordinate all the processes.
>
> Sharks and dolphins are about the same size and they eat the same kind of
> prey. Sharks have a huge cerebellum and a small lump for a cerebral cortex.
> Dolphins have a huge cerebral cortex and a huge cerebellum. They are as
> agile as sharks, but they can plan, communicate, and coordinate their
> activities. When the food is plentiful, they can both eat their fill.
> But when it's scarce, the dolphins are much more successful.
>
> Please look at the citations in my previous note and the attached
> Section7.pdf. The cycle of abduction, deduction, testing, and induction
> depends on a central executive that is responsible for planning,
> coordinating, and integrating those steps of conscious feeling, thinking,
> reasoning, and acting. With a central executive, an AI system would be
> more intelligent. But much, much more R & D would be required before
> anything could be called "Artificial General Intelligence" (AGI). That's
> why I have very little faith in anything called AGI.
>
> John
>
> ----------------------------------------
> From: "doug foxvog" <doug(a)foxvog.org>
> Subject: Re: [ontolog-forum] The central executive
>
> On Wed, April 10, 2024 14:07, John F Sowa wrote:
>> In today's ZOOM meeting, I objected to the term 'neuro-symbolic hybrid'
>> of
>> artificial neural networks (ANNs) with symbols. Hybrids simply relate
>> two
>> (sometimes more) distinctly different things. But all the processes in
>> the mind and brain are integrated, and they all operate continuously in
>> different parts of the brain, which are all monitored and controlled by
>> a
>> central executive. ...
>
> This seems to me to be modeling the body as a machine and not an accurate
> description.
>
> There are a wide variety of processes in the mind and brain -- many
> processes in the brain occur independently without being integrated either
> with each other or with the mind. I am excluding standard cellular level
> processes that go on in every cell and the processes of the circulatory
> system in the brain. Every neuron regularly chemically interacts with
> adjacent neurons & passes electrical signals along its surface.
>
> As far as i understand, much that goes on in the brain we are unaware of,
> neurohormone production, for example. Sensory input processing does not
> seem to be integrated with a number of other processes. I have seen no
> evidence of a central executive in the brain that monitors and controls
> all the other processes. I'm not sure how such a central executive could
> have evolved.
Gary,
Our notes crossed in the mail. Thank you for citing that article about executive functions in the brain. If you notice, they cite Baddeley & Hitch who introduced the idea of a central executive.
And your idea about implementing executive functions in a computer system is very similar (maybe identical) to what I have been proposing. Implementing executive functions along the lines of the article you cited (and the other articles I cited) is the key point.
It's irrelevant whether you call the top-level program THE central executive or whether you say that it implements executive functions. Except for details of terminology, we are in violent agreement.
John
----------------------------------------
From: "Gary Berg-Cross" <gbergcross(a)gmail.com>
Sent: 5/8/24 2:15 PM
To: ontolog-forum(a)googlegroups.com
Cc: "Dr. Lars Ludwig" <mail(a)lars-ludwig.com>, Peirce List <peirce-l(a)list.iupui.edu>, CG <cg(a)lists.iccs-conference.org>
Subject: Re: [ontolog-forum] The central executive
As I mentioned in today's Ontolog Summit meeting on ethics for modern AI systems, it might be more useful to talk about executive function (EF) than an executive.
You can see a good summary argument in this article:
A new era for executive function research: On the transition from centralized to distributed executive functioning
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8420078/
This follows some ideas people may remember from Minsky's Society of Mind and modular ideas of intelligence. Distributed theories of cognitive abilities conceptualize "EFs as emergent consequences of highly distributed brain processes that communicate with a pool of highly connected hub regions, thus precluding the need for a central executive."
There is much more in the article, including ideas on testing distributed models and, from a risk point of view, this on trust based on distributed robustness: a "key property of a DCS is its robustness to perturbations. In contrast to centralized systems, in which a nonbrain biological system such as a swarm would be vulnerable to the loss of its leading agent, a swarm organized as a DCS has been shown to be robust to degradation (Sumpter, 2006). Similarly, decentralized (i.e. distributed) networks have been shown to be resilient systems which are capable of absorbing large external perturbations without undergoing functional breakdown (Achard, 2006; Bassett and Bullmore, 2006; Bullmore and Sporns, 2009; Buzsáki, 2006). A DCS network organization in the brain may therefore explain how EFs can be preserved to some extent in the face of pathological attack by lesion and substance-related disorders...."
Gary Berg-Cross
Potomac, MD
240-426-0770
On Tue, May 7, 2024 at 4:06 PM Ravi Sharma <drravisharma(a)gmail.com> wrote:
John
As you probably already know, in the Indian systems there is a tremendous amount of literature on Mind, Brain, intellect, applying reasoning, states of alertness and cognition levels, attention span, and the like.
I am a listener to many of these dialogs embedded in the knowledge system, and have studied brain-lobe size enhancements through recitations and repetitions over the years.
Are there fMRI or PET studies that confirm the role of Central Executive relating to:
- Level of cognition and awareness,
- Decision making and outcomes or response handling,
- etc.?
Regards.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member
On Sun, May 5, 2024 at 6:52 PM John F Sowa <sowa(a)bestweb.net> wrote:
Lars, List,
The Homunculus is a totally different concept proposed by philosophers. It has no relationship to anything that the psychologists and neuroscientists have been studying. The origin is an idea that goes back to the 1960s with George Miller and his hypothesis about short-term memory and the "Magic Number 7, plus or minus 2".
The psychologists Baddeley & Hitch wrote their initial article in 1974. They wrote in response to Miller's hypothesis. They realized that there is much more to short-term memory than just words and phonemes. They called Miller's storage "the phonological loop" and they added a "visuo-spatial scratchpad" for short-term memory of imagery and feelings. And they continued to revise and extend their research for another 20 or 30 years. Neuroscientists, who are specialists in different aspects, have been working on related issues. The consensus is not a single hypothesis, but a branch of research on issues related to conscious control of action by a central executive in the frontal lobes vs. subconscious control by the brainstem and the cerebellum.
For example, when you're walking down the street and talking on your cell phone, several different systems are controlling your actions: (1) the central executive is in charge of what you're doing on the phone in talking and pushing buttons; (2) the cerebellum is guiding your steps in walking and maintaining your balance; (3) the brain stem is maintaining your breathing, heartbeat, and other bodily functions; and (4) the nerves running down the spine and branching to all parts of your body are controlling every movement and monitoring any abnormalities, such as a burn, a scratch, or a more serious injury.
In Freud's terms, the central executive is the ego, and the lower-level systems are the id. Those ideas are much older, but they illustrate the kinds of issues involved. The more recent research relates the observational data to actual neural functions in specific regions of the brain. Since aspects of those functions can be traced back to the earliest bacteria, worms, and fish, there must be something fundamental about them. AI systems that do not support related functions do so at their peril.
In my notes and the articles I cite, there are many references to ongoing research. For more background, don't use those GPT-based things that summarize surface-level trivia. You can start with Wikipedia, which cites the original research. Then continue with more detailed studies in neuroscience.
John