Amit and anybody who did or did not attend today's talk at the Ontology Summit session,
All three of the questions below involve metalevel issues about LLMs and about reasoning with and about generative AI. The first and most important applies to anything generated by LLMs: Is it true, false, or possible? After that come How? Why? and How likely?
The biggest limitation of LLMs is that they cannot do any reasoning by themselves. But they can often find some reasoning by some human in some document from somewhere. If they find something similar, they can apply it to solve the current problem. But the word 'similar' raises critical questions: How similar? In what way is it similar? Is that kind of similarity relevant to the current question or problem?
For example, the LLMs trained on the WWW must have found textbooks on Euclidean geometry. If some problem is stated in the same terminology as the books on geometry, the LLMs might find an answer and apply it.
But more likely, the problem will be stated in terms of the subject matter, such as building a house, plowing a field, flying an airplane, or surveying the land rights in a contract dispute. In those cases, the statement of the same geometrical problem may have few or no words in common with Euclid's description of the geometry, and the terminology of each application will differ from all the others.
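The vocabulary gap is easy to demonstrate. Here is a toy sketch in Python; the two sentences, the stop-word list, and the bag-of-words measure are my own illustrative assumptions, not anything a real LLM computes internally. It shows how a purely lexical notion of similarity collapses when the same theorem is restated in a surveyor's terms:

    # Toy measure of surface similarity: cosine over content-word counts.
    from collections import Counter
    from math import sqrt

    STOP = {"the", "a", "in", "on", "of", "to", "is", "from"}

    def bag(text):
        # Count the words, ignoring a few common function words.
        return Counter(w for w in text.lower().split() if w not in STOP)

    def cosine(a, b):
        wa, wb = bag(a), bag(b)
        dot = sum(wa[w] * wb[w] for w in wa)
        norm = lambda wc: sqrt(sum(v * v for v in wc.values()))
        return dot / (norm(wa) * norm(wb))

    euclid = ("in a right triangle the square on the hypotenuse "
              "equals the sum of the squares on the other two sides")
    survey = ("the distance from the corner stake to the boundary marker "
              "is the root of the sum of the squared offsets")

    print(cosine(euclid, euclid))  # 1.0: identical wording
    print(cosine(euclid, survey))  # about 0.1: same theorem, few shared words

Real LLMs use learned embeddings rather than word counts, which narrows the gap but does not answer the critical questions: how similar, in what way, and is that kind of similarity relevant?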
For these reasons, a generative AI system, by itself, is unreliable for any mission-critical application. It is best used under the control and supervision of some system that uses trusted methods of AI and computer science to check, evaluate, and supplement whatever the generative AI happens to generate.
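To make that division of labor concrete, here is a minimal sketch of the supervisory pattern in Python. It is a generic neuro-symbolic loop, not the design of any particular system; ask_llm is a hypothetical stand-in for any generative component, and the checker is a trivial arithmetic verifier standing in for whatever trusted deductive machinery an application requires:

    # Generic pattern: a generative component proposes candidate answers;
    # a trusted symbolic component verifies them before anything passes.

    def ask_llm(expression):
        # Hypothetical stand-in for an LLM call: returns candidates,
        # some of which may be hallucinated.
        return ["5", "4"]                    # candidates for "2 + 2"

    def verify(expression, answer):
        # Trusted checker: re-derive the result exactly. In a real system
        # this would be a theorem prover, a database lookup, a constraint
        # solver, or domain-specific rules.
        a, op, b = expression.split()
        result = {"+": int(a) + int(b), "*": int(a) * int(b)}[op]
        return str(result) == answer

    def supervised_answer(expression):
        for candidate in ask_llm(expression):
            if verify(expression, candidate):
                return candidate             # accept only what the checker proves
        return None                          # refuse rather than hallucinate

    print(supervised_answer("2 + 2"))        # -> "4"; the flaky "5" is filtered out

The same pattern applies at the metalevel: any candidate that is inconsistent with the given facts is discarded before it reaches the user.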
As an example of the kinds of systems that my colleagues and I have been developing, see Cognitive Memory For Language, Learning, and Reasoning, by Arun K. Majumdar and John F. Sowa: https://jfsowa.com/talks/cogmem.pdf
See especially slides 44 to 64. They show three applications for which precision is essential. There are no LLM systems today that can do anything useful with those applications or anything similar. Today, we have a new company, Permion.ai LLC, which has developed new technology that takes advantage of BOTH LLMs and the 60+ years of earlier AI research.
The often flaky, hallucination-prone LLMs are under the control of technology that is guaranteed to produce precisely controlled reasoning and evaluations. Metalevel reasoning is its forte: it evaluates and filters out whatever is flaky, hallucinated, or inconsistent with the given facts.
John
----------------------------------------
From: "Sheth, Amit" <AMIT(a)sc.edu>
There has been a lot of discussion on LLMs and GenAI on this forum.
I would like to share papers related to three major challenges:
1. Is it Human or AI?
Counter Turing Test (CT^2): AI-Generated Text Detection is Not as Easy as You May Think — Introducing AI Detectability Index
2. Measuring, characterizing, and countering hallucination (Hallucination Vulnerability Index)
The Troubling Emergence of Hallucination in Large Language Models — An Extensive Definition, Quantification, and Prescriptive Remediations
3. Fake news / misinformation
FACTIFY3M: A Benchmark for Multimodal Fact Verification with Explainability through 5W Question-Answering
Introduction/details/links to papers (EMNLP 2023):
https://www.linkedin.com/feed/update/urn:li:activity:7117565699258011648
I think this community won't find this perspective alien: data-driven-only approaches can't and won't address these challenges well; we need to understand the duality of data and knowledge. Knowledge (including KGs/ontologies/world models/structured semantics) and neuro-symbolic AI (arxiv), which use a variety of relevant knowledge (linguistic, common sense, domain-specific, etc.), will play a critical role in addressing these challenges. The same goes for three of the most important requirements, where knowledge will also play a critical role in making progress: grounding, intractability, and alignment.
More to come on this from #AIISC.
Cheers,
Amit
Amit Sheth LinkedIn, Google Scholar, Quora, Blog, Twitter
Artificial Intelligence Institute; NCR Chair
University of South Carolina
#AIISConWeb, #AIISConLinkedIn, #AIISConFB
Andrea, Dan, Doug, Alex,
As I keep repeating, I am enthusiastic about the ongoing research on generative AI and the LLMs that support it. But as I also keep repeating, it's impossible to understand the full potential of any computational or reasoning method without understanding its limitations.
I explicitly address that issue for my own work. In my first book, Conceptual Structures, the final chapter, Chapter 7, was titled "Limits of Conceptualization". Following is the opening paragraph: "No theory is fully understood until its limitations are recognized. To avoid the presumption that conceptual mechanisms completely define the human mind, this chapter surveys aspects of the mind that lie beyond (or perhaps beneath) conceptual graphs. These are the continuous aspects of the world that cannot be adequately expressed in discrete concepts and conceptual relations."
One of the reviewers, who wrote a favorable review of the book, said that he was surprised that Chapter 7 refuted everything that went before. But actually, it's not a refutation. It just itemizes the many complex issues about human thought that go beyond what can be handled by conceptual graphs (and related AI methods, such as semantic networks and knowledge graphs). Those are very important research areas, and it's essential to understand what can and cannot be done with current technology. For a copy of that chapter, see https://jfsowa.com/pubs/cs7.pdf
As another example, the AI Journal devoted an entire issue in 1993 to a review of a book on Cyc by Lenat & Guha. Lenat told me that my review was the most accurate, but it was also the most frustrating because I itemized all the difficult problems that they had not yet solved. Following is a copy of that review: https://jfsowa.com/pubs/CycRev93.pdf
Lenat did not hold that review against me. In 2004, the DoD, which had invested a great deal of funding in the Cyc project, held a 20-year evaluation to determine whether and how much it should continue to invest. And Lenat recommended me as one of the members of the review committee. Our unanimous review was that (1) Cyc had produced a great deal of important research, which should be documented and made available to the public; (2) future development of Cyc should be funded mostly by commercial applications of Cyc technology; (3) government funding should be continued during the documentation stage and during the transition to funding by applications. Those goals were achieved, and Cyc continued to be funded by applications for another 19 years.
So when I write about the limitations of generative AI and LLM technology, I am doing exactly what must be done in any review of any project of any kind. A good review of any development must ALWAYS evaluate both the strengths and the limitations.
But many (most? all?) of the people who are working on LLMs don't ask questions about the limitations. For example, I have a high regard for Geoffrey Hinton, who has been one of the most prominent pioneers in this area. But in an interview on 60 Minutes last Sunday, he said nothing about the limitations. He even suggested that there were no limits. For that interview, see https://www.cbs.com/shows/video/L25QUOdr6apMNr0ZWqDBCo9uPMd_SBWM/
As a matter of fact, many of the limitations I discussed in cs7.pdf also apply to the limitations of LLMs. In particular, they are the limitations of representing and reasoning about the continuous aspects of the world and their translations to and from a discrete, finite vocabulary of any language, natural or artificial.
Andrea> I agree with the position of using LLMs wherever they are appropriate, researching the areas where they need more work, supplementing them where other technologies are strong, and (in general) "not throwing the baby out with the bath water".
Yes indeed.
Dan> The ability of these systems to engage with human-authored text in ways highly sensitive to their content and intent is absolutely stunning. Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance.
I certainly agree. I'm not asking anybody to stop doing their R & D. But I am asking people who promote LLMs to look at where they are running up against the limits of current versions and what can be done to go beyond those limits.
Doug F> Note that much of the left hemisphere has nothing to do with language. In front of the language areas are strips for motor control of & sensory input from the right side of the body. The frontal lobe forward of those strips does not deal with language. The occipital lobe at the rear of the brain does not deal with language, either. The visual cortex in the temporal lobe also does not deal with language. This means that most of the 8 billion neurons in the cerebral cortex have nothing to do with language.
I agree with that point. But I believe that the LLM proponents would also agree. They would say that those areas of the cortex are necessary for mapping language-based LLMs to and from perception and action. What they fail to recognize is the importance of the 90% of the neurons that do not do anything directly related to language.
Alex> My proposal: let's first agree that ANNs are far from being only LLMs. The LLM is by far the noisiest and most unexpected of ANN applications. The question can be posed this way: we know about the Language Model, but what other models using ANNs exist?
I agree that we should explore the many ways that artificial NNs relate to the NNs in various parts of the brain. It's also important to recognize that there are many different kinds of NNs in different areas of the brain, and they are organized in ways that are very different from the currently popular ANNs.
In summary, there is a lot more research that remains to be done. I'm not telling anybody to stop what they're doing. I'm just recommending that they look at what more needs to be done before claiming that LLMs can do everything.
John
As I have said in recent notes sent to three groups (Ontolog Forum, Peirce List, and CG list), Peirce's work on diagrammatic reasoning is at the forefront of current research on Generative AI and related applications.
In some of my notes on this topic, I have included excerpts from an article I'm writing, which explains the connections to Peirce's writings, especially in the last decade of his life. But I have also included some further discussions on Ontolog Forum, which do not indicate any connection to Peirce.
Gary Richmond reported that some subscribers to P-List have complained. And I admit that some of those notes addressed some technical issues that are not directly relevant to CSP. Therefore, I'll limit my cc's for those notes to CG list. The only notes I'll cc to P-list are the ones that explicitly cite or discuss Peirce's writings.
Anybody who wishes to see the other notes can subscribe to CG list or to Ontolog Forum. (CG list has very little traffic, so it won't fill up anyone's mailbox.)
John
Stephen,
The six branches of the cognitive sciences (philosophy, psychology, linguistics, AI, neuroscience, and anthropology) have an open-ended variety of unanswered questions. That is the nature of every active branch of science. The reason why researchers in those six sciences formed the coalition called cognitive science is that cutting-edge research in each of them has strong implications and valuable results for each of the others. In fact, prominent leaders in AI were very active in founding the journal Cognitive Science and its conferences.
There is a huge amount of fundamental research about the multiplicity of very different "languages" of thought, and the results are well established with solid evidence. Natural languages are valuable for communication, but they are not the best or even the most general foundation for thinking about most of the things we do in our daily lives -- or in our most complex activities.
You can't fly an airplane, drive a truck, thread a needle, paint a picture, ski down a mountain, or solve a mathematical problem if you have to talk to yourself (vocally or silently) about every detail. You might do that when you're first learning something, but not when you have mastered the subject.
Compared to those results, the writings by many prominent researchers on LLMs are naive. They know how to play with LLMs, but they don't know how to handle the very serious tasks that AI researchers have been implementing and using successfully for years. For some examples that my colleagues and I implemented successfully, see https://jfsowa.com/talks/cogmem.pdf
Look at the examples in the final section (slides 44 to 64). The current LLM technology cannot even begin to meet the requirements that VivoMind technology satisfied in 2010. Nobody writing about LLMs can show how to handle those requirements by using LLMs.
And those examples are just a small sample of successful applications. Most of the others were proprietary for our customers, who did not want to have their solutions publicized. That was fundamental science applied to mission-critical applications.
John
----------------------------------------
From: "Stephen Young" <steve(a)electricmint.com>
Sent: 10/8/23 7:13 PM
To: ontolog-forum(a)googlegroups.com, Stephen Young <steve(a)electricmint.com>
Subject: Re: [ontolog-forum] Addendum to (Generative AI is at the top of the Hype Cycle. Is it about to crash?
John, we've known since the 50s that the right brain has a significant role in understanding language. We also know that there is a ton of neural real estate between Wernicke's and Broca's areas that must be involved in language processing. They're like the input and output layers of the 98-layer GPT model. And we call them large language models, but they also "understand" vision.
Using our limited understanding of one black box to try to justify our assessment of another black box is not going to get us anywhere.
On Mon, 9 Oct 2023 at 08:23, John F Sowa <sowa(a)bestweb.net> wrote:
Alex,
Thanks for the list of applications of LANGUAGE-based LLMs. It is indeed impressive. We all agree on that. But mathematics, physics, computer science, neuroscience, and all the branches of cognitive science have shown that natural languages are just one of an open-ended variety of left-brain ways of thinking. LLMs haven't scratched the surface of the methods of thinking by the right brain and the cerebellum.
The left hemisphere of the cerebral cortex has about 8 billion neurons. The right hemisphere has another 8 billion neurons that are NOT dedicated to language. And the cerebellum has about 69 billion neurons that are organized in patterns that are totally different from the cerebrum. That implies that LLMs are addressing only about 10% of what is going on in the human brain: 8 billion of roughly 85 billion neurons. There is a lot going on in that other 90%. What kinds of processes are happening in those regions?
Science makes progress by asking QUESTIONS. The biggest question is: How can we handle the open-ended range of thinking that is not based on natural languages? Ignoring that question is NOT scientific. As the saying goes, when the only tool you have is a hammer, all the world is a nail. We need more tools to handle the other 90% of the brain -- or perhaps updated and extended variations of tools that have been developed in the past 60+ years of AI and computer science.
I'll say more about these issues with more excerpts from the article I'm writing. But I appreciate your work in showing the limitations of the current LLMs.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
The English LLMs are just the flower on the tip of the iceberg. Multilingual LLMs are also being created; the Chinese certainly train more than just English-speaking LLMs. You can see the underwater structure of the iceberg, for example, here: https://huggingface.co/datasets
Academic claims against the inventors are possible. But you know the inventors' answer: it works!
It's funny that before the hype, LLM meant Master of Laws :-)
Alex
Alex,
I'm glad that we finally agree. The main problem with the LLM gang is that they don't ask the fundamental question: How is this new tool related to the 60+ years of R & D in AI, computer science, and the immense area of the multiple cognitive sciences?
For example, Stanislas Dehaene and his students and colleagues have shown that there are multiple languages of thought, not just one. And every method of thinking has a different view of the world, of life, and of the fundamental methods of thought. Thinking and working with and about mathematics, visual structures, music, games, gymnastics, flying an airplane, building a bridge, plowing a field, and so on activate totally different areas of the brain than speaking and writing English.
A brain lesion that knocks out one region may leave other regions unscathed, and it may even enhance performance in those other regions. The LLM gang knows nothing about these issues. They don't ask the right questions. In fact, they're so one-sided that they don't even know what questions they should be asking. Somebody has to educate them. The best way to start is for us to ask the embarrassing questions.
Just before I read your note, I came across another article by the Dehaene gang: https://www.science.org/doi/pdf/10.1126/sciadv.adf6140
Does the visual word form area split in bilingual readers?
Minye Zhan, Christophe Pallier, Aakash Agrawal, Stanislas Dehaene, Laurent Cohen
In expert readers, a brain region known as the visual word form area (VWFA) is highly sensitive to written words, exhibiting a posterior-to-anterior gradient of increasing sensitivity to orthographic stimuli whose statistics match those of real words. Using high-resolution 7-tesla fMRI, we ask whether, in bilingual readers, distinct cortical patches specialize for different languages. In 21 English-French bilinguals, unsmoothed 1.2-millimeter fMRI revealed that the VWFA is actually composed of several small cortical patches highly selective for reading, with a posterior-to-anterior word-similarity gradient, but with near-complete overlap between the two languages. In 10 English-Chinese bilinguals, however, while most word-specific patches exhibited similar reading specificity and word-similarity gradients for reading in Chinese and English, additional patches responded specifically to Chinese writing and, unexpectedly, to faces. Our results show that the acquisition of multiple writing systems can indeed tune the visual cortex differently in bilinguals, sometimes leading to the emergence of cortical patches specialized for a single language.
This is just one of many studies that show why LLMs based on English may be inadequate for thinking in other languages, or for non-linguistic and pre-linguistic ways of thinking, working, and living. Furthermore, language is a left-brain activity, and most of our actions and ways of behaving and working are right-brain activities. The current LLMs are based on the ways of thinking of an English speaker whose right brain was destroyed by a stroke.
None of the writings about LLMs address or even mention these issues. In this mini-series on generative AI, we have to ask the embarrassing questions. Any science that avoids such questions is brain dead.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
JFS: "Now is the time to ask deeper questions."
Exactly, and these questions should be scientific :-)
And we have a scientific phase with these creatures, GenAI in general and LLM in particular: experiments ;-)
Alex
Anatoly, Stephen, Dan, Alex, and every subscriber to these lists,
I want to emphasize two points: (1) I am extremely enthusiastic about LLMs and what they can and cannot do. (2) I am also extremely enthusiastic about the 60+ years of R & D in AI technologies and what they have and have not done. Many of the most successful AI developments are no longer called AI because they have become integral components of computer science. Examples: compilers, databases, computer graphics, and the interfaces of nearly every appliance we use today: cars, trucks, airplanes, rockets, telephones, farm equipment, construction equipment, washing machines, etc. For those things, the AI technology of the 20th century is performing mission-critical operations with a level of precision and dependability that unaided humans cannot achieve.
Fundamental principle: For any tool of any kind -- hardware or software -- it's impossible to understand exactly what it can do until the tool is pushed to the limits where it breaks. At that point, an examination of the pieces shows where its strengths and weaknesses lie.
For LLMs, some of the breaking points have been published as hallucinations and humorous nonsense. But more R & D is necessary to determine where the boundaries are, and how to overcome them, work around them, or supplement LLMs with the 60+ years of other AI tools.
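One inexpensive way to probe for those boundaries is to take a problem the model solves correctly, perturb the surface details that should not matter, and check whether the answers track the changes. Here is a minimal sketch of that idea in Python; the model function is a hypothetical stand-in with a deliberately planted flaw, so the harness has something to catch, and any real LLM client could be substituted for it:

    # Stress-test sketch: vary surface details that should not matter
    # and collect the cases where the known-correct answer is missing.
    import random
    import re

    def model(prompt):
        # Hypothetical stand-in for an LLM call. This toy version
        # errs on large sums, so the harness has something to find.
        a, b = map(int, re.findall(r"\d+", prompt))
        return str(a + b if a + b < 150 else a + b + 1)   # planted flaw

    TEMPLATE = "{name} has {a} apples and buys {b} more. How many now?"

    def probe(trials=1000):
        failures = []
        for _ in range(trials):
            a, b = random.randint(2, 99), random.randint(2, 99)
            name = random.choice(["Ada", "Bhaskara", "Chen", "Olga"])
            prompt = TEMPLATE.format(name=name, a=a, b=b)
            if str(a + b) not in model(prompt):
                failures.append((prompt, a + b))
        return failures    # each failure marks a place where the tool breaks

    print(len(probe()), "failures out of 1000 probes")

Examining the failure cases (here, every sum of 150 or more) is exactly the examination of the pieces that shows where the strengths and weaknesses lie.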
Anatoly> When you target LLM and ANN as its engine, you should consider that this is very fast moving target. E.g. consider recent work (and imagine what can be done there in a year or two in graph-of-thoughts architectures) . . .
Yes, that's obvious. The article you cited looks interesting, and there are many others. They are certainly worth exploring. But I emphasize the question I asked: Google and OpenAI have been exploring this technology for quite a few years. What mission-critical applications have they or anybody else discovered and implemented?
So far the only truly successful applications are in MT -- machine translation of languages, natural and artificial. Can anybody point to any other applications that are mission critical for any business or government organization anywhere?
Stephen Young> Yup. My 17yo only managed 94% in his Math exam. He got 6% wrong. Hopeless - he'll never amount to anything.
The LLMs have been successful in passing various tests at levels that match or surpass the best humans. But that's because they cheat: they have access to a huge amount of information on the WWW about a huge range of tests. But when they are asked routine questions for which the answers or the methods for generating answers cannot be found, they make truly stupid mistakes.
No mission-critical system that guides a car, an airplane, a rocket, or a farmer's plow can depend on such tools.
Dan Brickley> Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance.
Yes, I enthusiastically agree. We must always ask questions. We must study how LLMs work, what they do, and what their limitations are. If they cannot solve some puzzle, it's essential to find out why. Noticing a failure on one problem is not an excuse for giving up. It's a clue for guiding the search.
Alex> I'm researching how LLMs work. And we will really find out where they will be used after the hype in 3-5 years.
Yes. But that is when everybody else will have won the big contracts to develop the mission-critical applications.
Now is the time to do the critical research on where the strengths and limitations are. Right now, the crowd is having fun building toys that exploit the obvious strengths. The people who are doing the truly fundamental research are exploring the limitations and how to get around them.
John
Ricardo, Alex, Anatoly, and anybody who is working with or speculating about LLMs for generative AI,
LLMs have proved to be valuable for machine translation of languages. They have also been used to implement many kinds of toys that appear to be impressive. But nobody has shown that LLM technology can be used for any mission-critical applications of any kind -- i.e., any applications for which a failure would cause a disaster (financial, human, or both).
Question: Companies that are working on generative AI are *taking* a huge amount of money from investors. Have any of them produced any practical applications that are actually *making* money? Generative AI is now at the top of the hype cycle. That implies an impending crash into the trough of disillusionment. When will that crash occur? Unless anybody can demonstrate applications that make money, the investors are going to be disillusioned.
To Ricardo> Those are interesting hypotheses about consciousness in your note below. But none of them have any significant implications for AI, ontology, or the possibility of money-making applications of LLMs.
One important point: Nobody suggests that anything in the cerebellum is conscious. The results from the cerebellum that are reported to the cortex are critical, especially since the cerebellum has more than four times as many neurons as the cerebral cortex. There is also strong evidence that the cerebellum is essential for complex mathematics. (See Section6.pdf.)
Implication: AI methods that simulate processes in the cerebral cortex (such as natural language processing by LLMs) cannot do the heavy duty computation that is done by neurons in the cerebellum -- and that includes the most complex logic and mathematics.
See the summary in Section6.pdf and my other references below.
John
----------------------------------------
From: "Ricardo Sanz" <ricardo.sanz.bravo(a)gmail.com>
Hi,
JFS>> What parts of the brain are relevant for any sensation of consciousness?
So far, the question of the neural correlates of consciousness (NCC) is still unresolved. This was the theme of the Chalmers-Koch wager. There are too many theories and not enough relevant experimental data to decide.
The most repeated theory is that consciousness is hosted in thalamo-cortical reentrant loops, linking the cortex (the sensorimotor data processor) and the thalamus (the main relay station of the brain). This is yet to be demonstrated.
Another widely repeated theory was that the NCC was a train of 40 Hz signal waves across the whole brain.
The boldest to me, however, is macroscopic quantum coherence in the axon microtubules. This is called the Orchestrated Objective Reduction theory (Orch-OR).
Best,
Ricardo
On Mon, Oct 2, 2023 at 5:40 AM John F Sowa <sowa(a)bestweb.net> wrote:
That article shows several points: (1) The experts on the subject don't agree on basic issues. (2) They are afraid that too much criticism of one theory will cause neuroscientists to consider all theories dubious. (3) They don't have clear criteria for what kinds of observations would or would not be considered relevant to the issues.
But I want to mention some questions I have: What parts of the brain are relevant for any sensation of consciousness? All parts? Some parts? Some parts more than others? Which ones?
From common experience, we know that complex activities require a great deal of conscious attention when we're first learning them. But after we learn them, they become almost automatic, and we can perform them without thinking about them. Examples: learning to ski vs. skiing smoothly on moderate hills vs. skiing on very steep or complex surfaces. The same issues apply to any kind of skill: driving a car, driving a truck, flying a plane, swimming, dancing, skating, mountain climbing, working in any profession of any kind -- indoors, outdoors, on a computer, with any kinds of tools, instruments, conditions, etc.
In every kind of skill, the basic techniques become automatic and can be performed with a minimum of conscious attention. There is strong evidence that the effort in the cerebrum (AKA the cerebral cortex) is conscious, but expert skills are controlled by the cerebellum, which is not conscious. There is a brief discussion of the cerebellum in Section6.pdf (see the latest excerpt I sent, which is dated 28 Sept 2023).
For more about the role of the cerebellum, see the article and video of a man who was born without a cerebellum and survived: A Man's Incomplete Brain Reveals Cerebellum's Role In Thought And Emotion. https://www.npr.org/sections/health-shots/2015/03/16/392789753/a-man-s-inco…
John
----------------------------------------
From: "Nadin, Mihai" <nadin(a)utdallas.edu>
Dear and respected colleagues:
The issue does not go away:
https://theconversation.com/consciousness-why-a-leading-theory-has-been-bra…
I have no dog in this race!
Mihai Nadin
https://www.nadin.ws
https://www.anteinstitute.org
Google Scholar
Logical Graphs • Formal Development 1
• https://inquiryintoinquiry.com/2023/09/15/logical-graphs-formal-development…
Recap —
A first approach to logical graphs can be found in the article linked below.
Logical Graphs • First Impressions
• https://inquiryintoinquiry.com/2023/08/24/logical-graphs-first-impressions/
That introduces the initial elements of logical graphs and hopefully supplies
the reader with an intuitive sense of their motivation and rationale.
Formal Development —
Logical graphs are next presented as a formal system by going back to the
initial elements and developing their consequences in a systematic manner.
The next order of business is to give the precise axioms used to develop
the formal system of logical graphs. The axioms derive from C.S. Peirce's
various systems of graphical syntax via the “calculus of indications”
described in Spencer Brown's “Laws of Form”. The formal proofs to follow
will use a variation of Spencer Brown's annotation scheme, marking each
step of the proof according to which axiom is invoked to license the
corresponding syntactic transformation, whether it applies to graphs
or to strings.
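For readers who want to experiment, here is a minimal sketch in Python of
the two initial equations of Spencer Brown's primary arithmetic, from which
the axioms mentioned above derive. Representing forms as strings of nested
parentheses is an ad hoc convenience for this example, not the graphical
syntax used in this series:

    # Spencer Brown's two initials, applied to parenthesized forms:
    #   Condensation (calling):   ()()  =  ()
    #   Cancellation (crossing):  (())  =  [blank]

    def reduce_form(form):
        # Apply the two initials until neither rule fires.
        while True:
            reduced = form.replace("(())", "").replace("()()", "()")
            if reduced == form:
                return form
            form = reduced

    # A closed form evaluates to the mark "()" or to the blank "".
    print(repr(reduce_form("(())")))     # ''   by cancellation
    print(repr(reduce_form("()()()")))   # '()' by condensation, twice
    print(repr(reduce_form("(()())")))   # ''   condensation, then cancellation

Each reduction step corresponds to citing one of the initials to license
a syntactic transformation, in the spirit of the annotation scheme
described above.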
Regards,
Jon
cc: https://www.academia.edu/community/VrW8bL
cc: https://mathstodon.xyz/@Inquiry/111070230310739613