Alex,
I am not talking about a "standard" or "official" or "universal" top-level processor.
This is a topic I've discussed and published on before: to be safe, secure, and intelligent, an AI system (a robot or just an intelligent processor) should have a top-level control unit that serves the same basic function as the human (and other mammalian) frontal lobes: acting as the conscious central control unit.
As you say below, such a system would have a supervisor, scheduler, and other system-level processes. Even a mouse-level intelligence would be far superior to any of today's so-called "intelligent systems".
The goal of a human-level AGI is far in the future. I doubt that it can be achieved in the 21st century.
This is the topic of my talk in the recent Ontology Summit series; you can read the slides or view the YouTube video. There is much more to say, and I'll include more references later. But I believe this topic is more important than trying to develop a universal formalization of anything -- primarily because any such formal system would very rapidly become obsolete.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
About the "top-level processor": I am too far from robotics to discuss robot OS structure. I hope there are a Supervisor, a Scheduler, and other system-level processes there. Whether there is any subsystem one could call a "top-level processor", I don't know.
Alex
Fri, 11 Oct 2024 at 23:25, John F Sowa <sowa(a)bestweb.net>:
Alexandre Rademaker: We don’t necessarily need to throw away the meanings. A safe translation should account for a 1-N mapping from surface to logical representations. Context or even some statistical preference can select the most preferable reading.
Yes. That is why we need a top-level symbolic processor that can determine what to do for any particular issue that may arise.
Alex Shkotin: With robots, it's better not to use vague terms or sentences. It's dangerous. Good robots will say "I don't understand"; bad ones can make a mess of things.
As I said to Alexandre, the top-level processor should use symbolic methods for determining what to do.
Alex: My way is to represent knowledge formally. The precision of knowledge itself remains the same initially and may be better after we apply knowledge processing algorithms to this formalized knowledge.
Think of the top-level symbolic processor as a gate-keeper. It is in the best position to determine what to do. In many cases, the best thing is to ask a question or even a series of questions before making a decision.
The top-level processor may use LLMs in the simplest and most secure way: Translate a query in any natural language to and from whatever internal form the system uses. After the top-level processor has determined what to do, it can pass the translated result to whatever subroutines can handle it. Those subroutines may or may not use LLMs or many, many other tools of various kinds.
Basic point: One size does not fit all. The top-level processor determines which of many internal processors should or should not be invoked. Anything that seems dangerous can be sent to a security system, which may or may not reject the input or even send it to proper authorities to handle it.
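The gatekeeper role described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not anyone's actual implementation: the classifier, the subsystem names, and `translate_to_internal` are all hypothetical placeholders.

```python
# Minimal sketch of a top-level "gatekeeper" processor, as described above.
# All names (translate_to_internal, handle_math, etc.) are hypothetical.

def translate_to_internal(query: str) -> dict:
    """Stand-in for an LLM-based translation from natural language
    to the system's internal form (here, a trivial dict)."""
    return {"text": query.lower(), "tokens": query.lower().split()}

def looks_dangerous(request: dict) -> bool:
    """Stand-in for a symbolic safety check."""
    return any(word in request["tokens"] for word in ("delete", "override"))

def handle_math(request): return "routed to arithmetic subsystem"
def handle_security(request): return "routed to security subsystem"
def ask_user(request): return "need clarification: " + request["text"]

def top_level_processor(query: str) -> str:
    """Gatekeeper: decide which internal processor should handle a query,
    or ask a question before committing to a decision."""
    request = translate_to_internal(query)
    if looks_dangerous(request):
        return handle_security(request)
    if any(t.isdigit() for t in request["tokens"]):
        return handle_math(request)
    return ask_user(request)  # default: ask before acting
```

The point of the sketch is only the control flow: symbolic checks run first, and anything doubtful falls through to a question rather than an action.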
John
Alican,
Fundamental difference: A vague statement has a broad range of meaning. A more precise statement has a narrower range of meaning. Therefore, a vague statement is more likely to be true. A more precise statement is more likely to be false.
Alican: Doesn't narrowing down the meaning of a symbol typically lead to a more "precise" interpretation?
Yes. And therefore, the more precise statement is more likely to be false.
Alican: Also, from my observation of Alex's work, in my opinion, that's what he is trying to achieve.
Yes. And that is why I keep telling him to avoid turning a true but vague statement into a precise but false statement.
Example: buying an ice cream cone and specifying a perfect sphere of vanilla ice cream that is exactly 10 centimeters in diameter, in a cone that is precisely 9.7 cm in diameter at the top and 15 cm in length.
That is very precise, very stupid, and likely to get you laughed at or thrown out of the store.
I used a trivial example of an ice cream cone. But the same principle applies to every statement about a continuum of any kind. The degree of precision should be appropriate to the requirements of the subject matter. That is true of a continuum of any and every kind for any purpose of any and every kind.
John
----------------------------------------
From: "Alican Tüzün" <tuzunalican(a)gmail.com>
John and Alex,
@John
Doesn't narrowing down the meaning of a symbol typically lead to a more "precise" interpretation?
If a set of symbols (or a sign vehicle) signifies a more limited set of immediate objects, it results in a more specific reference. This increased specificity can lead to a more focused interpretation (the effect or interpretation in the mind). Overall, sign creation will be more "precise".
E.g., the number 1 and the word "One". The latter symbol can be interpreted in more ways, the former in fewer. Overall, isn't sign-making with the number 1 easier or, in the words of your discussion, more "precise"?
If I understood something wrong, please correct me.
@Alex
Also, from my observation of Alex's work, in my opinion, that's what he is trying to achieve. Also correct me, Alex, if I understood wrong.
Best,
Alican
Ravi,
Probability is another method for dealing with many kinds of continuous issues. Fortunately, the mathematical methods of probability and statistics are very well developed.
This is another kind of symbolic reasoning that LLMs, by themselves, cannot handle. A system that uses symbolic methods can invoke reasoning methods of many kinds: formal logic, probability, statistics, and various computational tools.
Arithmetic, for example, is ideal for a computer, but LLMs are horrible for anything except trivial computations. There are 60+ years of symbolic reasoning methods in AI and computer science. LLMs can't replace them.
General principle: Symbolic methods must be in control of the overall system. They can determine which, when, and how other methods, including LLMs, can be used. They can also prevent the dangers caused by runaway AI methods.
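As a toy illustration of that principle, a controlling symbolic layer can keep arithmetic exact by never delegating it to an LLM, using ordinary rational arithmetic instead. The routing rule here is invented for illustration only.

```python
# Sketch: a symbolic controller keeps exact methods in charge of arithmetic.
# The dispatch rule and task names are invented for illustration.
from fractions import Fraction

def exact_mean(values):
    """Exact rational mean -- the kind of computation a computer does
    perfectly and an LLM handles unreliably."""
    fracs = [Fraction(v) for v in values]
    return sum(fracs, Fraction(0)) / len(fracs)

def controller(task: str, payload):
    """Symbolic dispatch: exact tools for exact tasks."""
    if task == "mean":
        return exact_mean(payload)
    raise ValueError(f"no exact method registered for task {task!r}")
```

For example, `controller("mean", ["1/3", "1/6", "1/2"])` returns the exact value `Fraction(1, 3)`, with no rounding and no statistical guessing.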
John
----------------------------------------
From: "Ravi Sharma" <drravisharma(a)gmail.com>
John
What happens to statistical entities, which most entities are? If we cannot define them by FOL, what do we do?
I realize we can apply logic to artifacts (real) that are statistical in nature, for precise filtering etc., as an example.
This brings me back to the question of what tools are going to be available in future cyber or AI scenarios that would have some ability to understand context, provenance, real or virtual tagging, etc., so that we can distinguish real vs. "processed" reality?
Thanks,
Ravi
Kingsley,
Your reply shows how and why many applications of LLMs can be valuable.
KI: [They] can be more concise, aligned with the objectives of the message. In my experience, NotebookLM encourages a more disciplined approach to communication. It also highlights an often-overlooked aspect of LLMs—they’re just tools. Operator skills still significantly impact the output, meaning one size still doesn’t fit all in our diverse world :)
I agree that they can gather valuable information and produce useful results, but the human user has to evaluate the results. In your example, 6 out of 8 steps depend on some human to accept, reject, or guide what the LLM-based technology is doing.
Our Permion.ai company uses LLMs for what they do best. The symbolic methods of our VivoMind company (prior to 2010) were very advanced for their time. The new Permion.ai technology combines the best features of the symbolic methods with the LLM methods. It builds on the good stuff, rejects the bad stuff, and gets advice from the users about the doubtful stuff.
John
----------------------------------------
From: "Kingsley Idehen' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
Hi Dan,
On 10/11/24 8:18 AM, 'Dan Brickley' via ontolog-forum wrote:
Something like https://www.darpa.mil/work-with-us/heilmeier-catechism then?
- What are you trying to do? Articulate your objectives using absolutely no jargon.
- How is it done today, and what are the limits of current practice?
- What is new in your approach and why do you think it will be successful?
- Who cares? If you are successful, what difference will it make?
- What are the risks?
- How much will it cost?
- How long will it take?
- What are the mid-term and final “exams” to check for success?
Yes, but it can be more concise, aligned with the objectives of the message. In my experience, NotebookLM encourages a more disciplined approach to communication. It also highlights an often-overlooked aspect of LLMs—they’re just tools. Operator skills still significantly impact the output, meaning one size still doesn’t fit all in our diverse world :)
Kingsley
Kingsley,
I strongly agree with your 8 point method. And it strongly supports my many comments about the need to evaluate and correct output generated by LLMs.
Note that points (1) and (2) are human preparatory work. (4) is human evaluation. (5) is human correction. (6 & 7) are more evaluation. And (8) is the final application.
In summary, 6 out of the 8 points depend on human work. With current LLM applications, human evaluation is far more reliable than current computational methods. No claim of ARTIFICIAL GENERAL intelligence can be based on a system that requires that much human intelligence to make the results dependable.
I am not rejecting the value of the LLM-based technology. I am merely rejecting the claims that it is on the way toward AGI.
John
___________________
From: Kingsley Idehen
Hi Everyone,
Here’s a new example of what’s possible with Google’s NotebookLM as an AI Agent for creating audio summaries from a variety of sources (e.g., clipboard text, doc urls, pdfs etc.).
How-To: Generate a Podcast with NotebookLM for Distribution Across Social Media Platforms
Communicating complex, thorny issues to a target audience requires delivering content in their preferred format. For humans, the preferred communication modality typically follows this order: video, audio, and then text. In the age of GenAI, leveraging tools like NotebookLM makes it easier than ever to streamline communication. Here’s a step-by-step guide on how to create and distribute a podcast using NotebookLM:
- Collate notes and topic references (e.g., hyperlinks)
- Feed the collated material into NotebookLM
- Wait a few minutes for NotebookLM to generate a podcast
- Listen to the initial version
- Tweak the material (add or remove content as needed)
- Listen to the revised edition
- If satisfied, add the podcast to an RSS or Atom feed
- Share the feed for subscription by interested parties
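Step 7 above (adding the generated audio to an RSS feed) is plain XML and needs no AI tooling at all. Here is a minimal sketch using only the Python standard library; the titles and URL are placeholders, not real resources.

```python
# Minimal podcast RSS feed generator (step 7 of the list above).
# All titles and URLs are placeholders.
import xml.etree.ElementTree as ET

def make_feed(title: str, episode_title: str, audio_url: str) -> str:
    """Build a one-episode RSS 2.0 feed as a string."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = episode_title
    # The enclosure element is what podcast clients actually download.
    ET.SubElement(item, "enclosure",
                  url=audio_url, type="audio/mpeg", length="0")
    return ET.tostring(rss, encoding="unicode")
```

A real feed would also carry description, pubDate, and the iTunes extension tags, but the skeleton above is already enough for a feed reader to find the audio.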
Alex,
There are two very different issues: (1) Syntactic translation from one notation to another; (2) Semantic interpretation of the source or target notations.
For a formally defined notation, such as FOL or any notation that is defined by its mapping to FOL, there is a single very precise definition of its meaning.
For a natural language, almost every word has a continuous range of meanings. The only words (or phrases) that have a precise meaning are technical terms from some branch of science or engineering. Examples: hydrogen, oxygen, volt, ampere, gram, meter...
If you translate a sentence from a natural language to a formal language, that might narrow down the meaning in the target language. But that very precise meaning may be very different from what the original author intended.
Summary: Translation is not magic. It cannot make a vague sentence precise.
John
_______________________________________
From: "Alex Shkotin"
<alex.shkotin(a)gmail.com>
John,
Let me clarify what I meant by "English is HOL" by example.
Sentence: "I see a blue jay drinking out of the birdbath."
HOL-structure: (I see ((a (blue jay)) (drinking (out of)) (the birdbath)))
where
"of" is a unary operator used in postfix form, applied to "out" as its argument. As a result we get the expression, or term, "(out of)".
But this term is itself a unary operator used in postfix form, applied to "drinking" to create the term "(drinking (out of))", which is a binary operator in infix form applied to two arguments: the left one, "(a (blue jay))", and the right one, "(the birdbath)".
As a result we have a proposition, which is the right argument of another binary operator in infix form, "see", whose left argument is "I".
And we are talking here not about Logic, but about Language.
In every syntactically correct phrase, words are combined: one word is applied to another. The result is something like molecules, but in the World of Words.
How do we get this structure from a chain of words? How do we work with these structures, and to get what? Some pictures? A true|false value?
These are the questions 🔬
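Alex's bracketing can be written down directly as nested data. Below is one possible encoding; the choice of Python tuples is mine for illustration, not Alex's notation.

```python
# Alex's parse of "I see a blue jay drinking out of the birdbath",
# encoded as nested applications of operators to arguments.
# The tuple encoding is illustrative, not Alex's own notation.

subject = ("a", ("blue", "jay"))          # (a (blue jay))
place = ("the", "birdbath")               # (the birdbath)
drinking = ("drinking", ("out", "of"))    # (drinking (out of))
proposition = (drinking, subject, place)  # infix: subject <op> place
sentence = ("see", "I", proposition)      # (I see <proposition>)

def leaves(term):
    """Flatten a nested term back to its word 'atoms'."""
    if isinstance(term, str):
        return [term]
    return [word for part in term for word in leaves(part)]
```

Flattening `sentence` with `leaves` recovers exactly the ten words of the original chain, which shows the structure loses nothing; the open question Alex raises is the reverse direction, from word chain to structure.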
Alex
Information = Comprehension × Extension • Preamble
• https://inquiryintoinquiry.com/2024/10/04/information-comprehension-x-exten…
All,
Eight summers ago I hit on what struck me as a new insight into one
of the most recalcitrant problems in Peirce's semiotics and logic of
science, namely, the relation between “the manner in which different
representations stand for their objects” and the way in which different
inferences transform states of information. I roughed out a sketch of
my epiphany in a series of blog posts then set it aside for the cool of
later reflection. Now looks to be a choice moment for taking another look.
A first pass through the variations of representation and reasoning detects the
axes of iconic, indexical, and symbolic manners of representation on the one hand
and the axes of abductive, inductive, and deductive modes of inference on the other.
Early and often Peirce suggests a natural correspondence between the main modes of
inference and the main manners of representation but his early arguments differ from
his later accounts in ways deserving close examination, partly for the extra points in
his line of reasoning and partly for his explanation of indices as signs constituted by
convening the variant conceptions of sundry interpreters.
Resources —
Inquiry Blog • Survey of Pragmatic Semiotic Information
• https://inquiryintoinquiry.com/2024/03/01/survey-of-pragmatic-semiotic-info…
OEIS Wiki • Information = Comprehension × Extension
• https://oeis.org/wiki/Information_%3D_Comprehension_%C3%97_Extension
C.S. Peirce • Upon Logical Comprehension and Extension
• https://peirce.sitehost.iu.edu/writings/v2/w2/w2_06/v2_06.htm
Regards,
Jon
cc: https://www.academia.edu/community/LGqOKr
cc: https://mathstodon.xyz/@Inquiry/113249701127551380
Alex,
Your statement (from the end of your note) depends on what subject you're talking about. "Let me remind myself that the English language is formal at its core and for the language of communication between robots and people it is better to simply talk about simple English, etc."
No. That depends entirely on the subject matter. If your sentence is about mathematics, it can be translated very accurately to and from a mathematical formula. But if your statement is about what you see when you open your eyes, every word and phrase about the scene would be vague.
Just consider the sentence "I see a blue jay drinking out of the birdbath." There is a continuous infinity of information in the image that you saw. No matter how long you keep describing the situation, a skilled artist could not draw or paint an accurate picture of what you saw.
However, if the artist had a chance to look at the scene for just a few seconds, he or she could draw or paint an image that would be far more accurate than anything you could describe.
That is just one short example of the difference between the discrete (and describable) and the continuous (and indescribable).
Conclusion: An ontology of something that runs on a digital computer can be specified precisely in English or Russian or any other natural language. But an ontology of the real world in all its continuous detail can never be expressed precisely in any language with a discrete set of words or symbols.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
I am happy you agreed here:
JFS:"Alex: "We need to formalize our scientific theories to use computers to their full potential." I agree,..."
AS: And the next step is simply to align our terminology: not necessarily to use the same terms, but to understand the terms used by others.
JFS:"…but the formalization is ALWAYS context dependent. The engineering motto is fundamental:
ALL THEORIES ARE WRONG, BUT SOME ARE USEFUL.
That is true about formalization. It is only precise for subjects that can be expressed in finite bit strings. For 99.9% of all the information we get every second of our lives, vagueness is inescapable. We must deal with it by informal methods of approximation. Any formal statement is FALSE in general, but it may be useful when its limitations are made explicit.
"
AS: We do not use the term "context" when describing the situation in which the entity being studied is located (usually a system in some state and process). Usually it is described by which other systems it interacts with, how, and what happens at the boundary. Remotely acting forces are generally known: gravity and the electromagnetic field. Of course we must take into account external flows of bodies, for example particles in the case of the ISS. By the way, for some systems it is now also necessary to describe their information interaction. You could try to cover all this with the term "context", but it seems that is not usually done. But why not!
I'll write more about finite bit strings later.
In general, our robots must use formal languages and algorithmic reasoning and acting. If they are boring, we will have to endure it.
Let me remind myself that the English language is formal at its core, and that for the language of communication between robots and people it is better simply to talk about Simple English, etc.
Alex
Marco,
I am not in a state of shock.
But I realize that the physicists who vote on this issue are clueless about AI.
John
----------------------------------------
From: "Marco Neumann" <marco.neumann(a)gmail.com>
I take the silence here on ontolog as revealing, are you all still in a state of shock? :)
https://www.nobelprize.org/prizes/physics/2024/summary/
I was certainly surprised and had a look at the motivation by the committee. It states, among other details, that "the technique involves iteratively changing the strength of the connections between the magnets in an attempt to find a minimum value for the energy of the system". In combination with Boltzmann machines, that definitely sounds better than "machines think", but I'm still not entirely convinced it merits the physics award.
Well, I think I am now forced to change my tune when I talk disparagingly about GenAI in the future. Still, for me, applications like ChatGPT are first of all manufactured products, not a science.
Do I now need a physics degree and a degree in brain science to understand GenAI?
Best,
Marco