I received an offline note that said I was being too negative about the power of the LLM-based technology.
I never wanted to give that impression. I am enthusiastic about that technology and its potential.
But I am also enthusiastic about the achievements of the 70 years of AI and computer science. The most important new developments take advantage of both. I often criticize attempts to use LLMs for applications that they cannot support -- deductive reasoning is a prime example.
I am most enthusiastic about the hybrids, but I am critical of attempts to make LLMs do things that they cannot achieve by themselves. Therefore, I frequently comment on the failures of LLM applications that do not strike a proper balance. There are many, many examples. Many of the ones I discuss were sent to me by other people -- often in offline notes by people who would prefer not to be mentioned.
If anybody thinks that I have not achieved a proper balance in one or more notes, please cite the examples. Some people do so on the lists and others send me offline notes.
John
Ravi,
There is a huge difference between the theoretical issues about what LLMs (or the smaller SLMs) can do as the underlying technology and what any particular software system can do.
The limitations of LLM technology (summarized in the note that started this thread) cannot be overcome by systems that just add various interfaces to the LLMs. But applications that combine LLMs with other technology (from AI, computer science, or many kinds of application areas) can support a much wider range of functionality.
Examples that we have discussed before include Wolfram's use of LLMs to support an English-like front end to their powerful Mathematica system. The result does everything that anyone has done with Mathematica, while the new front end gives users a simpler and friendlier interface. Many other companies are supporting such technology with varying degrees of success: they implement a friendly interface to their previously existing systems.
Our VivoMind system in 2010 included technology that was very powerful and ACCURATE for applications that cannot be done with LLMs even today. See https://jfsowa.com/talks/cogmem.pdf
Our new Permion.ai system combines a newer version of what VivoMind could do with LLMs to support the interface. Arun Majumdar and I have discussed these issues in talks that we gave in the past year or so.
I believe that is the wave of the future: use LLMs as one component of an AI system that uses other kinds of technology to implement functionality that LLMs, by themselves, cannot support.
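As a toy illustration of that division of labor (a sketch in Python, where every name is hypothetical and the "LLM" step is faked with a lookup table -- a real system would call a model there): the language model handles only the translation from English to a formal expression, and a symbolic component does the exact, checkable evaluation.

    import ast
    import operator

    # Exact arithmetic over the syntax tree -- the symbolic component.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

    def fake_llm_translate(question: str) -> str:
        # Stand-in for the LLM step: English -> formal notation.
        table = {"What is two plus two?": "2 + 2",
                 "What is three times seven?": "3 * 7"}
        return table[question]

    def symbolic_eval(expr: str) -> int:
        # The symbolic step: deterministic evaluation, no pattern guessing.
        def walk(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    print(symbolic_eval(fake_llm_translate("What is three times seven?")))  # 21

However trivial the example, the point is the architecture: the LLM never does the arithmetic, and the symbolic component never parses English.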
Answer to your question: The features you're asking for in the note below would be very easy to implement -- just add an on/off button for features you don't want. That does not require new technology. It just requires somebody to add that button.
John
-------------------------------------------------------------------
From: "Ravi Sharma" <drravisharma(a)gmail.com>
John
Are we at a point where
1. We can turn AI off, say, on phone apps and desktops?
2. We can limit the content that AI can access as input, to focus the results better?
Regards
Thanks. Ravi (Dr. Ravi Sharma, Ph.D., USA)
Michael,
The examples you cite illustrate the strengths and weaknesses of LLMs. They show why multiple methods of evaluation are necessary.
1. The failures mentioned in paragraph 1 show that writing a program requires somebody or something that can understand a problem statement and generate a sequence of commands (in some detailed notation) to specify a method for solving that problem. LLMs can't do that.
2. The second paragraph shows that ChatGPT had a better selection of answers available in May, or perhaps an improvement in its ability to find answers. It's possible that very few dental clinicians had ever used ChatGPT for that purpose. Your experiment and the work by the dental clinicians in India may have added enough new patterns that dental clinicians worldwide would have benefited.
3. The third paragraph shows how ChatGPT learns how to do what it does best: translate from one notation to another. Since you did all the problem analysis to generate Python with miscellaneous errors, it learned how to translate your personal dialect of Python to the official Python syntax. That is an excellent example of LLMs at their best. It was learning how to translate, not learning how to understand.
4. I would say that there is a major difference. Wikipedia is not improved by any method of learning (by humans or machines). Instead, some articles are excellent products of collaboration by experts on the subject matter. But other articles were written hastily by people who don't have the expertise or the patience to do thorough research on the topic. The Wikipedia editors usually mark those articles that require further attention. But there are many articles that fall between the cracks -- nobody knows whether they are accurate or not.
John
----------------------------------------
From: "Michael DeBellis" <mdebellissf(a)gmail.com>
[Paragraph 1] I agree. I've asked ChatGPT and Copilot for SPARQL queries -- nothing extremely complicated, either things I thought I would ask for rather than going back to the documentation, or in some cases queries to get DBpedia or Wikidata info, because I find the way they structure data not very intuitive, and it takes me forever to figure out how to find things like all the major cities in India. (If anyone knows some good documentation on the DBpedia or Wikidata models, please drop a note.) I think part of the problem is that people see what looks like well-formatted code and assume it actually works. None of the SPARQL queries I've ever gotten worked.
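For what it's worth, a query along the following lines is roughly what I was after (a sketch using the SPARQLWrapper library; the vocabulary choices -- dbo:City, dbo:country, dbo:populationTotal -- are my best guesses, and guessing at exactly this kind of thing is the problem):

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX dbr:  <http://dbpedia.org/resource/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?name ?pop WHERE {
            ?city a dbo:City ;
                  dbo:country dbr:India ;
                  rdfs:label ?name ;
                  dbo:populationTotal ?pop .
            FILTER (lang(?name) = "en" && ?pop > 1000000)
        }
        ORDER BY DESC(?pop)
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["name"]["value"], row["pop"]["value"])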
[2] We did an experiment in February this year with dental clinicians in India, where we gave them a bunch of questions and had them use ChatGPT to get answers. They rated the answers very highly, even though almost all of them were incomplete, out of date, or had minor or major errors. On the other hand, when I ran the same questions through ChatGPT in May (in both cases I used 3.5), the results were radically different. Almost all the answers were spot on.
[3] And for coding, I have to say I find the AI support in PyCharm (my Python IDE) to be a great time saver. Most of the time now I never finish typing: the AI figures out what I'm doing from the patterns in my code, puts the suggested completion in grey, and all I do is hit tab. It's also interesting how it learned. My code is fairly atypical Python, because it involves manipulating knowledge graphs, and at first I was getting mostly worthless suggestions. But after a few days it figured out the patterns for reading and writing to the graph, and it has been an incredible benefit. I like it for the same reason I always copy and paste names whenever I can rather than typing them: it drastically cuts down on typing errors.
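To give a sense of what I mean by atypical, here is a simplified sketch of the read/write pattern the completion engine picked up on (using rdflib and a made-up namespace, not my actual code):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/clinic#")  # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)

    # Writing: adding triples instead of ordinary attribute assignment.
    g.add((EX.patient1, RDF.type, EX.Patient))
    g.add((EX.patient1, EX.hasCondition, EX.Gingivitis))
    g.add((EX.patient1, EX.age, Literal(42)))

    # Reading: matching triple patterns instead of attribute access.
    for patient in g.subjects(RDF.type, EX.Patient):
        for condition in g.objects(patient, EX.hasCondition):
            print(patient, condition)

Once it had seen a few days of g.add(...) and g.objects(...) calls, the suggested completions started landing on exactly those patterns.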
[4] All this reminds me of the debates people had about Wikipedia. Some people thought it was worthless because you can always find some example of vandalism where there is garbage in an article. Other people think it is the greatest thing on the Internet. The answer is somewhere in the middle. Wikipedia is incredibly useful, and it is an amazing example of how people can collaborate just to contribute their knowledge -- the way people collaborate on that site is so different from most of the Internet -- but you should never use it as a primary source. Always check the references. That's the way I feel about Generative AI. Like Wikipedia, I think it is a great resource, in spite of the fact that some people claim it can do much more than it really can, and that it can still be wrong. It's just another tool and, if used properly, an incredibly useful one.
Michael
https://www.michaeldebellis.com/blog
On Saturday, July 27, 2024 at 4:41:24 PM UTC-7 John F Sowa wrote:
Another of the many reasons why Generative AI requires other methods -- such as the 70 years of AI and computer science -- to test, evaluate, and correct anything and everything that it "generates".
As the explanation below says, it does not "UNDERSTAND" what it is doing. It just finds and reproduces patterns that occur in its huge volume of data. Giving it more data gives it more patterns to choose from. But it does nothing to help it understand any of them.
This method enables it to surpass human abilities on IQ tests, law exams, medical exams, etc. -- for the simple reason that the answers to those exams can be found somewhere on the WWW. In other words, Generative AI does a superb job of CHEATING on exams. But it is hopelessly clueless in solving problems whose solution depends on understanding the structure and the goal of the problem.
For similar reasons, the article mentions that self-driving cars fail in complex environments, such as busy streets in city traffic. The number and kinds of situations are far more varied and complex than anything they have been trained on. Carnegie Mellon University is involved in more testing of self-driving cars because Pittsburgh has the most complex and varied patterns. It has more bridges than any other city in the world. It also has three major rivers, many hills and valleys, steep winding roads, complex intersections, tunnels, foot traffic, and combinations of any or all of the above.
Drivers who test self-driving cars in Pittsburgh say that they can't go for twenty minutes without having to grab the steering wheel to prevent an accident. (By the way, I learned to drive in Pittsburgh. Then I went to MIT and Harvard, where the Boston patterns are based on 300-year-old cow paths.)
John
Logic of Relatives
• https://inquiryintoinquiry.com/2024/08/05/logic-of-relatives-a/
Introduction —
The logic of relatives, more precisely, the logic of relative terms,
is the study of relations as represented in symbolic forms called
rhemes, rhemata, or relative terms. The treatment of relations
by way of their corresponding relative terms affords a distinctive
perspective on the subject, even though all angles of approach must
ultimately converge on the same formal subject matter.
The consideration of relative terms has its roots in antiquity
but it entered a radically new phase of development with the
work of Charles Sanders Peirce, beginning with his paper
“Description of a Notation for the Logic of Relatives,
Resulting from an Amplification of the Conceptions
of Boole's Calculus of Logic” (1870).
References —
• Peirce, C.S., “Description of a Notation for the Logic of Relatives,
Resulting from an Amplification of the Conceptions of Boole's Calculus
of Logic”, Memoirs of the American Academy of Arts and Sciences 9,
317–378, 1870. Reprinted, Collected Papers CP 3.45–149. Reprinted,
Chronological Edition CE 2, 359–429.
• https://www.jstor.org/stable/25058006
• https://archive.org/details/jstor-25058006
• https://books.google.com/books?id=fFnWmf5oLaoC
Readings —
• Aristotle, “The Categories”, Harold P. Cooke (trans.),
pp. 1–109 in Aristotle, Vol. 1, Loeb Classical Library,
William Heinemann, London, UK, 1938.
• Aristotle, “On Interpretation”, Harold P. Cooke (trans.),
pp. 111–179 in Aristotle, Vol. 1, Loeb Classical Library,
William Heinemann, London, UK, 1938.
• Aristotle, “Prior Analytics”, Hugh Tredennick (trans.),
pp. 181–531 in Aristotle, Vol. 1, Loeb Classical Library,
William Heinemann, London, UK, 1938.
• Boole, George, An Investigation of the Laws of Thought
on Which are Founded the Mathematical Theories of Logic
and Probabilities, Macmillan, 1854. Reprinted with
corrections, Dover Publications, New York, NY, 1958.
• Peirce, C.S., Collected Papers of Charles Sanders Peirce,
Vols. 1–6, Charles Hartshorne and Paul Weiss (eds.),
Vols. 7–8, Arthur W. Burks (ed.), Harvard University Press,
Cambridge, MA, 1931–1935, 1958. Cited as CP volume.paragraph.
• Peirce, C.S., Writings of Charles S. Peirce: A Chronological Edition,
Volume 2, 1867–1871, Peirce Edition Project (eds.), Indiana University
Press, Bloomington, IN, 1984. Cited as CE 2.
Resources —
Charles Sanders Peirce
• https://mywikibiz.com/Charles_Sanders_Peirce
Relation Theory
• https://oeis.org/wiki/Relation_theory
Survey of Relation Theory
• https://inquiryintoinquiry.com/2024/03/23/survey-of-relation-theory-8/
Peirce's 1870 Logic of Relatives
• https://oeis.org/wiki/Peirce%27s_1870_Logic_Of_Relatives_%E2%80%A2_Overview
Regards,
Jon
cc: https://www.academia.edu/community/5AEQjj
Relations & Their Relatives • 1
• https://inquiryintoinquiry.com/2024/07/31/relations-their-relatives-1-a/
All,
Sign relations are special cases of triadic relations in much
the same way binary operations in mathematics are special cases
of triadic relations. It amounts to a minor complication that
we participate in sign relations whenever we talk or think about
anything else, but it still makes sense to try to tease the
separate issues apart as much as we possibly can.
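To make the parallel concrete, take ordinary addition restricted
to a small carrier set. The binary operation is then literally
a triadic relation, namely, the set of all triples (x, y, z)
with x + y = z. A throwaway sketch in Python:

    # Addition on {0, 1, 2, 3}, recast as a triadic relation:
    # the set of all triples (x, y, z) with x + y = z.
    X = range(4)
    plus = {(x, y, z) for x in X for y in X for z in X if x + y == z}

    # The "operation" view recovers z from (x, y), when it exists.
    def apply_rel(rel, x, y):
        return {z for (a, b, z) in rel if (a, b) == (x, y)}

    print(apply_rel(plus, 1, 2))  # {3}
    print(apply_rel(plus, 3, 3))  # set(), 6 lies outside the carrier

A sign relation is a set of triples in the same sense, except
that its elements are triples of (object, sign, interpretant)
rather than triples of numbers.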
As far as relations in general go, relative terms are often
expressed by means of slotted frames like “brother of __”,
“divisor of __”, and “sum of __ and __”. Peirce referred to
these kinds of incomplete expressions as “rhemes” or “rhemata”
and Frege used the adjective “ungesättigt” or “unsaturated” to
convey more or less the same idea.
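A rough computational analogy, and only an analogy: an
unsaturated term behaves like a function still waiting for its
arguments, and filling one slot of a two-place rheme leaves a
one-place rheme.

    # Rhemes as functions awaiting arguments (an analogy,
    # not Peirce's own formalism).
    from functools import partial

    def sum_of(x, y):               # "sum of __ and __", two open slots
        return x + y

    sum_of_3 = partial(sum_of, 3)   # fill one slot: "sum of 3 and __"
    print(sum_of_3(4))              # 7, fully saturated, a complete term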
Switching the focus to sign relations, it's fair to ask what kinds
of objects might be denoted by pieces of code like “brother of __”,
“divisor of __”, and “sum of __ and __”. And while we're at it, what
is this thing called “denotation”, anyway?
Resources —
Relation Theory
• https://oeis.org/wiki/Relation_theory
Triadic Relations
• https://oeis.org/wiki/Triadic_relation
Sign Relations
• https://oeis.org/wiki/Sign_relation
Survey of Relation Theory
• https://inquiryintoinquiry.com/2024/03/23/survey-of-relation-theory-8/
Peirce's 1870 Logic Of Relatives
• https://oeis.org/wiki/Peirce%27s_1870_Logic_Of_Relatives_%E2%80%A2_Overview
Regards,
Jon
cc: https://www.academia.edu/community/Vj80Dj