Dan,
I certainly agree that 99.x% of our knowledge of the world is based on an integration of a
lifetime of experience. There is no way that any of us can say exactly where we first
acquired a huge number of ideas that are thoroughly built into our view of life and the
world. But we can often associate different kinds of knowledge with different periods of
our lives. And with different kinds of people.
For example, if you name any kind of food, I can tell you whether I first encountered it
as a child at home, at the home of some friend or relative, at a restaurant, at school, in
my garden, or while traveling away from home, and in what country or kind of restaurant.
There are many kinds of things I can say I learned from my father, my mother, my
grandmother, friends, relatives, schools, etc. I might not be able to pin down many
specific items, but for many kinds of things I can say in what kind of place, at what
time of life, and with what kinds of people I first discovered them.
And if these are recent things, I can say whether I got them from TV, from reading, from
email, or from browsing, and often from exactly which person or publication. When the source
is important I remember. But even if I don't remember the exact individual, I usually
remember enough that I can find the source with a bit of computer searching.
I certainly have much more background knowledge than I can get from ChatGPT. And I have
downloaded a lot of information that is located on my computer, and I can find or search
for its origin quite easily. But ChatGPT is one of the few computer systems that cannot
tell me anything about where it got the information it uses.
That is definitely not a good feature. It's a very strong reason for using hybrid
systems for reasoning. LLMs are good at translating between different languages and formats,
but sources are extremely important, and computer systems should be able to keep or find
info about sources. It's helpful to know if some item came from Vladimir Putin or the
FBI. (My belief in those two is the opposite of Trump's.)
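Keeping source info is straightforward for a symbolic system: store a provenance record alongside every assertion, as RDF named graphs or reified statements do. A minimal sketch in Python (all class and field names here are hypothetical, for illustration only):

```python
from dataclasses import dataclass

# Hypothetical sketch: every stored statement carries its own
# provenance, so the system can always answer "where did this come from?"

@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    obj: str
    source: str   # who or what asserted it
    date: str     # when it was recorded

class FactStore:
    def __init__(self):
        self.statements = []

    def add(self, subject, predicate, obj, source, date):
        self.statements.append(Statement(subject, predicate, obj, source, date))

    def provenance(self, subject, predicate):
        # Return every (value, source, date) asserted for this subject/predicate.
        return [(s.obj, s.source, s.date)
                for s in self.statements
                if s.subject == subject and s.predicate == predicate]

store = FactStore()
store.add("item-42", "claimed-by", "source A", source="news feed X", date="2025-02-20")
store.add("item-42", "claimed-by", "source B", source="agency report Y", date="2025-02-21")

# Conflicting claims stay distinguishable because each keeps its origin.
print(store.provenance("item-42", "claimed-by"))
```

Nothing prevents an LLM-based system from being wrapped in such a layer; the point is that provenance must be recorded at the symbolic level, since it cannot be recovered from the model's weights afterward.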
Symbolic AI is far better than humans in this regard, and LLMs are far worse than humans.
That is not a good point in favor of LLMs as the primary source of intelligence. They are
certainly useful for what they do. But much more is necessary.
John
----------------------------------------
From: "'Dan Brickley' via ontolog-forum"
<ontolog-forum(a)googlegroups.com>
On Sat, Feb 22, 2025 at 19:54 John F Sowa <sowa(a)bestweb.net> wrote:
Humans can tell you where they got their info,
Not me!
and they can answer your questions about their method of reasoning to derive those
answers.
I can’t! (for 99.x% of my knowledge of the world)
Dan