Functional Logic • Inquiry and Analogy • Discussion 1
• http://inquiryintoinquiry.com/2023/07/06/functional-logic-inquiry-and-analo…
Re: Functional Logic • Inquiry and Analogy • 8
• https://inquiryintoinquiry.com/2023/06/28/functional-logic-inquiry-and-anal…
All,
Post 8 used the following Figure to illustrate
Dewey's example of a simple inquiry process.
Dewey's “Sign of Rain” Example
• https://inquiryintoinquiry.files.wordpress.com/2022/04/deweys-sign-of-rain-…
John Mingers shared the following observations.
<QUOTE JM:>
Liked the example — a couple of questions/comments.
1. In the diagram you have included with the Triadic sign,
although with dotted lines, an interpretive agent.
Now I thought that Peirce was a bit cagey about this.
Wasn't he clear that the interpretant was not to be
identified with an actual interpreter? What is
your thinking on this?
2. I do agree that there needs to be an interpreter
but does it need to be a person? Surely it could
be any organism that can interact with relations?
</QUOTE>
The cool air is something our hero interprets as a sign of rain, and
his thought of rain is an interpretant sign of the very same object.
The relation between the interpretant sign and the interpretive agent
is clear enough as far as a beginning level of description goes. But
a fully pragmatic, semiotic, and system-theoretic account will demand
a more fine-grained analysis of what goes on in the inquiry process.
Speaking very roughly, an interpreter is any agent or system —
animal, vegetable, or mineral — which actualizes or embodies
a triadic sign relation.
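
As a very rough sketch, and purely by way of illustration, one can
model a sign relation extensionally as a set of (object, sign,
interpretant) triples. The following toy code casts Dewey's example
in those terms; the element names and the helper function are
illustrative choices, not Dewey's or Peirce's exact terms.

    # A minimal sketch: a sign relation modeled extensionally as a set of
    # (object, sign, interpretant) triples. The entries gloss Dewey's
    # "Sign of Rain" example; the wording is illustrative, not canonical.

    sign_of_rain = {
        # The cool air is a sign of rain; the thought of rain interprets it.
        ("rain", "cool air", "thought of rain"),
        # The interpretant is itself a sign of the same object, interpreted
        # in turn by the agent's conduct, say, quickening his steps.
        ("rain", "thought of rain", "quickened steps"),
    }

    def interpretants(relation, sign):
        """Return the interpretants a given sign leads to in the relation."""
        return {i for (o, s, i) in relation if s == sign}

    print(interpretants(sign_of_rain, "cool air"))   # {'thought of rain'}

Of course an extensional table of triples captures only the skeleton
of the relation; the interpreter is whatever system realizes the
passage from sign to interpretant.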
Several passages from Peirce will help to flesh out the
bare abstractions. I'll begin collecting them on the
linked blog page and discuss them further as we proceed.
Regards,
Jon
Cf: Systems of Interpretation • 1
https://inquiryintoinquiry.com/2023/05/05/systems-of-interpretation-1-2/
All,
Questions have arisen about the different styles of diagrams
and figures used to represent triadic sign relations in Peircean
semiotics. What do they mean? Which style is best? Among the
most popular pictures, some use geometric triangles while others
use the three‑pronged graphs Peirce used in his logical graphs
to represent triadic relations.
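
As an illustrative aside, the difference between the two styles can
be put in data-structure terms: a triangle renders the triad as three
pairwise edges, while a three-pronged graph keeps a single irreducibly
triadic connection. A minimal sketch, with names chosen only for
illustration:

    # Contrast between the two diagram styles, encoded as data.
    # (object, sign, interpretant) from Dewey's "Sign of Rain" example.
    triad = ("rain", "cool air", "thought of rain")

    # Triangle style: three dyadic edges; the triadic unity is lost.
    triangle_edges = {(triad[0], triad[1]),
                      (triad[1], triad[2]),
                      (triad[0], triad[2])}

    # Three-pronged style: one ternary edge; the triad is kept whole.
    three_pronged_edge = {triad}

Distinct triadic relations can project onto the same set of dyadic
edges, so the triangle style, read naively as three dyadic relations,
underdetermines what the three-pronged style records directly.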
Diagrams and figures, like any signs, can serve to communicate
the intended interpretants and thus to coordinate the conduct of
interpreters toward the intended objects — but only in communities of
interpretation where the conventions of interpretation are understood.
Conventions of interpretation are by comparison far more difficult to
communicate.
That brings us to the first question we have to ask about the possibility
of communication in this area, namely, what conventions of interpretation
are needed to make sense of these diagrams, figures, and graphs?
Regards,
Jon
The subject line sounds like the beginning of a joke. Unfortunately for them, it wasn't a joke. See the news item below. There will be more news about this case later today.
For anyone who may be interested in ChatGPT and related systems, you can check the slides and the video of a talk by my colleague Arun Majumdar and me on May 31. For the slides by John Sowa, see EvaluatingGPT--JohnSowa_20230531.pdf (ontologforum.s3.amazonaws.com)
For the Video recording of both talks and a long Q/A discussion, see https://ontologforum.s3.amazonaws.com/General/EvaluatingGPT--JohnSowa-ArunM…
John
___________________________
New York lawyers blame ChatGPT for tricking them into citing ‘bogus legal research’
Excerpts:
Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually invented by the artificial intelligence-powered chatbot.
Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client's case against the Colombian airline Avianca for an injury incurred on a 2019 flight. The chatbot, which has fascinated the world with its production of essay-like answers to prompts from users, suggested several cases involving aviation mishaps that Schwartz hadn't been able to find through usual methods used at his law firm.
The problem was, several of those cases weren't real or involved airlines that didn’t exist. Schwartz told Judge P. Kevin Castel he was “operating under a misconception ... that this website was obtaining these cases from some source I did not have access to.”
He said he “failed miserably” at doing follow-up research to ensure the citations were correct. “I did not comprehend that ChatGPT could fabricate cases,” Schwartz said.
The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses. “Can we agree that's legal gibberish?” Castel asked.
The judge said he'll rule on sanctions at a later date.
Source: https://www.nbcbayarea.com/news/national-international/new-york-lawyers-bla…
On May 31st, I presented a talk on "Evaluating and reasoning with and about GPT", and Arun Majumdar presented a demo that shows how the technology developed by Permion.ai LLC supports those methods. These methods also show that ideas developed by Charles Sanders Peirce, especially in the last decade of his life, are still at the forefront of some of the latest developments in AI.
The Permion technology uses tensor calculus for relating the Large Language Models (LLMs) of GPT to more traditional computational methods of AI (logic, conceptual graphs, computational linguistics, neural networks, and statistics). With these methods, the Permion software can use GPT for its ability to translate languages (natural and artificial) and to retrieve large volumes of useful data from the WWW.
By using conceptual graphs and formal deduction, Permion can detect and avoid the errors caused by the loose or nonexistent reasoning methods of GPT while taking advantage of its useful features for processing languages.
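
As a generic illustration only, not Permion's actual code, the
propose-and-verify pattern described above might be sketched as
follows; every name below is a hypothetical placeholder:

    # Generic sketch of a hybrid pipeline: an LLM proposes candidate
    # claims, and a symbolic layer keeps only the claims it can verify
    # against a trusted store. All names are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Claim:
        subject: str
        predicate: str
        obj: str

    # Hypothetical trusted store of verified facts, standing in for a
    # conceptual-graph base or deductive reasoner.
    KNOWLEDGE_BASE = {
        Claim("Avianca", "is_a", "airline"),
    }

    def llm_propose(prompt):
        """Placeholder for an LLM call; returns claims from its reply."""
        return [Claim("Avianca", "is_a", "airline"),
                Claim("Smith v. Acme Air", "is_a", "real case")]  # fabricated

    def answer(prompt):
        """Keep only the claims that can be grounded in the trusted store."""
        return [c for c in llm_propose(prompt) if c in KNOWLEDGE_BASE]

    print(answer("Find precedents for an airline injury case"))
    # Only the grounded claim survives; the fabricated citation is dropped.

The filtering step is where formal deduction earns its keep: the LLM
supplies breadth, and the symbolic layer supplies warrant.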
For the slides by John Sowa, see EvaluatingGPT--JohnSowa_20230531.pdf (ontologforum.s3.amazonaws.com)
For the Audio-Video recording of both talks, see https://ontologforum.s3.amazonaws.com/General/EvaluatingGPT--JohnSowa-ArunM…
John
Cf: Inquiry Into Inquiry • On Initiative 3
https://inquiryintoinquiry.com/2023/05/01/inquiry-into-inquiry-on-initiativ…
Re: Scott Aaronson • Should GPT Exist?
https://scottaaronson.blog/?p=7042
My Comment —
https://scottaaronson.blog/?p=7042#comment-1946961
The more fundamental problem I see here is the failure to grasp the
nature of the task at hand, and this I attribute not to a program
but to its developers.
Journalism, Research, and Scholarship are not matters of
generating probable responses to prompts or other stimuli.
What matters is producing evidentiary and logical supports
for statements. That is the task requirement the developers
of these LLM‑Bots are failing to grasp.
There is nothing new about that failure. There is a long history of attempts to
account for intelligence and indeed the workings of scientific inquiry based on
the principles of associationism, behaviorism, connectionism, and theories of
that order. But the relationship of empirical evidence, logical inference,
and scientific information is more complex and intricate than is dreamt of
in those reductive philosophies.
Note. The above comment was originally posted on March 1st
but appears to have been accidentally deleted.
Regards,
Jon