Cf: Inquiry Into Inquiry • Discussion 6
Re: Mathstodon • Nicole Rust
Computations or Processes —
How do you think about the building blocks of the brain?
I keep coming back to this thread about levels, along with others
on the related issue of paradigms, as those have long been major
questions for me. I am trying to clarify my current understanding
for a blog post. It will start out a bit like this —
A certain amount of “level” language is natural in the sciences,
but “level” metaphors come with hidden assumptions about higher and
lower places in hierarchies which don't always fit the case at hand.
In complex cases what look at first like parallel strata may in time
be better comprehended as intersecting domains or mutually recursive
and entangled orders of being. When that happens we can guard against
misleading imagery by speaking of domains or realms instead of levels.
To be continued …
Cf: Systems of Interpretation • 1
Questions have arisen about the different styles of diagrams
and figures used to represent triadic sign relations in Peircean
semiotics. What do they mean? Which style is best? Among the
most popular pictures some use geometric triangles while others
use the three‑pronged graphs Peirce used in his logical graphs
to represent triadic relations.
Diagrams and figures, like any signs, can serve to communicate
the intended interpretants and thus to coordinate the conduct of
interpreters toward the intended objects — but only in communities of
interpretation where the conventions of interpretation are understood.
Conventions of interpretation are by comparison far more difficult to
communicate than the diagrams themselves.
That brings us to the first question we have to ask about the possibility
of communication in this area, namely, what conventions of interpretation
are needed to make sense of these diagrams, figures, and graphs?
The subject line sounds like the beginning of a joke. Unfortunately for them, it wasn't a joke. See the news item below. There will be more news about this case later today.
For anyone who may be interested in ChatGPT and related systems, you can check the slides and the video of a talk by my colleague Arun Majumdar and me on May 31. For the slides by John Sowa, see EvaluatingGPT--JohnSowa_20230531.pdf (ontologforum.s3.amazonaws.com)
For the Video recording of both talks and a long Q/A discussion, see https://ontologforum.s3.amazonaws.com/General/EvaluatingGPT--JohnSowa-ArunM…
New York lawyers blame ChatGPT for tricking them into citing ‘bogus legal research’
Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually invented by the artificial intelligence-powered chatbot.
Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client's case against the Colombian airline Avianca for an injury incurred on a 2019 flight. The chatbot, which has fascinated the world with its production of essay-like answers to prompts from users, suggested several cases involving aviation mishaps that Schwartz hadn't been able to find through the usual methods used at his law firm.
The problem was, several of those cases weren't real or involved airlines that didn’t exist. Schwartz told Judge P. Kevin Castel he was “operating under a misconception ... that this website was obtaining these cases from some source I did not have access to.”
He said he “failed miserably” at doing follow-up research to ensure the citations were correct. “I did not comprehend that ChatGPT could fabricate cases,” Schwartz said.
The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses. “Can we agree that's legal gibberish?” Castel asked.
The judge said he'll rule on sanctions at a later date.
On May 31st, I presented a talk on "Evaluating and reasoning with and about GPT", and Arun Majumdar presented a demo that shows how the technology developed by Permion.ai LLC supports those methods. These methods also show that ideas developed by Charles Sanders Peirce, especially in the last decade of his life, are still at the forefront of some of the latest developments in AI.
The Permion technology uses tensor calculus to relate the Large Language Models (LLMs) of GPT to more traditional computational methods of AI (logic, conceptual graphs, computational linguistics, neural networks, and statistics). With these methods, the Permion software can use GPT for its ability to translate languages (natural and artificial) and to retrieve large volumes of useful data from the WWW.
By using conceptual graphs and formal deduction, Permion can detect and avoid the errors caused by the loose or nonexistent reasoning methods of GPT while taking advantage of its useful features for processing languages.
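A minimal sketch of the verification idea in Python (this is not Permion's implementation, and every name, case, and data structure below is hypothetical): treat LLM output as untrusted data and check each cited case against a trusted index before relying on it, which would have caught the fabricated citations in the Avianca filing.

```python
# Hypothetical sketch only: the trusted index and case names are invented
# for illustration; a real system would query an authoritative legal database.

# Stand-in for a verified citation database.
TRUSTED_INDEX = {
    "Smith v. Avianca, 2020": {"court": "S.D.N.Y.", "topic": "aviation"},
}

def verify_citations(llm_citations):
    """Partition LLM-suggested citations into verified and unverified lists,
    accepting a citation only if it appears in the trusted index."""
    verified, unverified = [], []
    for cite in llm_citations:
        (verified if cite in TRUSTED_INDEX else unverified).append(cite)
    return verified, unverified

verified, unverified = verify_citations(
    ["Smith v. Avianca, 2020", "Varghese v. China Southern Airlines, 2019"]
)
print(verified)    # ['Smith v. Avianca, 2020']
print(unverified)  # ['Varghese v. China Southern Airlines, 2019']
```

The point of the design is simply that generation and verification are separate steps: the LLM proposes, and an independent, formally checkable source disposes.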