Hi all.
John, thank you for interrelating the multiple semantics and ontology projects in
https://jfsowa.com/ikl, a handy resource!
Regarding the article forwarded by Alex, Stamper's Semiotic Ladder would be pertinent:
http://assets.cs.ncl.ac.uk/seminars/101.pdf (and multiple subsequent materials on the
subject:
https://scholar.google.com/scholar?q=stampers+semiotic+ladder)
I’ve referred to it in my past work, e.g.:
* A Transaction-oriented architecture for enterprise systems, Sheffield Hallam
University Research Archive: https://shura.shu.ac.uk/7988/
* Semiotic models of trust and usability for agent-managed Grid services, Sheffield
Hallam University Research Archive: https://shura.shu.ac.uk/1173/
I believe that the late Ronald Stamper's work is underrepresented, and it would be
interesting to hear your thoughts.
Thanks!
Simon
Dr Simon Polovina
Department of Computing, Sheffield Hallam University, UK
Ontolog Forum
Profile: https://ontologforum.com/index.php/SimonPolovina
From: John F Sowa <sowa(a)bestweb.net>
Sent: Wednesday, July 17, 2024 10:47 PM
To: ontolog-forum(a)googlegroups.com; CG <cg(a)lists.iccs-conference.org>
Subject: [CG] Going beyond LLMs (was: A formalized approach to consider agents' norms
and values)
Alex and Gary,
The article cited in your notes is based on logical methods that go far beyond anything
being done with LLMs and their applications to generative AI. Such developments are
important for evaluating and testing the accuracy of the output of ChatGPT and related
systems.
I won't say anything in detail about the cited article, but I noticed that some of the
researchers mentioned in it participated in the IKRIS project, which was developing
metalanguage extensions to Common Logic. For a summary of that project, with multiple
references for further study, see
https://jfsowa.com/ikl.
The documents cited there include a list of the participants and the topics they were
working on. The issues they discuss are vitally important for testing and evaluating the
results generated by LLMs. Without such evaluation, the output generated by LLMs cannot
be trusted for any critical applications.
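To give a concrete flavour of what a metalanguage extension provides: IKL adds a 'that'
operator to Common Logic, which turns a sentence into a term naming the proposition it
expresses, so that other sentences can say who asserted or believes it (an IKL-style
sentence might read (believes Jake (that (married Jack Jill)))). Below is a minimal
sketch of that idea in Python; the Prop class and that() helper are illustrative
inventions for this note, not part of IKL or any IKRIS deliverable.

    # Illustrative sketch: reify sentences as first-class objects, in the
    # spirit of IKL's (that ...) operator, which lets a sentence be
    # mentioned as a term as well as asserted.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Prop:
        """A reified proposition: a predicate applied to arguments."""
        predicate: str
        args: tuple

    def that(predicate, *args):
        # Analogue of IKL's (that <sentence>): returns a name for the
        # proposition rather than asserting it.
        return Prop(predicate, args)

    # Object-level fact, asserted outright:
    facts = {that("married", "Jack", "Jill")}

    # Metalevel fact about the proposition itself, not about Jack or Jill:
    beliefs = {("Jake", that("married", "Jack", "Jill"))}

In IKL this is done in the logic itself, of course; the point is only that propositions
become things one can quantify over and attach agents, times, and provenance to.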
John
________________________________
From: "Alex Shkotin"
<alex.shkotin@gmail.com<mailto:alex.shkotin@gmail.com>>
Subject: Re: [ontolog-forum] A formalized approach to consider agents' norms and
values
Gary, thank you!
I have sent it to my favorite formal philosophers:
https://www.facebook.com/share/SaGkSXTmVF2HcJp9/
Alex
On Tue, 16 Jul 2024 at 18:12, Gary Berg-Cross <gbergcross@gmail.com> wrote:
Ken Forbus posted this elsewhere, but it should be of interest to this community:
"How can an AI system build up and maintain an accurate mental model of people's
norms, in order to avoid social friction? This is difficult because norms not only vary
between groups but also evolve over time. Taylor Olson's approach is to develop a
formal defeasible deontic calculus, building on his prior work on representing social and
moral norms, which enables resolving norm conflicts in reasonable ways. This paper
appeared at the Advances in Cognitive Systems conference in Palermo last month."
https://arxiv.org/abs/2407.04869
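For intuition about what 'defeasible' buys here, a toy sketch follows; it is not Olson's
calculus (the paper defines the actual formalism), just the general pattern: norms are
obligations or prohibitions attached to contexts, and when applicable norms conflict,
the more specific one wins. All names below are illustrative.

    # Toy defeasible norm resolution: the most specific applicable norm
    # defeats more general ones. Illustrative only; see the paper above
    # for the real defeasible deontic calculus.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Norm:
        action: str         # e.g. "speak"
        deontic: str        # "obligatory" or "forbidden"
        context: frozenset  # conditions under which the norm applies

    def verdict(action, situation, norms):
        applicable = [n for n in norms
                      if n.action == action and n.context <= situation]
        if not applicable:
            return "permitted"   # default when no norm addresses the act
        # Defeasibility via specificity: more conditions = more specific.
        return max(applicable, key=lambda n: len(n.context)).deontic

    norms = [
        Norm("speak", "obligatory", frozenset({"meeting"})),
        Norm("speak", "forbidden",
             frozenset({"meeting", "moment_of_silence"})),
    ]
    print(verdict("speak", frozenset({"meeting"}), norms))
    # obligatory
    print(verdict("speak", frozenset({"meeting", "moment_of_silence"}), norms))
    # forbidden: the specific prohibition overrides the general duty

The specificity heuristic is the crudest possible resolution strategy; the calculus in
the paper handles richer conflict patterns and norms that change over time, but the
shape of the problem is the same.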
Gary Berg-Cross