Alex and Gary,

The article cited in your notes is based on logical methods that go far beyond anything currently being done with LLMs and their applications to generative AI.  Such developments are important for evaluating and testing the accuracy of the output of ChatGPT and related systems.

I won't say anything in detail about the cited article.  But I noticed that some of the researchers mentioned in the article participated in the IKRIS project, which was developing metalanguage extensions to Common Logic.   For a summary of that project with multiple references for further study, see https://jfsowa.com/ikl
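
To give a flavor of what those metalanguage extensions allow: IKL, the language developed in IKRIS, adds to Common Logic a proposition-forming operator, usually written (that ...), so a sentence can be named and talked about as a term.  A toy illustration of my own in LaTeX notation (not taken from the IKL documents):

  \mathrm{Believes}(\mathrm{John},\ \mathbf{that}(\forall x\,(\mathrm{Cat}(x) \rightarrow \mathrm{Mammal}(x))))

The that(...) term denotes the proposition expressed by the embedded sentence, which lets one state beliefs, provenance, or trust about it in the same language.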

The documents cited there include a list of the participants and the topics they were working on.  The issues they discuss are vitally important for testing and evaluating the results generated by LLMs.  Without such evaluation, the output generated by LLMs cannot be trusted for any critical applications.

John

From: "Alex Shkotin" <alex.shkotin@gmail.com>
Subject: Re: [ontolog-forum] A formalized approach to consider agents' norms and values

Gary, thank you!

I have sent it to my favorite formal philosophers: https://www.facebook.com/share/SaGkSXTmVF2HcJp9/

Alex

Tue, 16 Jul 2024 at 18:12, Gary Berg-Cross <gbergcross@gmail.com>:

Ken Forbus posted this elsewhere but it should be of interest to this community:

"How can an AI system build up and maintain an accurate mental model of people's norms, in order to avoid social friction? This is difficult because norms not only vary between groups but also evolve over time. Taylor Olson's approach is to develop a formal defeasible deontic calculus, building on his prior work on representing social and moral norms, which enables resolving norm conflicts in reasonable ways. This paper appeared at the Advances in Cognitive Systems conference in Palermo last month."
https://arxiv.org/abs/2407.04869
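
For those who have not seen defeasible deontic logic: in standard deontic logic, conflicting obligations O(p) and O(¬p) are inconsistent under the usual D axiom, whereas a defeasible calculus treats norms as prioritized default rules.  A minimal sketch in LaTeX notation (my own toy example, not Olson's actual rules):

  r_1:\ \mathrm{guest}(x) \Rightarrow O(\mathrm{removeShoes}(x))
  r_2:\ \mathrm{wearsCast}(x) \Rightarrow O(\neg\,\mathrm{removeShoes}(x)), \qquad r_2 \succ r_1

When both antecedents hold for the same individual, the higher-priority rule r_2 defeats r_1, so the calculus concludes only the second obligation instead of a contradiction.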

Gary Berg-Cross