I have made some revisions to my slides for the Ontology Summit talk on March 6. Most of the changes are minor clarifications. The most important additions concern a quotation by Zellig Harris.

Zellig Harris was the inventor of transformational grammar and the thesis adviser of Noam Chomsky, who went much further in developing its syntactic features. But Harris put more emphasis on semantics, which many linguists believe is much more important than syntax.

The single most important quotation by Harris is one that William Frank used to include at the end of all his notes to Ontolog Forum. I mention that quotation on slide 40 and discuss its implications on slide 41:

"We understand what other people say through empathy — imagining ourselves to be in the situation they were in, including imaging wanting to say what they wanted to say."  Zellig Harris 

In January 2023, I sent a note to Frank to ask for the full citation for that quotation. I got a response from his son, who said that William Frank had died in November 2022. That's why he stopped sending notes to Ontolog Forum. His notes were usually insightful, and it's sad that we no longer have his comments.

Other important issues related to AGI include the Central Executive, which is introduced on slide 35 in a discussion of neuroscience. Slide 36 shows how a simulation of a Central Executive could be added to an intelligent computer system.

That addition won't immediately make the system more intelligent, but it can provide a place where issues of relevance, morality, and ethics can be addressed. That is important for designing systems that can evaluate and modify proposed actions that might cause dangerous or irresponsible behavior.
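
To make the placement concrete, here is a minimal sketch in Python of a Central Executive that reviews every proposed action before anything is executed. All the names, types, and thresholds are hypothetical illustrations of mine; the slides describe the architecture, not an implementation.

    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import Callable

    class Verdict(Enum):
        APPROVE = auto()    # safe to execute
        MODIFY = auto()     # revise the plan before acting
        ESCALATE = auto()   # refer to a human executive

    @dataclass
    class ProposedAction:
        description: str
        estimated_risk: float  # 0.0 (benign) to 1.0 (dangerous); a stand-in for real evaluation

    @dataclass
    class CentralExecutive:
        # Pluggable policy checks: each one examines an action and returns a verdict.
        checks: list[Callable[[ProposedAction], Verdict]] = field(default_factory=list)

        def review(self, action: ProposedAction) -> Verdict:
            # The most severe verdict from any check wins.
            verdicts = [check(action) for check in self.checks]
            if Verdict.ESCALATE in verdicts:
                return Verdict.ESCALATE
            if Verdict.MODIFY in verdicts:
                return Verdict.MODIFY
            return Verdict.APPROVE

    def risk_check(action: ProposedAction) -> Verdict:
        # Hypothetical thresholds; a real system would need much richer evaluation.
        if action.estimated_risk > 0.8:
            return Verdict.ESCALATE
        if action.estimated_risk > 0.4:
            return Verdict.MODIFY
        return Verdict.APPROVE

    executive = CentralExecutive(checks=[risk_check])
    print(executive.review(ProposedAction("send routine report", 0.1)))    # Verdict.APPROVE
    print(executive.review(ProposedAction("delete shared records", 0.9)))  # Verdict.ESCALATE

The point of the design is the placement: every proposed action passes through the review step, so relevance and ethics checks have a single, well-defined home.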

Slide 39 is important for addressing "wicked problems" that involve "complex interdependences between the systems involved, and incomplete, inconsistent information about the problem context. Wicked problems are dynamic and span many domains with complex legal, ethical, and societal aspects."

New projects of any kind open up unexplored territory, and old solutions that LLMs can dig up are almost never adequate to address such problems, much less solve them. This is another area where new technology (or variations of older symbolic technology) is necessary. And the Central Executive is the kind of system where such issues should be addressed -- usually in discussions with human executives and advisers.

Some people working with LLMs mention these problems, but I have not seen any discussions of methods that could address them. I believe that a Central Executive is an important step toward a solution. And I also believe that the Central Executive should have some kind of empathy with humans -- artificial empathy is better than no empathy at all.

Important addition: Whenever the Central Executive encounters a wicked problem or a proposal for a dangerous or unethical action, it should notify a human executive, who could take action or call a committee meeting to consider whatever major actions may be required. A minimal sketch of that escalation path appears below.
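
Again as a sketch with hypothetical names of my own, the escalation path could be as simple as the following; a real deployment would replace the logging call with an actual notification channel such as email or a ticket system.

    import logging
    from typing import Callable

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    def notify_human_executive(description: str, reason: str) -> None:
        # Stand-in for a real notification channel.  The human executive
        # can then act directly or call a committee meeting.
        logging.warning("Human review required for %r: %s", description, reason)

    def dispatch(description: str, wicked: bool, risky: bool,
                 execute: Callable[[], None]) -> None:
        # The Central Executive never lets a flagged action run unattended.
        if wicked:
            notify_human_executive(description, "wicked problem")
        elif risky:
            notify_human_executive(description, "dangerous or unethical action")
        else:
            execute()

    dispatch("publish routine summary", wicked=False, risky=False,
             execute=lambda: print("executed: publish routine summary"))
    dispatch("redesign regional water policy", wicked=True, risky=False,
             execute=lambda: print("this should never run"))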

John