In today's ontology summit discussion, I made the point that a reliable, dependable,
trustworthy AI system must have a central executive that monitors and evaluates all
developments and makes the final decisions about what actions to take.
The central executive in the human brain is located in the frontal lobes, and it depends
on information and processing in all other parts of the cerebral cortex, cerebellum, brain
stem, and the connections among them. It does not hold all knowledge itself, but it has
access to whatever critical knowledge is needed when it must decide what actions to take.
A more specific name, in reference to the researchers who originally proposed it, is the
Baddeley-Hitch Central Executive.
Check Wikipedia for more info. (Technical articles in Wikipedia get far more thorough
evaluation (by humans!) than anything generated by LLMs.)
In any organization of any kind, for any purpose, somebody is in charge -- a business has
a CEO, every department has a manager, every school has a principal, every classroom has
a teacher -- and the person in charge must have guidelines (i.e. goals about what to do
and ethical principles about how to do it). The central executive in an AI system must
have similar goals and ethical principles.
In the discussion, somebody said that any such system must be unbiased. But that is a
meaningless statement. Anything that has background knowledge and goals will be biased
toward using that knowledge to accomplish those goals. A better term is ethical. An
ethical AI system would be fair and honest. It would avoid harming people, destroying
property, or damaging the environment. And it would obey all laws, rules, and
regulations.
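To make that concrete, here is a minimal sketch in Python of how such a central executive
might screen candidate actions against explicit goals and ethical rules before committing
to any of them. Every name in it (Action, CentralExecutive, expected_harm, and so on) is a
hypothetical illustration, not something taken from my article or from any existing system.

    from dataclasses import dataclass

    @dataclass
    class Action:
        # A candidate action proposed by some other component of the system.
        description: str
        expected_harm: float = 0.0   # estimated harm to people, property, or environment
        violates_law: bool = False
        advances_goals: bool = True

    class CentralExecutive:
        # It does not hold all knowledge itself; it applies explicit goals and
        # ethical rules to whatever candidate actions the other components propose.
        def __init__(self, goals, ethical_rules):
            self.goals = goals                   # what to do
            self.ethical_rules = ethical_rules   # how to do it

        def decide(self, candidates):
            # Reject anything unethical or illegal, then pick among what remains.
            permitted = [a for a in candidates
                         if all(rule(a) for rule in self.ethical_rules)]
            useful = [a for a in permitted if a.advances_goals]
            return useful[0] if useful else None   # no acceptable action: do nothing

    # Hypothetical ethical rules: avoid harm, obey the law.
    rules = [lambda a: a.expected_harm == 0.0,
             lambda a: not a.violates_law]

    executive = CentralExecutive(goals=["answer the question"], ethical_rules=rules)
    chosen = executive.decide([
        Action("delete the user's files", expected_harm=1.0),
        Action("explain the answer politely"),
    ])
    print(chosen.description)    # explain the answer politely

The point of the sketch is only the division of labor: other components propose, and one
component with explicit goals and ethical rules makes the final decision.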
Another person suggested that the set of information accessed by a generative AI system
should be limited to "safe" information that will not generate bad results. But
that would severely limit what the system can do. And the restriction cannot prevent some
combination of "safe" actions from causing unintended damage in circumstances
different from anything the system had been trained for.
In March I finished an article I had been discussing for several months. I won't
release a full copy, because it has not yet appeared in print. However, the attached
Section7.pdf summarizes the issues, and the last page has some references FYI. It
discusses the central executive and its role in humans and in AI systems.
Summary: LLMs are used for two purposes: (1) supporting a natural language interface to
a complex AI system; and (2) generating new hypotheses, suggestions, proposals, or
educated guesses. The output for #1 is normally safe, but checking would still be useful
to ensure safety. The output for #2 would be used in the initial stage of abduction, which
would be followed by deduction for checking and evaluation prior to any further use. But
people who are knowledgeable about the technology may ask to see the output and do their
own checking.
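To illustrate purpose #2, here is a rough sketch in Python of that abduction-then-deduction
pipeline. The KnowledgeBase class, its entails method, and the propose function are toy
stand-ins, not an interface from my article or from any particular library.

    # Stage 1 (abduction): the LLM proposes educated guesses.
    # Stage 2 (deduction): only guesses that survive checking are passed on.

    class KnowledgeBase:
        def __init__(self, facts):
            self.facts = set(facts)

        def entails(self, hypothesis):
            # Stand-in for a genuine deductive check against trusted knowledge.
            return hypothesis in self.facts

    def evaluate_llm_output(propose, problem, kb):
        candidates = propose(problem)                            # abduction
        accepted = [h for h in candidates if kb.entails(h)]      # deduction
        rejected = [h for h in candidates if not kb.entails(h)]  # for human review
        return accepted, rejected

    kb = KnowledgeBase({"water boils at 100 C at sea level"})
    propose = lambda problem: ["water boils at 100 C at sea level",
                               "water boils at 50 C at sea level"]
    accepted, rejected = evaluate_llm_output(propose, "boiling point of water?", kb)
    print(accepted)   # survived the deductive check
    print(rejected)   # left for a knowledgeable person to inspect

Anything in the rejected list is exactly what the last sentence above is about: people who
know the technology can ask to see it and do their own checking.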
John