A recent discussion about consciousness in the Ontolog Forum showed that Peirce's writings are still important for understanding and directing research on the latest issues in artificial intelligence.  The note below is my response to a discussion about AI research on artificial consciousness.  The quotation from 1906 (EP 2:544) remains an excellent guide for ongoing research.

John

Alex and Ricardo,

Your notes remind me of the importance of vagueness and the limitations of precision in any field -- especially science, engineering, and formal ontology.  Rather than sessions about consciousness, I recommend a study of vagueness.  That is why I changed the subject line.  For a summary of the issues, see the excerpt below from an article I'm writing.

Alex> So we have not only plenty of theories [of consciousness], but R&D implementations.  It is possible that they need no formalization because they use math directly.  Formalization is still possible, but when the main knowledge is in the math, the math level is responsible for accuracy.

Yes.  Plenty of theories and some implementations, but no consensus on the theories, and nothing useful for any theoretical or practical applications of ontology.

Furthermore, every formal theory is stated in some version of mathematics.  Every version of logic -- from Aristotle to today -- is considered a branch of mathematics.  Formalization is always an application of mathematics.  The notation used for the math is irrelevant.  Aristotle's syllogisms are the first version of formal logic, and he invented the first controlled natural language for stating them.
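
For example, the syllogistic form that later logicians named Barbara can be stated in Aristotle's controlled natural language or, equivalently, in modern predicate-logic notation.  The notation differs, but the logic is the same:

    Every M is P.  Every S is M.  Therefore, every S is P.
    ∀x(Mx → Px), ∀x(Sx → Mx) ⊢ ∀x(Sx → Px).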

Ricardo> I suggest this link: https://en.wikipedia.org/wiki/Artificial_consciousness   It is a bit old and biased, but gives a gist of what is being done in the artificial systems side.

Thanks for recommending that article.  It is an excellent overview with well over a hundred references to theory and implementations from every point of view, including Google's work up to 2022. 

But I would not call it "old and biased".  Although it does not include anything about the 2023 work on GPT and related systems, it cites Google's work on their foundations.  GPT systems, by themselves, do not do anything related to consciousness.

Ricardo, quoting from a note by JFS> The sentence "Any time wasted on discussing consciousness would have no practical value for any applications of ontology." sounds a bit disrespectful to the people who wrote the 100,500 books about consciousness that Anatoly mentioned.

Please read what I wrote above.  I have high respect for the ongoing research and publications.  But my point is that none of that work is relevant to the theory and applications of ontology.

Following is an excerpt from an article I'm writing.  Note the term 'mental model'.  I propose the following definition of consciousness:  the ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication.  That definition is sufficiently vague to include normal uses of the word 'consciousness'.  It can also serve as a guideline for more detailed research and applications.  It could even be used to define artificial consciousness if and when any AI systems could "generate, modify, and use mental models as the basis for perception, thought, action, and communication."

John
______________________________________

Excerpt from a forthcoming article by J. F. Sowa:

Natural languages can be as precise as a formal language or as vague as necessary for planning and negotiating.  The precision of a formal language is determined by its form or syntax together with the meaning of its components.  But natural languages are informal because the precise meaning of a word or sentence depends on the situation in which it’s spoken, the background knowledge of the speaker, and the speaker’s assumptions about the background knowledge of the listeners. Since no one has perfect knowledge of anyone else’s background, communication is an error-prone process that requires frequent questions and explanations.  Precision and clarity are the goal, not the starting point.  Whitehead (1937) aptly summarized this point:
Human knowledge is a process of approximation.  In the focus of experience, there is comparative clarity.  But the discrimination of this clarity leads into the penumbral background.  There are always questions left over.  The problem is to discriminate exactly what we know vaguely.
A novel theory of semantics, influenced by Wittgenstein’s language games and related developments in cognitive science, is the dynamic construal of meaning (DCM) proposed by Cruse (2002). The basic assumption of DCM is that the most stable aspect of a word is its spoken or written sign; its meaning is unstable and dynamically evolving as it is used in different contexts or language games. Cruse coined the term microsense for each subtle variation in meaning. This is an independent rediscovery of Peirce’s view: sign types are stable, but each interpretation of a sign token depends on its context in a pattern of other signs, the physical environment, and the background knowledge of the interpreter. As Peirce wrote:
For the purpose of this inquiry a Sign may be defined as a Medium for the communication of a Form.  It is not logically necessary that anything possessing consciousness, that is, feeling of the peculiar common quality of all our feeling, should be concerned.  But it is necessary that there should be two, if not three, quasi-minds, meaning things capable of varied determination as to forms of the kind communicated.    (R793, 1906, EP 2:544)
These observations imply that cognition involves an open-ended variety of interacting processes. Frege’s rejection of psychologism and “mental pictures” reinforced the behaviorism of the early 20th century. But the latest work in neuroscience uses “folk psychology” and introspection to interpret data from brain scans (Dehaene 2014). The neuroscientist Antonio Damasio (2010) summarized the issues:
The distinctive feature of brains such as the one we own is their uncanny ability to create maps...  But when brains make maps, they are also creating images, the main currency of our minds.  Ultimately consciousness allows us to experience maps as images, to manipulate those images, and to apply reasoning to them.
The maps and images form mental models of the real world or of the imaginary worlds in our hopes, fears, plans, and desires.  They provide a “model theoretic” semantics for language that uses perception and action for testing models against reality.  Like Tarski’s models, they define the criteria for truth, but they are flexible, dynamic, and situated in the daily drama of life.