The Bourbaki were a group of brilliant mathematicians who developed a totally unusable system of mathematics. The example below shows how hopelessly misguided they were. Sesame Street's method of teaching math is far and away superior to anything that the Bourbaki attempted to do. Sesame Street introduces the number 1 as the starting point of counting. That is also Peirce's method.
Furthermore, the Bourbaki banished all diagrams from their system, and thereby violated every one of Peirce's principles of diagrammatic reasoning. Sesame Street emphasizes diagrams and imagery. Mathematics without diagrams and imagery is blind.
The so-called "new math" disaster of the late 1960s was a hopelessly misguided attempt to indoctrinate innocent students in set theory as the universal foundation for everything. That was yet another violation of Peirce's methods.
Finally, there is no conflict whatever between deduction and discovery. As Peirce insisted, all discovery is based on diagrams (or images mapped to diagrams). Deduction is just an exploration of the content of some diagram or system of diagrams. There are, of course, many challenges in discovering all the provable implications. But once again, those implications are determined by elaboration and analysis of the starting diagrams.
There is much more to say, and it is closely related to my previous note about problems with AI. I'm currently writing an article that shows how Peirce's diagrammatic reasoning is far and away superior to the currently popular methods of Large Language Models. The LLMs do have some important features, but they are just a special case of one particular kind of diagram (tensor calculus), as the sketch below illustrates. The human brain (even a fruit fly's brain) can process many more kinds.
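As a minimal sketch of what "tensor calculus" means here (an illustration only, not an excerpt from the article), the core operation of a transformer-based LLM can be written in a few lines of NumPy. Every step is a tensor contraction or an elementwise map:

import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention for one head.
    # Q, K, V: arrays of shape (sequence_length, d).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # tensor contraction
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # another contraction

# Example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)                     # prints (4, 8)

Stacking such operations with learned weight tensors in between is essentially all that an LLM computes. That is the sense in which LLMs occupy just one corner of the space of diagrams that Peirce's methods can accommodate.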
There is, of course, much more to say about this issue, but it will take a bit more time to gather the references.
John
----------------------------------------
From: "Evgenii Rudnyi" <rudnyi(a)freenet.de>
Sent: 8/22/23 11:13 AM
Recently I have seen the paper below, which could be of interest to this
discussion, as it shows that working deductively even with the number 1
is not that easy.
Best wishes, Evgenii
Mathias, A. R. D. "A Term of Length 4 523 659 424 929." Synthese 133,
no. 1 (2002): 75-86.
"Bourbaki suggest that their definition of the number 1 runs to some
tens of thousands of symbols. We show that that is a considerable
under-estimate, the true number of symbols being 4 523 659 424 929, not
counting 1 179 618 517 981 disambiguatory links."
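To give a rough sense of why the term explodes (a sketch of the mechanism, following Mathias's account rather than quoting the paper): Bourbaki treat the existential quantifier as an abbreviation that expands via Hilbert's tau-operator, and each expansion substitutes a term containing a full copy of the formula:

\[ (\exists x)\,R \;\equiv\; (\tau_x(R) \mid x)\,R \]
\[ \mathrm{Card}(X) := \tau_Z(\mathrm{Eq}(X,Z)), \qquad 1 := \mathrm{Card}(\{\emptyset\}) \]

Unfolding the quantifiers hidden inside Eq (the existence of a one-to-one correspondence) repeats that copying over and over, which is what drives the symbol count into the trillions.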
----------------------------------------
***** 1st CALL FOR PAPERS *****
First international workshop on
*Ordinal Methods for Knowledge Representation and Capture (OrMeKR)*
in conjunction with
*The Twelfth International Conference on Knowledge Capture (K-CAP 2023)*
December 5th, 2023, Pensacola, Florida, USA
*Submission Deadline: October 15th, 2023*
1.1 Abstract and Scope:
───────────────────────
The concept of order (i.e., partially ordered sets) is predominant for perceiving
and organizing our physical and social environment, for inferring meaning and
explanation from observation, and for searching and rectifying decisions.
Compared to metric methods, however, the number of (purely) ordinal methods for
capturing knowledge from data is rather small, although in principle they may
allow for more comprehensible explanations. The reason for this could be the
limited availability of computing resources in the last century, which would
have been required for (purely) ordinal computations. Hence, typically
relational and especially ordinal data are first embedded in metric spaces for
learning. Therefore, in this workshop we want to collect and discuss ordinal
methods for capturing and representing knowledge, their role in inference and
explainability, and their possibilities for knowledge visualization and
communication. We want to reflect on these topics in a broad sense, i.e., as a
tool to arrange, compare and compute ontologies or concept hierarchies, as a
feature in learning and capturing knowledge, and as a measure to evaluate model
performance.
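For a concrete taste of a (purely) ordinal computation, the sketch below derives the covering relation of a finite partially ordered set, i.e., the edge set of the Hasse diagrams commonly used for knowledge visualization (an illustrative example with divisibility as the order, not part of the call itself):

from itertools import product

def covers(elements, leq):
    # Covering relation of the poset (elements, leq):
    # x is covered by y iff x < y and no z lies strictly between them.
    pairs = [(x, y) for x, y in product(elements, repeat=2)
             if leq(x, y) and x != y]
    return [(x, y) for x, y in pairs
            if not any(leq(x, z) and leq(z, y) and z != x and z != y
                       for z in elements)]

# Example: the divisors of 12 ordered by divisibility
divisors = [1, 2, 3, 4, 6, 12]
print(covers(divisors, lambda a, b: b % a == 0))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]

Note that the computation uses only comparisons, never distances; that is the sense of "purely ordinal" above.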
1.2 Topics of Interest
──────────────────────
• Ordinal Aspects for Knowledge Representation and Knowledge Bases
• Knowledge Visualization using Order Relations
• Ordinal Representation and Analysis of Ontologies
• Data Fidelity and Reliability of Ordinal Methods
• Theory and Application of Order Dimension and Related Notions
• Ordinal Knowledge Spaces and Ordinal Exploration
• Scaling and Processing Ordinal Information
• Metric Structures in Order Relations
• Algorithms for Querying Large Ordinal Data
• Knowledge Discovery in Metric-Ordinal Heterogeneous Representation
• Ordinal Pattern Structures and Motifs
• Methods for Representation Learning of Order Relations
• Drawing of Hierarchical Graphs and Knowledge Structures
• Non-Linear Ranking in Recommendation Applications
• Linear Ordered Knowledge and Learning
• Scheduling and Planning
• Applications of Ordinal Methods to Scientific Knowledge (e.g., from domains
such as Biology, Physics, Social Sciences, Digital Humanities, etc.)
• Methodologically Related Fields such as Directed Graphs, Formal
Concept Analysis, Conceptual Structures, Relational Data,
Recommendation, Lattice Theory, with a Clear Reference to Order
Relations and Knowledge
1.3 Important Dates (all dates are AoE)
───────────────────────────────────────
• Submission: October 15, 2023
• Author Notification: October 29, 2023
• Camera Ready: November 12, 2023
1.4 Submission Guidelines and Conditions
───────────────────────────────────────
OrMeKR will focus on contributions to the theory and application of
ordinal methods in the realm of knowledge representation and
capture. The workshop welcomes *report papers* (summaries of past work
concerning ordinal methods), *research papers* (novel results),
*position papers* (discussing issues concerning the usefulness of
ordinal methods in KR), and *challenge papers* (describing limitations
and open research questions).
• Submissions should have a minimum of 5 pages and shall not exceed 8
pages.
• Submissions must use the provided CEUR Template:
<https://www.kde.cs.uni-kassel.de/ormekr2023/ceur.zip>
• The workshop is not double-blind, hence authors should list their
names and affiliations on the submission.
• Accepted papers will be published in CEUR Workshop Proceedings
corresponding to K-CAP.
• Authors of accepted workshop papers will present their work in
plenary sessions during the workshop on December 5th.
• Submissions should be emailed to: *[ormekr2023(a)cs.uni-kassel.de]*
1.5 Organizing Committee
────────────────────────
• Tom Hanika
⁃ Institute for Computer Science, University of Hildesheim, Germany
⁃ Berlin School of Library and Information Science,
Humboldt-Universität zu Berlin, Germany
• Dominik Dürrschnabel
⁃ Knowledge & Data Engineering Group, University of Kassel, Germany
• Johannes Hirth
⁃ Knowledge & Data Engineering Group, University of Kassel, Germany
1.6 Program Committee
─────────────────────
• Agnès Braud, Université de Strasbourg, France
• Diana Christea, Babes-Bolyai University, Romania
• Pablo Cordero, University of Malaga, Spain
• Bernhard Ganter, TU Dresden, Germany
• Rokia Missaoui, University of Quebec in Outaouais, Canada
• Robert Jäschke, Humboldt-Universität zu Berlin, Germany
• Giacomo Kahn, Université Lumière Lyon 2, France
• Léonard Kwuida, Bern University of Applied Sciences, Switzerland
• Sebastian Rudolph, TU Dresden, Germany
• Gerd Stumme, University of Kassel, Germany
• Francisco J. Valverde-Albacete, Universidad Rey Juan Carlos, Spain
----------------------------------------
Cf: Inquiry Into Inquiry • Discussion 6
http://inquiryintoinquiry.com/2023/04/30/inquiry-into-inquiry-discussion-6/
Re: Mathstodon • Nicole Rust
https://mathstodon.xyz/@NicoleCRust@neuromatch.social/110197230713039748
<QUOTE NR:>
Computations or Processes —
How do you think about the building blocks of the brain?
</QUOTE>
I keep coming back to this thread about levels, along with others
on the related issue of paradigms, as those have long been major
questions for me. I am trying to clarify my current understanding
for a blog post. It will start out a bit like this —
A certain amount of “level” language is natural in the sciences,
but “level” metaphors come with hidden assumptions about higher and
lower places in hierarchies, which don't always fit the case at hand.
In complex cases what look at first like parallel strata may in time
be better comprehended as intersecting domains or mutually recursive
and entangled orders of being. When that happens we can guard against
misleading imagery by speaking of domains or realms instead of levels.
To be continued …
Regards,
Jon
----------------------------------------
A recent discussion about consciousness in Ontolog Forum showed that Peirce's writings are still important for understanding and directing research on the latest issues in artificial intelligence. The note below is my response to a discussion about AI research on artificial consciousness. The quotation from 1906 (EP 2:544) is still an excellent guide for ongoing research.
John
----------------------------------------
Alex and Ricardo,
Your notes remind me of the importance of vagueness and the limitations of precision in any field -- especially science, engineering, and formal ontology. Rather than sessions about consciousness, I recommend a study of vagueness. That is why I changed the subject line. For a summary of the issues, see below for an excerpt from an article I'm writing.
Alex> So we have not only plenty of theories [of consciousness], but R&D implementations. Here a situation is possible that they need no formalization because they use math directly. The formalization is still possible but when the main knowledge is in math, the math level is responsible for accuracy.
Yes. Plenty of theories and some implementations, but no consensus on the theories, and nothing useful for any theoretical or practical applications of ontology.
Furthermore, every formal theory is stated in some version of mathematics. Every version of logic -- from Aristotle to today -- is considered a branch of mathematics. Formalization is always an application of mathematics. The notation used for the math is irrelevant. Aristotle's syllogisms are the first version of formal logic, and he invented the first controlled natural language for stating them.
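For example, Aristotle stated the syllogism Barbara in a controlled fragment of Greek: every M is P; every S is M; therefore every S is P. In modern notation (a standard rendering, not Aristotle's own symbolism):

\[ \frac{\forall x\,(M(x) \rightarrow P(x)) \qquad \forall x\,(S(x) \rightarrow M(x))}{\forall x\,(S(x) \rightarrow P(x))} \]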
Ricardo> I suggest this link: https://en.wikipedia.org/wiki/Artificial_consciousness It is a bit old and biased, but gives a gist of what is being done in the artificial systems side.
Thanks for recommending that article. It is an excellent overview with well over a hundred references to theory and implementations from every point of view, including Google's work up to 2022.
But I would not call it "old and biased". Although it does not include anything about the 2023 work on GPT and related systems, it cites Google's work on their foundations. GPT systems, by themselves, do not do anything related to consciousness.
Ricardo, quoting from a note by JFS> The sentence "Any time wasted on discussing consciousness would have no practical value for any applications of ontology." sounds a bit disrespectful to the people who wrote the 100,500 books about consciousness that Anatoly mentioned.
Please read what I wrote above. I show a high respect for the ongoing research and publications. But I make the point that none of that work is relevant to the theory and applications of ontology.
Following is an excerpt from an article I'm writing. Note the term 'mental model'. I propose the following definition of consciousness: the ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication. That definition is sufficiently vague to include normal uses of the word 'consciousness'. It can also serve as a guideline for more detailed research and applications. It could even be used to define artificial consciousness if and when any AI systems could "generate, modify, and use mental models as the basis for perception, thought, action, and communication."
John
______________________________________
Excerpt from a forthcoming article by J. F. Sowa:
Natural languages can be as precise as a formal language or as vague as necessary for planning and negotiating. The precision of a formal language is determined by its form or syntax together with the meaning of its components. But natural languages are informal because the precise meaning of a word or sentence depends on the situation in which it’s spoken, the background knowledge of the speaker, and the speaker’s assumptions about the background knowledge of the listeners. Since no one has perfect knowledge of anyone else’s background, communication is an error-prone process that requires frequent questions and explanations. Precision and clarity are the goal, not the starting point. Whitehead (1937) aptly summarized this point:
Human knowledge is a process of approximation. In the focus of experience, there is comparative clarity. But the discrimination of this clarity leads into the penumbral background. There are always questions left over. The problem is to discriminate exactly what we know vaguely.

A novel theory of semantics, influenced by Wittgenstein’s language games and related developments in cognitive science, is the dynamic construal of meaning (DCM) proposed by Cruse (2002). The basic assumption of DCM is that the most stable aspect of a word is its spoken or written sign; its meaning is unstable and dynamically evolving as it is used in different contexts or language games. Cruse coined the term microsense for each subtle variation in meaning. This is an independent rediscovery of Peirce’s view: sign types are stable, but each interpretation of a sign token depends on its context in a pattern of other signs, the physical environment, and the background knowledge of the interpreter.
For the purpose of this inquiry a Sign may be defined as a Medium for the communication of a Form. It is not logically necessary that anything possessing consciousness, that is, feeling of the peculiar common quality of all our feeling, should be concerned. But it is necessary that there should be two, if not three, quasi-minds, meaning things capable of varied determination as to forms of the kind communicated. (R793, 1906, EP 2:544)

These observations imply that cognition involves an open-ended variety of interacting processes. Frege’s rejection of psychologism and “mental pictures” reinforced the behaviorism of the early 20th century. But the latest work in neuroscience uses “folk psychology” and introspection to interpret data from brain scans (Dehaene 2014). The neuroscientist Antonio Damasio (2010) summarized the issues:
The distinctive feature of brains such as the one we own is their uncanny ability to create maps... But when brains make maps, they are also creating images, the main currency of our minds. Ultimately consciousness allows us to experience maps as images, to manipulate those images, and to apply reasoning to them.

The maps and images form mental models of the real world or of the imaginary worlds in our hopes, fears, plans, and desires. They provide a “model theoretic” semantics for language that uses perception and action for testing models against reality. Like Tarski’s models, they define the criteria for truth, but they are flexible, dynamic, and situated in the daily drama of life.
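A minimal computational sketch of that idea (an illustrative toy, assuming facts represented as tuples, not a theory of mind): a mental model is a set of ground facts, a statement is true when it holds in the model, and perception revises the model when reality disagrees.

# A mental model as a set of ground facts
model = {("on", "cup", "table"), ("red", "cup")}

def true_in(model, fact):
    # Tarski-style truth in miniature: a statement is true
    # iff the model contains the corresponding fact.
    return fact in model

def perceive(model, observed):
    # Perception tests the model against reality and revises it;
    # wholesale replacement is the simplest possible policy.
    return set(observed)

print(true_in(model, ("red", "cup")))           # True
model = perceive(model, [("on", "cup", "floor"), ("red", "cup")])
print(true_in(model, ("on", "cup", "table")))   # False after revision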