Peirce's 1885 “Algebra of Logic” • Selection 1.1
• https://inquiryintoinquiry.com/2024/03/24/peirces-1885-algebra-of-logic-sel…
All,
I'm laying down a few source materials
in preparation for a later discussion.
Selection from C.S. Peirce, “On the Algebra of Logic :
A Contribution to the Philosophy of Notation” (1885)
❝§1. Three Kinds Of Signs❞
❝Any character or proposition either concerns one subject,
two subjects, or a plurality of subjects. For example, one
particle has mass, two particles attract one another, a particle
revolves about the line joining two others. A fact concerning
two subjects is a dual character or relation; but a relation
which is a mere combination of two independent facts concerning
the two subjects may be called “degenerate”, just as two lines
are called a degenerate conic. In like manner a plural character
or conjoint relation is to be called degenerate if it is a mere
compound of dual characters.
❝A sign is in a conjoint relation to the thing denoted and to the mind.
If this triple relation is not of a degenerate species, the sign is
related to its object only in consequence of a mental association,
and depends upon a habit. Such signs are always abstract and general,
because habits are general rules to which the organism has become
subjected. They are, for the most part, conventional or arbitrary.
They include all general words, the main body of speech, and any
mode of conveying a judgment. For the sake of brevity I will call
them “tokens”. [Note. Peirce more frequently calls these “symbols”.]
Regards,
Jon
cc: https://www.academia.edu/community/LpeZP7
cc: https://mathstodon.xyz/@Inquiry/112156450035935700
***CoKA: Call for Contributions***
================================================================
Conceptual Knowledge Acquisition: Challenges, Opportunities, and Use Cases
Workshop at the 1st International Joint Conference on
Conceptual Knowledge Structures (CONCEPTS 2024)
September 9–13 2024, Cádiz, Spain
Workshop Website: https://www.kde.cs.uni-kassel.de/coka/
Conference website: https://concepts2024.uca.es
================================================================
Formal concept analysis (FCA) can help make sense of data and the underlying
domain --- provided the data is not too big, not too noisy, and representative of
the domain, and provided there is data in the first place. What if you don’t have such
data readily available but are prepared to invest in collecting it and have
access to domain experts or other reliable queryable sources of information?
Conceptual exploration comes to the rescue!
Conceptual exploration is a family of knowledge-acquisition techniques within
FCA. The goal is to build a complete implicational theory of a domain (with
respect to a fixed language) by posing queries to a domain expert. When properly
implemented, it is a great tool that can help organize the process of scientific
discovery.
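By way of a rough illustration, the sketch below mimics that query-and-counterexample
loop in Python. It is a deliberate simplification, not Ganter's full
attribute-exploration algorithm: the attribute set, the toy closure operator, and the
expert stub are all hypothetical, and a real implementation would ask only about
pseudo-intents rather than every premise.

    # Simplified sketch of the expert-query loop (hypothetical names throughout).
    from itertools import combinations

    ATTRIBUTES = {"aquatic", "has_gills", "breathes_air"}

    def closure(premise, examples):
        """Attributes shared by every known example that has all premise attributes."""
        matching = [attrs for attrs in examples.values() if premise <= attrs]
        if not matching:
            return set(ATTRIBUTES)
        result = set(ATTRIBUTES)
        for attrs in matching:
            result &= attrs
        return result

    def expert(premise, conclusion):
        """Stand-in for the domain expert: accept, or return a counterexample object."""
        # Hypothetical domain knowledge: whales are aquatic air-breathers without gills.
        if "has_gills" in conclusion and "has_gills" not in premise:
            return False, ("whale", {"aquatic", "breathes_air"})
        return True, None

    def explore(examples):
        accepted = []
        for size in range(len(ATTRIBUTES)):
            for premise in map(set, combinations(sorted(ATTRIBUTES), size)):
                while True:
                    conclusion = closure(premise, examples) - premise
                    if not conclusion:
                        break
                    ok, counterexample = expert(premise, conclusion)
                    if ok:
                        accepted.append((premise, conclusion))
                        break
                    name, attrs = counterexample
                    examples[name] = attrs  # refine the data with the expert's object
        return accepted

    for premise, conclusion in explore({"trout": {"aquatic", "has_gills"}}):
        print(f"{sorted(premise) or '(anything)'} => {sorted(conclusion)}")

In an actual exploration session the expert would of course be a human answering each
implication question, and the result would be a complete implication base for the
chosen attributes.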
Unfortunately, proper implementations are scarce and success stories of using
conceptual exploration are somewhat rare and limited in scope. With this
workshop, we intend to analyze the situation and, maybe, find a solution. If
- you have used conceptual exploration before to acquire new knowledge about some
domain or to build a satisfying conceptual representation of it;
- you attempted conceptual exploration in application to your problem but failed
miserably;
- you want to use conceptual exploration to analyze some domain, but you don’t
know where and how to start;
- you are aware of alternatives to conceptual exploration;
then come to the workshop to share your experiences, insights, ideas, and
concerns with us!
==================
Keywords and Topics
==================
- Knowledge Acquisition and Capture
- Conceptual Exploration
- Design Patterns and Paradigmatic Examples
- Successful Use Cases and Real-World Applications
- Challenges and Lessons Learned
- Application Principles
- Missing Theoretical Foundations
- Missing Technical Infrastructure
- Integration with Other Theories and Technologies
=========================
Duration, Format, and Dates
=========================
We invite contributions in the form of an extended abstract of up to two pages.
In addition, supplementary material, such as data sets, detailed descriptions,
or visualizations, may be submitted.
The workshop is planned for half a day within the conference dates and at the
same venue. It will consist of several short presentations each followed by a
plenary discussion.
Please send your contributions by *July 10, 2024* to
tom.hanika(a)uni-hildesheim.de. If you are not sure whether your contribution
matches the topics or the format of the workshop, you are welcome to contact the
organizers prior to submitting the abstract. An acceptance notification will be
sent within two weeks of receiving the submission.
===================
Workshop Organizers
===================
- Tom Hanika, University of Hildesheim
- Sergei Obiedkov, TU Dresden
- Bernhard Ganter, Ernst-Schröder-Zentrum, Darmstadt
All,
For many years, I have been associated with the CharGer software, a conceptual graph editor. I began this project in 1997, with the support and encouragement of the conceptual graph community, which met annually at the International Conference on Conceptual Structures (ICCS). Over the years a number of new features were added; these are documented in the CharGer Manual, which is distributed with the software.
Having recently received a couple of inquiries about the software, I thought I'd post its current address.
http://www.cs.uah.edu/~delugach/charger.php
The software is not currently actively supported. Based on Java Swing, it uses fairly old technology. There are probably better solutions at hand, but I've been given to understand that the old software is still useful, especially for preparing conceptual graphs for presentations.
I'll be happy to answer any questions about it.
Enjoy!
Harry Delugach
Harry S. Delugach, Ph.D.
-------------------
Associate Professor Emeritus (retired)
Computer Science Dept.
OKT N-351
University Of Alabama In Huntsville
Huntsville AL 35899 U.S.A.
voice: +1 256.824.6614
fax: +1 256.824.6239
delugach(a)uah.edu
http://www.cs.uah.edu/~delugach
The Second International Measuring Ontologies for Value Enhancement Workshop (MOVE24): https://sysaffairs.org/move/move24-cfp
Call for Papers and Presentations (Online)
Date: 14-15 June 2024 (two days)
Deadline for submissions: April 30th, 2024
Workshop papers and presentations will be published online open-access by ILEnA. Selected and extended papers from the workshop will be published in a peer-reviewed special MOVE volume of the Journal of Artificial Intelligence and Applications (AIA). We look forward to you joining our emerging community!
Simon (co-organiser)
http://www.polovina.me.uk
To refresh my memory, I reread Peirce's Lowell Lectures about Gamma graphs. The following passage from Lecture V (NEM 3, p. 365) explains what he meant in L376 when he said that he would keep the Gamma division:
"I must begin by a few words concerning gamma graphs; because it is by means of gamma graphs that I have been enabled to understand these subjects... In particular, it is absolutely necessary to representing the reasoning about these subjects that we should be able to reason with graphs about graphs and thus that we should have graphs of graphs."
That explains the issues we have been debating recently. Peirce had recognized the importance of graphs of graphs when he wrote "The better exposition of 1903 divided the system into three parts, distinguished as the Alpha, the Beta, and the Gamma, parts; a DIVISION I shall here adhere to, although I shall now have to add a Delta part in order to deal with modals".
That division would require some version of metalanguage for specifying modality and higher-order logic. But it does NOT imply all (or any) of the details that he happened to specify in 1903. He had already specified a version of metalanguage in 1898 (RLT), so he had long recognized its importance. The examples in the Lowell lectures are similar to his 1898 version. Since he never again used the details he specified in 1903 in any further MSS, it's unlikely that he would revive them in 1911.
The only feature he was reviving was the use of metalanguage. The 1898 version was just as good as anything he specified in 1903. Since it was simpler than the Gamma graphs, that would make it better. In fact, Peirce mentioned another version of metalanguage in R514 (June 1911) that was logically equivalent and syntactically similar to what he was writing in L376 (December 1911).
The novel features of L376 are sufficiently advanced to qualify as a fourth branch of EGs. But they require a bit more explanation. As I said before, they depend critically on the expertise of Allan Risteen. For that information, see the references to Risteen that are listed in the index to EP2. And the applications discussed in L376 have strong resemblances to the applications of the very similar IKL logic in 2006. For those, see the brief discussion and detailed references in https://jfsowa.com/ikl .
I'll write more about these topics in another note later this week.
John
Pragmatic Semiotic Information • 1
• http://inquiryintoinquiry.com/2024/03/03/pragmatic-semiotic-information-1/
All,
Information • What's it good for?
The good of information is its use in reducing our uncertainty
about an issue which comes before us. But uncertainty comes
in many flavors and so the information which serves to reduce
uncertainty can be applied in several ways. The situations of
uncertainty human agents commonly find themselves facing have
been investigated under many headings, literally for ages, and
the categories subtle thinkers arrived at long before the dawn
of modern information theory still have their uses in setting
the stage of an introduction.
Picking an example of a subtle thinker almost at random, the
philosopher‑scientist Immanuel Kant surveyed the questions of
human existence within the span of the following three axes.
• What's true?
• What's to do?
• What's to hope?
The third question is a bit too subtle for the present frame
of discussion but the first and second are easily recognizable
as staking out the two main axes of information theory, namely,
the dual dimensions of “information” and “control”. Roughly the
same space of concerns is elsewhere spanned by the dual axes of
competence and performance, specification and optimization, or
just plain knowledge and skill.
A question of what's true is a “descriptive question” and
there exist what are called “descriptive sciences” devoted
to answering descriptive questions about any domain of
phenomena one might care to name.
A question of what's to do, in other words, what must be done
by way of achieving a given aim, is a “normative question” and
there exist what are called “normative sciences” devoted to
answering normative questions about any domain of problems
one might care to address.
Since information plays its role on a stage set by uncertainty,
a big part of saying what information is will necessarily involve
saying what uncertainty is. There is little chance the vagaries
of a word like “uncertainty”, given the nuances of its ordinary,
poetic, and technical uses, can be corralled by a single pen, but
there do exist established models and formal theories which manage
to address definable aspects of uncertainty and these do have enough
uses to make them worth looking into.
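By way of a concrete example of one such formal theory, Shannon's model measures
uncertainty as entropy and measures the information carried by an observation as
the drop in entropy it brings about. The sketch below is only an illustration of
that idea; the distributions in it are made-up numbers, not anything drawn from
the discussion above.

    import math

    def entropy(dist):
        """Shannon entropy, in bits, of a distribution given as {outcome: probability}."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Prior uncertainty: four equally likely answers to some question.
    prior = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}

    # After a piece of evidence rules out two of them.
    posterior = {"a": 0.5, "b": 0.5, "c": 0.0, "d": 0.0}

    print(f"prior uncertainty:     {entropy(prior):.2f} bits")      # 2.00 bits
    print(f"remaining uncertainty: {entropy(posterior):.2f} bits")  # 1.00 bit
    print(f"information gained:    {entropy(prior) - entropy(posterior):.2f} bits")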
Regards,
Jon
cc: https://www.academia.edu/community/VBP2Jz
cc: https://mathstodon.xyz/@Inquiry/112032763420333668
In my previous note, I forgot to include a link to the updated (March 8) slides for my talk on March 6. Here is the URL: https://ontologforum.s3.amazonaws.com/OntologySummit2024/TrackA/LLMs-are-cl… .
I also received an offline note about a linguistic theory that emphasizes semantics rather than syntax:
The method of Generative semantics by Seuren, https://www.mpi.nl/sites/default/files/2020-07/Seuren_Abralin_Article_2020.… . Other linguists and computational linguists have proposed, developed, and/or implemented related versions.
Methods that emphasize semantics have been used in conjunction with ontology to correct and avoid the errors and hallucinations created by LLMs. For critical applications, 99% correct can be a disaster. Nobody wants to fly in an airplane that has a 1% chance of crashing.
LLMs are very good for translating linear languages and notations. But when accuracy is essential, precise semantics is much more important than elegant syntax.
I also want to emphasize Section 3, which begins with slide 32, titled Neuro-Cognitive Cycles. The word 'cognitive' is much more general than 'symbolic', since it includes images as well as linear notations for language. Note slide 7, which shows an image in the mind of a policeman and a man's attempt to reconstruct an image from a verbal explanation.
In slide 24, I added a picture of a baby who is using sign language. For multi-dimensional topics, a sign language can be more detailed and precise than a spoken language.
This section also emphasizes Peirce's methods of reasoning in Slides 33 and 34, and their applications in the remaining slides. Slide 35 on the Central Executive, as defined by neuroscientists, shows how to avoid the errors, hallucinations, and dangers created by the Large Language Models (LLMs): Include a Central Executive, which has the responsibility and the power to evaluate any proposed language or actions and revise or reject those that may be erroneous or even dangerous.
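As a purely illustrative sketch of that gatekeeping idea (not anything taken from the slides), the following Python fragment accepts an LLM proposal only when it passes explicit checks and otherwise retries or escalates to a human; the llm_propose stub and the rule list are hypothetical.

    def llm_propose(prompt):
        """Stand-in for a call to a large language model."""
        return "Proceed with takeoff despite a 1% chance of engine failure."

    def violates_rules(statement, rules):
        """Toy check: flag any statement containing a forbidden phrase."""
        return any(phrase in statement for phrase in rules)

    def central_executive(prompt, rules, max_tries=3):
        """Accept a proposal only if it passes the explicit checks; otherwise give up."""
        for _ in range(max_tries):
            proposal = llm_propose(prompt)
            if not violates_rules(proposal, rules):
                return proposal
        return None  # nothing acceptable; escalate to a human reviewer

    safety_rules = ["chance of engine failure"]
    result = central_executive("Plan the takeoff.", safety_rules)
    print(result or "Proposal rejected; human review required.")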
Also note slide 39 on "Wicked Problems"; slide 40, which explains "Why Humans are not obsolete"; and slide 41, which asks whether there is "A Path to AGI?" The answer to that question is a joke by George Burns, which might be taken seriously.
That reminds me of a remark by Ludwig Wittgenstein: "It's possible to write a book on philosophy that consists entirely of jokes." A Zen Buddhist could write a book on religion that consists entirely of jokes. Depending on the definition of 'joke', somebody might say that they have.
John