Dear all,
Greetings! I hope this email finds you and your family in good health.
We are in the middle of finalizing the program for another 24-hour
Transcontinental Meet-'n-Greet celebrating the philosophy of Karl Popper.
That is, as announced last year, we continue the tradition of hosting this
event during the second weekend of January for the foreseeable future. In
2025, it falls on Saturday, January 11. It will start at 3:00pm GMT
(London, UK) on Saturday and continue until 6:00pm GMT on Sunday, January 12.
As you may already know, two important Popper scholars passed away in
2024: David Miller (1935-2024)
<https://philosophy.tabrizu.ac.ir/?lang=en> and Mark Notturno (1953-2024). We
will be remembering them on Sunday, January 12, starting at noon GMT. For the
latest version of the program, click here
<https://ourkarlpopper.net/2025-transcontinental-meet-n-greet/>.
We still have some open time slots (1 or 2 hrs) between 3:30am and noon GMT
on Sunday, January 12. If you would like to use one of these slots to
facilitate a conversation on a specific Popper-related theme, send an email
to ourkarlpopper(a)gmail.com.
We would also like to take this opportunity to invite you all, if you
have not done so already, to join the googlegroup,
ourkarlpopper(a)googlegroups.com. We created this googlegroup in 2020 for
organizational purposes, but we have done little with it so far. There are
tons of things that anyone can do to help spread the message of
Karl Popper's philosophy.
We are looking forward to getting your suggestions and feedback.
Feel free to forward this email to anyone who might be interested in
learning more about Karl Popper and his philosophy and/or who would like to
help with the programming and running of the Meet 'n Greets, the
googlegroup or the creation and management of the Popper-type dashboard.
To remain informed on the latest details of the 2025 Meet 'n Greet itself,
check https://ourkarlpopper.net/2025-transcontinental-meet-n-greet/
My very best, Margaretha H.
On behalf of the organizing team (Luc Castelein, Rafe Champion, Elyse
Hargreaves, Phil Wood, Margaretha Hendrickx)
https://ourkarlpopper.net/2025-transcontinental-meet-n-greet/
PS. If you would like to be removed from this mailing list, send an email
to ourkarlpopper(a)gmail.com.
--
If you feed them stones instead of bread, the young people will revolt,
even if in so doing they mistake a baker for a stone-thrower.
(Karl Popper, *Objective Knowledge*, 1979)
First-order logic is necessary and sufficient to specify any and every program that runs on a digital computer. But OWL 2 is limited, quirky, and far more difficult to learn and use than FOL.
Recommendation: design OWL 3 to be exactly compatible with the OWL 2 hierarchy, but replace Turtle or other notations for the constraint language with an easy-to-read, easy-to-write, and easy-to-remember version of FOL. For upward compatibility, keep all the OWL 2 features and syntax as an option that shall remain available in all future upgrades of OWL 3.
Syntax for the OWL 3 constraint language: the reserved words and phrases used to represent constraints are Some; Every; and; or; not; if ... then ...; only if; if and only if.
Every statement in the constraint language shall be a syntactically correct English sentence, which uses the reserved words above plus whatever words or symbols anybody chooses to represent entities in the subject domain.
Result: Statements in the constraint language may be read without any training by anybody who can read English. Learning to write the constraint language will require much less training than learning to write anything in OWL 2.
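To make the idea concrete, here is a minimal sketch (my own illustration, not part of any OWL specification or existing tool) of how a few of these controlled-English sentence forms could be translated mechanically to FOL. The predicate names (Dog, Animal, Pet) are placeholders chosen by the author, as the proposal requires; only the reserved words are fixed:

```python
import re

# Each pattern maps one controlled-English sentence form to an FOL formula.
# Only the reserved words (Every, Some, No, and, not, ->) are fixed;
# the predicate names come from the author's subject domain.
PATTERNS = [
    # "Every X is a Y."  ->  forall x (X(x) -> Y(x))
    (re.compile(r"^Every (\w+) is an? (\w+)\.$"),
     lambda m: f"forall x ({m.group(1)}(x) -> {m.group(2)}(x))"),
    # "Some X is a Y."   ->  exists x (X(x) and Y(x))
    (re.compile(r"^Some (\w+) is an? (\w+)\.$"),
     lambda m: f"exists x ({m.group(1)}(x) and {m.group(2)}(x))"),
    # "No X is a Y."     ->  forall x (X(x) -> not Y(x))
    (re.compile(r"^No (\w+) is an? (\w+)\.$"),
     lambda m: f"forall x ({m.group(1)}(x) -> not {m.group(2)}(x))"),
]

def translate(sentence: str) -> str:
    """Translate one controlled-English constraint to an FOL string."""
    for pattern, build in PATTERNS:
        m = pattern.match(sentence)
        if m:
            return build(m)
    raise ValueError(f"not a recognized constraint form: {sentence!r}")

print(translate("Every Dog is an Animal."))  # forall x (Dog(x) -> Animal(x))
print(translate("Some Cat is a Pet."))       # exists x (Cat(x) and Pet(x))
```

A real version would need a far richer grammar, but the point stands: each sentence form has one fixed, unambiguous mapping to FOL, so anyone who can read English can read the constraints.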
Observation: the very intelligent logicians who designed OWL 1 and 2 made an incorrect assumption about issues of decidability.
(1) Undecidable statements are very complex, and nobody but a highly trained and knowledgeable logician would know how to write one; 99.99% of software developers would not know how to read or write such a statement.
(2) Undecidable statements only cause a problem for a theorem prover; they would NEVER cause any problem when they are used to state or apply a constraint.
(3) But the syntactic restrictions that prevent undecidable statements cause Turtle and other notations to become far more complex, unreadable, and unwritable than pure, simple FOL expressed as English sentences.
(4) For authors who do not read or write English, it's easy to specify exactly equivalent versions in every language spoken at the United Nations. Every one of those versions would have a simple and efficient translation to the English version.
John
**apologies for cross-posting**
The HHAI 2025 Doctoral Consortium (DC) will take place as part of the 4th
International Conference on Hybrid Human-Artificial Intelligence in June
2025, Pisa, Italy: https://hhai-conference.org/2025/
This forum is intended for early- as well as middle/late-stage PhD students in
the field of Hybrid Intelligence, which focuses on the study of Artificial
Intelligence systems that cooperate synergistically, proactively and
purposefully with humans, amplifying instead of replacing human
intelligence. The Doctoral Consortium will take place in person at the HHAI
2025 conference.
The DC provides PhD students an opportunity to present and discuss their
doctoral research ideas and progress in a supportive, formative and yet
critical environment, and to receive feedback from reviewers, mentors and
peers from the field of Hybrid Intelligence. The Doctoral Consortium will also provide
opportunities to network and build collaborations with other members of the
HHAI community. We welcome submissions across HHAI-related research domains
such as AI, HCI, cognitive and social sciences, philosophy & ethics,
complex systems, and others (see “Topics of interest”).
The event is intended for early as well as middle/late-stage PhD candidates
and asks them to formulate and submit a concrete PhD research proposal,
preferably supported by some preliminary results. The proposal will be
peer-reviewed. If accepted, students must register and physically attend
the event, which will include a range of interactive activities (among
which presentations and mentoring lunch). Details for the submission are
found below under “Submission Details”.
**Important Dates**
Submission Deadline: January 24, 2025
Reviews Released: March 18, 2025
Camera-ready Papers: April 13, 2025
Doctoral Consortium: June 10, 2025
All deadlines are 23:59 AoE (anywhere on Earth)
**Topics of Interest**
We invite research on different challenges in Hybrid Human-Artificial
Intelligence. The following list of topics is illustrative, not exhaustive:
• Human-AI interaction, interpretation and collaboration
• Adaptive human-AI co-learning and co-creation
• Learning, reasoning and planning with humans and machines
in the loop
• User modeling and personalisation
• Integration of learning and reasoning
• Transparent, explainable, and accountable AI
• Fair, ethical, responsible, and trustworthy AI
• Societal awareness of AI
• Multimodal machine perception of real-world settings
• Social signal processing
• Representation learning for Communicative or Collaborative AI
• Symbolic representations for human-centric AI
• Human-AI Coevolution
• Foundation models and humans
• Human cognition-aware AI
• Decentralized human-AI systems
• Reliability and robustness in human-AI systems
• Applications of hybrid human-AI intelligence
**Submission Details**
All proposals must be submitted electronically via the EasyChair conference
submission system: https://easychair.org/my/conference?conf=hhai2025. Each
PhD student should provide the following information on Easychair:
• Research proposal (details below).
• Supplementary material: a maximum of 2-page PDF listing the
following:
• Names and affiliations of your research supervisor(s).
• Dissertation status: when you began your PhD and when you expect
to complete it. Are there any constraints on the PhD duration from your
institute? This helps us tailor feedback to a realistic timeline for
your program.
• Benefits statement: 1-2 paragraphs describing what you hope
to gain by participating in the doctoral consortium. Highlight specific
points where you require feedback.
• A personal statement citing three key papers in the field,
briefly describing how they influenced your work and how HHAI is defined
both in those works and in your own research.
• List of your relevant publications (if available).
**Research Proposal Details**
Students should submit a description of their PhD research proposal of at
most 6 pages, excluding references. Papers should be written in English and
adhere to the IOS formatting guidelines.
The papers should have a single author (the PhD candidate) and submissions
are *not* anonymous. Supervisors, other involved persons, and funding
agencies should be acknowledged in an Acknowledgements section.
Research proposals should contain the following elements:
• Context: The background and motivation for your research,
including the related work that frames your research
• Research questions/challenges: what are the research
questions/challenges that your dissertation addresses? Try to highlight how
it differs from existing literature.
• Method/approach and evaluation: how is each of the research
questions answered? How are results evaluated? If you are planning to
conduct studies or build prototypes, provide a brief description.
• Preliminary results (if available). Highlight results and
contribution to date and the timeplan for projected steps.
• Discussion and future work: What are intermediary
conclusions, and what are the planned next steps?
**Upon Acceptance of the Doctoral Consortium Proposal**
Accepted papers will be published in the main conference proceedings of the
Fourth International Conference on Hybrid Human-Machine Intelligence, in the
Frontiers in AI and Applications (FAIA) series by IOS Press. Authors will
have the opportunity to opt-out from being included in these proceedings,
with only their proposal title being mentioned on the web page.
Participants will be expected to give short, informal presentations of
their work during the consortium, to be followed by a discussion.
In case there are any questions regarding the call, or if you are unsure
about submitting your work to the HHAI 2025 Doctoral Consortium, please
reach out to dc(a)hhai-conference.org
**DC Chairs**
Jennifer Renoux (Örebro University)
Salvatore Ruggieri (University of Pisa)
Stefan Schlobach (Vrije Universiteit Amsterdam)
Shihan Wang (Utrecht University)
Igor, Mike, Bill, List,
As Yogi Berra said, these discussions are "deja vu all over again."
Re "NoSQL": the person who coined that term later reinterpreted it as "Not Only SQL." The original SQL was designed for data that is best organized in a table. The fact that other data might be better represented in other formats does not invalidate the use of tables for data that is naturally tabular.
Re tree structures in ontologies: a tree structure for the NAMES of an ontology does NOT imply that the named data happens to be a tree. Some of the data might be organized in a tree, but other data might be better organized in a table, list, vector, matrix, tensor, graph, multidimensional shape, or a combination of all of them.
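For example, the same set of facts can be held as a table or as a graph, and the choice mainly affects which queries are convenient. A toy illustration (all names and data are hypothetical):

```python
# Toy illustration: the same "supervises" facts held two ways.
# Neither representation is more "true" than the other; the shape of the
# data and the queries you need determine which fits best.

# 1. Tabular form: one row per fact, natural for SQL-style storage.
table = [
    ("alice", "bob"),
    ("alice", "carol"),
    ("bob",   "dave"),
]

# 2. Graph form: adjacency map, natural for traversal queries.
graph = {}
for boss, report in table:
    graph.setdefault(boss, []).append(report)

def all_reports(g, boss):
    """Everyone reachable below a node: trivial on the graph form,
    awkward to express against the flat table."""
    found = []
    for r in g.get(boss, []):
        found.append(r)
        found.extend(all_reports(g, r))
    return found

print(all_reports(graph, "alice"))  # ['bob', 'dave', 'carol']
```

The table and the graph are interconvertible; the names used to index into either structure say nothing about the structure of the data itself.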
The following survey article covers about 40 years of developments, from 1970 to 2010. Some new methods have been invented since then, but 90% of the discussions are about new names for old ideas reinvented by people who didn't know the history. I wrote the survey, but 95% of the links are to writings by other people: https://jfsowa.com/ikl .
And by the way, I agree with Bill Burkett (quoted below). He is one of the people I have collaborated with on various committees over many years. We viewed the deja vu over and over and over. That's one reason why I don't get excited by new names.
John
----------------------------------------
From: "Igor Toujilov' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
Mike,
I would not say "Ontologies are represented in graph structures" only.
Ontologies can be represented in a wide range of formalisms, including
graphs, which are just one possible representation. For example, there
are tools to store the same ontology in different representation
formats: RDF/XML, Turtle, OWL Functional Syntax, Manchester OWL
Syntax, etc. Yes, RDF and Turtle are graph representations. But OWL
Functional and Manchester syntaxes have nothing to do with graphs. And
yet they represent the same ontology.
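Igor's point can be made concrete with a toy sketch (simplified strings of my own, not the output of any real converter; tools such as Protégé or the OWL API do this properly). One subclass axiom, held abstractly, rendered in a graph-shaped format and in a frame-shaped one:

```python
# One axiom held abstractly as a tuple, independent of any serialization.
axioms = [("Dog", "subClassOf", "Animal")]

def to_turtle(axioms):
    # Turtle: one triple per line -- a graph-shaped serialization.
    return "\n".join(f":{s} rdfs:subClassOf :{o} ." for s, _, o in axioms)

def to_manchester(axioms):
    # Manchester syntax: frame-based, nothing graph-like about it.
    return "\n".join(f"Class: {s}\n    SubClassOf: {o}" for s, _, o in axioms)

print(to_turtle(axioms))
print(to_manchester(axioms))
```

Both renderings carry exactly the same axiom; the graph is a property of one serialization, not of the ontology itself.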
I also disagree that "the workforce needs conventional everyday
interfaces driven by relational databases". It depends on your system
architecture. Today many systems use No-SQL or graph databases
successfully without any need for relational databases.
In real systems, the difference between data models and ontologies can
be sharp or subtle. Some systems continue using relational databases
while performing some tasks on ontologies. Other systems have
ontologies that are tightly integrated in the production process, so
sometimes it is hard to separate the ontologies from data. And of
course, there is a wide range of systems in between those extreme
cases.
Igor
On Mon, 30 Dec 2024 at 19:14, Mike Peters <mike(a)redworks.co.nz> wrote:
>
> Hi David
>
> Great question.
>
> Ontologies are represented in graph structures. Non-relational databases like semantic or graph databases are better suited for this job, and ontologists (I'm not one) have no problem working with them.
>
> However, the workforce needs conventional everyday interfaces driven by relational databases. So, there is an import/export issue that David Hay could have written a book about, and I wish he had. His explanations are excellent. His book on UML and data modelling also bridged two different ways of looking at the world.
>
> Mike
>
>
> On Tuesday, 31 December 2024 at 07:32:15 UTC+13 deddy wrote:
>>
>> Mike -
>>
>> >
>> > pity he never wrote a book on data modelling ontologies.
>> >
>>
>> The distinction / difference between data models & ontologies is what...?
>>
>> ______________________
>> David Eddy
>>
>>
>> > -------Original Message-------
>> > From: Mike Peters <mi...(a)redworks.co.nz>
>> > To: ontolo...(a)googlegroups.com <ontolo...(a)googlegroups.com>
>> > Subject: Re: [External] Re: [ontolog-forum] Re: Design Pattern Ontology
>> > Sent: Dec 30 '24 13:19
>> >
>> > Hi Bill
>> >
>> > I agree; those are excellent books. Their work or influence is the
>> > basis for every relational database I have built.
>> >
>> > David Hay also wrote this one:
>> >
>> > Hay, D. C. (2011). UML and Data Modeling: A Reconciliation, Technics
>> > Publications.
>> >
>> > I found it very helpful. It's a pity he never wrote a book on data
>> > modelling ontologies.
>> >
>> > Mike Peters
>> > -----------------------------------
>> > Ajabbi
>> >
>> > PO Box 902
>> > Invercargill 9840
>> > New Zealand
>> >
>> > M 64+ 22 600 5006
>> >
>> > Skype redworksnz
>> > Email mi...(a)redworks.co.nz
>> > Facebook www.facebook.com/NZMikePeters
>> >
>> > Home www.mtchocolate.com
>> >
>> > Art Studio www.redworks.co.nz
>> > Software Architecture www.blog.ajabbi.com
>> >
>> > ------------------------------------------
>> >
>> > On Tue, 31 Dec 2024 at 06:14, 'Burkett, William [USA]' via
>> > ontolog-forum <ontolo...(a)googlegroups.com> wrote:
>> >
>> > > In the "data modelling world" (which, to me, is not different than
>> > > the "ontology world"), there are books by David Hay and Len
>> > > Silverston that are overtly focused on design patterns:
>> > >
>> > > Hay, D. C. (1996). Data model patterns : conventions of thought. New
>> > > York, Dorset House Pub.
>> > >
>> > > Hay, D. C. (2006). Data model patterns : a metadata map. Amsterdam ;
>> > > Boston, Elsevier Morgan Kaufmann.
>> > >
>> > > Silverston, L. (2009). The data model resource book, Vols 1-3. New
>> > > York, John Wiley.
>> > >
>> > > These books provide a catalog of reusable and adaptable patterns for
>> > > all kinds of concepts that recur in most data models/ontologies.
>> > >
>> > > Bill Burkett
On Sunday, CBS 60 Minutes presented a segment about Khanmigo, an AI tutor powered by LLM technology. It has shown some very impressive results, and the teachers who use it in their classes have found it very helpful. It doesn't replace teachers. It helps them by offloading routine testing and tutoring:
https://www.cbsnews.com/video/khanmigo-ai-tutor-60-minutes-video-2024-12-08/
As I have said many times, there are serious limitations to the LLM technology, which requires evaluation to avoid serious errors and hallucination-driven disasters. Question: How can Khanmigo and related systems avoid those disasters?
I do not know the details of the Khanmigo implementation. But from the examples they showed, I suspect that they avoid mistakes by (1) starting with a large text that was written, tested, and verified by humans (possibly with some computer aid); (2) having the system do Q/A for each topic primarily by translation; (3) relying on LLM technology, which was first developed for translation and Q/A; and (4) noting that if the source text is tested and verified, a Q/A system based on that text can usually be very good and dependable.
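The grounding pattern in points (1) and (4) can be sketched roughly as follows. This is a hypothetical illustration of the general pattern, not Khanmigo's actual design; all topic names and answer strings are made up:

```python
# Sketch: answer only from a human-verified source text, and refuse
# when the source does not cover the question, rather than letting a
# model guess (where hallucinations would creep in).

VERIFIED_SOURCE = {
    # topic keyword -> vetted answer text (written and checked by humans)
    "photosynthesis": "Plants convert light, water, and CO2 into glucose.",
    "mitosis": "Mitosis is cell division producing two identical cells.",
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, vetted in VERIFIED_SOURCE.items():
        if topic in q:
            return vetted  # only ever return human-verified text
    # No verified coverage: refuse rather than fabricate.
    return "I don't have verified material on that; ask your teacher."

print(answer("What is photosynthesis?"))
print(answer("Who won the 1962 World Cup?"))
```

A real system would use an LLM to match questions to passages and to rephrase the vetted answer, but the key design choice is the same: the content the student sees is anchored to the verified text, and out-of-scope questions are refused.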
But the CBS program did show an example where the system made some mistakes.
Summary: This example shows great potential for the LLM technology. But it also shows the need for evaluation by the traditional AI symbolic methods. Those methods have been tried and tested for over 50 years, and they are just as important today as they ever were.
As a reminder: LLMs can be used with a large volume of sources to find information and to generate hypotheses. But if the source is very large and unverified for accuracy, it can and does find and generate erroneous or even dangerously false information. That is why traditional AI methods are essential for evaluating what they find in a large volume of source data.
Danger: the larger the sources, the more likely the LLMs are to find bad data. Without evaluation, bigger is definitely not better. I am skeptical about attempts to create super-large volumes of LLM data. Those systems consume enormous amounts of electricity with diminishing returns on investment.
There is already a backlash by employees of Google and Elon M.
John
--------------------------------------------------
CALL FOR PAPERS
HHAI 2025 - Hybrid Human-Artificial Intelligence
https://hhai-conference.org/2025/
June 9–13, 2025, Pisa, Italy
--------------------------------------------------
Hybrid Human-Artificial Intelligence (HHAI) is an international conference series that focuses on the study of Artificial Intelligence systems that cooperate synergistically, proactively and purposefully with humans, amplifying instead of replacing human intelligence. HHAI aims for AI systems that work together with humans, emphasizing the need for adaptive, collaborative, responsible, interactive and human-centered intelligent systems. HHAI systems leverage human strengths and compensate for human weaknesses, while taking into account social, ethical and legal considerations.
HHAI 2025 will be held on June 9–13, 2025, in Pisa, Italy, and is the fourth conference in the series. The HHAI field is driven by developments in AI, but it also requires fundamentally new approaches and solutions. Thus, we encourage collaborations across research domains such as AI, HCI, cognitive and social sciences, philosophy and ethics, complex systems, and others. In this fourth international conference, we invite scholars from these fields to submit their best original – new as well as in progress – works, and visionary ideas on Hybrid Human-Artificial Intelligence.
**Join the HHAI community and keep up with the news:**
Website: https://hhai-conference.org/2025/
Linkedin: https://www.linkedin.com/company/hhai-conference/
X: https://x.com/hhai_conference
Mastodon: https://sigmoid.social/@hhai
IMPORTANT DATES
Abstract submission: January 17th, 2025
Paper submission: January 24th, 2025
Acceptance notification: March 16th, 2025
Camera-ready version: April 13th, 2025
Conference: June 9-13, 2025
LOCATION
HHAI 2025 will be an in-person, single-track conference organized in Pisa, Italy. Workshops and tutorials (9-10 June) will be held at the University of Pisa and Scuola Normale Superiore in Pisa, Italy. The main conference (11-13 June) will be held at CNR.
TOPICS
We invite research on different challenges in Hybrid Human-Artificial Intelligence. The following list of topics is illustrative, not exhaustive:
- Human-AI interaction, interpretation and collaboration
- Adaptive human-AI co-learning and co-creation
- Learning, reasoning and planning with humans and machines in the loop
- User modeling and personalisation
- Integration of learning and reasoning
- Transparent, explainable, and accountable AI
- Fair, ethical, responsible, and trustworthy AI
- Societal awareness of AI
- Multimodal machine perception of real-world settings
- Social signal processing
- Representation learning for Communicative or Collaborative AI
- Symbolic representations for human-centric AI
- Human-AI Coevolution
- Foundation models and humans
- Human cognition-aware AI
- Decentralized human-AI systems
- Reliability and robustness in human-AI systems
- Applications of hybrid human-AI intelligence
We welcome contributions about all types of technology, from robots and conversational agents to multi-agent systems and machine learning models.
PAPER TYPES
In this conference, we wish to stimulate the exchange of novel ideas and interdisciplinary perspectives. To do this, we will accept three different types of papers:
- Full papers present original, impactful work (12 pages excluding references)
- Blue sky papers present visionary ideas to stimulate the research community (8 pages excluding references)
- Working papers present work in progress (8 pages excluding references)
Accepted full papers and Blue sky papers will be published in the Proceedings of the Fourth International Conference on Hybrid Human-Machine Intelligence, in the Frontiers in AI and Applications (FAIA) series by IOS Press. Working papers can be included in these proceedings, unless the authors request the paper to remain unpublished.
REVIEWING PROCESS & SUBMISSION GUIDELINES
Submissions of full, blue sky, and working papers should be original work without substantial overlap with previously published papers. All submissions should be written in English and adhere to the IOS formatting guidelines; detailed submission instructions can be found on the conference website.
**Important**
HHAI 2025 will follow a double-blind reviewing process. Thus, submissions must exclude all information that might disclose the authors’ names or affiliations.
All studies involving human participants should have received human-research ethics consent from the relevant institutions and mention this in the paper.
Work should be submitted in PDF format via Easychair (link to be announced soon).
On acceptance, at least one author should attend the conference. A significant contribution is expected from all authors.
PROGRAM CHAIRS
Chiara Boldrini (IIT-CNR, IT)
Luca Pappalardo (ISTI-CNR, IT)
Andrea Passerini (University of Trento, IT)
Shenghui Wang (University of Twente, NL)
CONFERENCE CHAIRS
Michela Milano (University of Bologna, IT)
Dino Pedreschi (University of Pisa, IT)
Stuart Russell (University of California Berkeley, US)
Ilaria Tiddi (Vrije Universiteit Amsterdam, NL)
CONTACT INFORMATION
For questions, you can reach the program chairs at: program(a)hhai-conference.org
The QwQ system combines LLM technology with traditional AI methods to do the evaluation. This is a hybrid technique that our Permion.ai system uses.
I don't know anything more than what I read in the following text and the link to a more detailed article. But I believe that hybrid methods are essential for developing reliable and trustworthy AI systems.
John
----------------------
QwQ-32B is an experimental AI model designed to approach problem-solving with deep introspection, emphasizing questioning and reflection before reaching conclusions. Despite its limitations, including language-switching issues and recursive reasoning loops, QwQ demonstrates impressive capabilities in areas like mathematics and coding. For AI practitioners, QwQ represents an attempt to embed a philosophical dimension into reasoning processes, striving for deeper and more robust outcomes—important for teams aiming to build AI that is both effective and adaptable.
QwQ: Reflect Deeply on the Boundaries of the Unknown
https://qwenlm.github.io/blog/qwq-32b-preview
What does it mean to think, to question, to understand? These are the deep waters that QwQ (Qwen with Questions) wades into. Like an eternal student of wisdom, it approaches every problem - be it mathematics, code, or knowledge of our world - with genuine wonder and doubt. QwQ embodies that ancient philosophical spirit: it knows that it knows nothing, and that’s precisely what drives its curiosity. Before settling on any answer, it turns inward, questioning its own assumptions, exploring different paths of thought, always seeking deeper truth. Yet, like all seekers of wisdom, QwQ has its limitations. This version is but an early step on a longer journey - a student still learning to walk the path of reasoning. Its thoughts sometimes wander, its answers aren’t always complete, and its wisdom is still growing. But isn’t that the beauty of true learning? To be both capable and humble, knowledgeable yet always questioning? We invite you to explore alongside QwQ, embracing both its insights and its imperfections as part of the endless quest for understanding.
Limitations
QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities. As a preview release, it demonstrates promising analytical abilities while having several important limitations:
- Language Mixing and Code-Switching: The model may mix languages or switch between them unexpectedly, affecting response clarity.
- Recursive Reasoning Loops: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
- Safety and Ethical Considerations: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.
- Performance and Benchmark Limitations: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.
Performance
Through deep exploration and countless trials, we discovered something profound: when given time to ponder, to question, and to reflect, the model’s understanding of mathematics and programming blossoms like a flower opening to the sun. Just as a student grows wiser by carefully examining their work and learning from mistakes, our model achieves deeper insight through patient, thoughtful analysis. This process of careful reflection and self-questioning leads to remarkable breakthroughs in solving complex problems. Our journey of discovery revealed the model’s exceptional ability to tackle some of the most challenging problems in mathematics and programming, including:
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark, a challenging benchmark for evaluating scientific problem-solving abilities through graduate-level questions.
- AIME: the American Invitational Mathematics Examination, which tests mathematical problem solving with arithmetic, algebra, counting, geometry, number theory, probability, and other secondary school math topics.
- MATH-500: The 500 test cases of the MATH benchmark, a comprehensive dataset testing mathematical problem-solving.
- LiveCodeBench: A challenging benchmark for evaluating code generation and problem solving abilities in real-world programming scenarios.