Igor, Mike, Bill, List,
As Yogi Berra said, these discussions are "Deja vu all over again."
Re "NoSQL": The person who coined that term later rewrote it as "Not Only SQL". The original SQL was designed for data that is best organized in tables. The fact that other data might be better represented in other formats does not invalidate the use of tables for data that is naturally tabular.
Re tree structure in ontologies: A tree structure for the NAMES of an ontology does NOT imply that the named data happens to be a tree. Some of the data might be organized in a tree, but other data might be better organized in a table, list, vector, matrix, tensor, graph, multidimensional shapes, or combinations of all of them.
The following survey article covers 40 years of developments, from 1970 to 2010. Some new methods have been invented since then, but 90% of the discussions are about new names for old ideas re-invented by people who didn't know the history. I wrote the survey, but 95% of the links are to writings by other people: https://jfsowa.com/ikl
And by the way, I agree with Bill Burkett (on the list below). He is one of the people I collaborated with on various committees over many years. We viewed the deja vu over and over and over. That's one reason why I don't get excited by new names.
John
----------------------------------------
From: "'Igor Toujilov' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
Mike,
I would not say "Ontologies are represented in graph structures" only.
Ontologies can be represented in a wide range of formalisms, including
graphs, which are just one possible representation. For example, there
are tools to store the same ontology in different representation
formats: RDF/XML, Turtle, OWL Functional Syntax, Manchester OWL
Syntax, etc. Yes, RDF and Turtle are graph representations. But OWL
Functional and Manchester syntaxes have nothing to do with graphs. And
yet they represent the same ontology.
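Igor's point can be shown with a toy sketch in Python (the class names are invented, and this is not real OWL tooling): the same abstract axioms rendered in a triple-style syntax and a frame-style syntax.

```python
# Toy illustration: a tiny "ontology" held as abstract
# (subject, predicate, object) statements, printed in two different
# concrete syntaxes. The statements are the ontology; each syntax is
# just one rendering of it.

axioms = [
    ("Dog", "subClassOf", "Animal"),
    ("Cat", "subClassOf", "Animal"),
]

def to_turtle(stmts):
    # Graph-oriented triple syntax: one triple per line.
    return "\n".join(f":{s} rdfs:{p} :{o} ." for s, p, o in stmts)

def to_manchester(stmts):
    # Frame-oriented syntax: no triples in sight, same content.
    return "\n".join(f"Class: {s}\n    SubClassOf: {o}" for s, p, o in stmts)

print(to_turtle(axioms))
print(to_manchester(axioms))
```

Real tools (e.g., the OWL API or Protege) do the same round-trip between RDF/XML, Turtle, Functional, and Manchester syntaxes: the ontology is the set of axioms, independent of any one serialization.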
I also disagree that "the workforce needs conventional everyday
interfaces driven by relational databases". It depends on your system
architecture. Today many systems use No-SQL or graph databases
successfully without any need for relational databases.
In real systems, the difference between data models and ontologies can
be sharp or subtle. Some systems continue using relational databases
while performing some tasks on ontologies. Other systems have
ontologies that are tightly integrated in the production process, so
sometimes it is hard to separate the ontologies from data. And of
course, there is a wide range of systems in between those extreme
cases.
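A minimal sketch of that architectural choice, using only Python's standard library (the table and record here are invented for illustration): the same fact stored relationally in SQLite and as a JSON document, NoSQL-style.

```python
import json
import sqlite3

# The same fact stored two ways; neither model is "the" right one --
# the choice depends on the system architecture.
record = {"id": 1, "name": "Dog", "parent": "Animal"}

# Relational: fixed schema, queried with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE concept (id INTEGER, name TEXT, parent TEXT)")
db.execute("INSERT INTO concept VALUES (:id, :name, :parent)", record)
row = db.execute("SELECT name, parent FROM concept WHERE id = 1").fetchone()

# Document ("NoSQL"-style): schema-free JSON, looked up by key.
store = {record["id"]: json.dumps(record)}
doc = json.loads(store[1])

# Same data either way.
assert row == (doc["name"], doc["parent"])
```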
Igor
On Mon, 30 Dec 2024 at 19:14, Mike Peters <mike(a)redworks.co.nz> wrote:
>
> Hi David
>
> Great question.
>
> Ontologies are represented in graph structures. Non-relational databases like semantic or graph databases are better suited for this job, and ontologists (I'm not one) have no problem working with them.
>
> However, the workforce needs conventional everyday interfaces driven by relational databases. So, there is an import/export issue that David Hay could have written a book about, and I wish he had. His explanations are excellent. His book on UML and data modelling also bridged two different ways of looking at the world.
>
> Mike
>
>
> On Tuesday, 31 December 2024 at 07:32:15 UTC+13 deddy wrote:
>>
>> Mike -
>>
>> >
>> > pity he never wrote a book on data modelling ontologies.
>> >
>>
>> The distinction / difference between data models & ontologies is what...?
>>
>> ______________________
>> David Eddy
>>
>>
>> > -------Original Message-------
>> > From: Mike Peters <mi...(a)redworks.co.nz>
>> > To: ontolo...(a)googlegroups.com <ontolo...(a)googlegroups.com>
>> > Subject: Re: [External] Re: [ontolog-forum] Re: Design Pattern Ontology
>> > Sent: Dec 30 '24 13:19
>> >
>> > Hi Bill
>> >
>> > I agree; those are excellent books. Their work or influence is the
>> > basis for every relational database I have built.
>> >
>> > David Hay also wrote this one:
>> >
>> > Hay, D. C. (2011). UML and Data Modeling: A Reconciliation, Technics
>> > Publications.
>> >
>> > I found it very helpful. It's a pity he never wrote a book on data
>> > modelling ontologies.
>> >
>> > Mike Peters
>> > -----------------------------------
>> > Ajabbi
>> >
>> > PO Box 902
>> > Invercargill 9840
>> > New Zealand
>> >
>> > M +64 22 600 5006
>> >
>> > Skype redworksnz
>> > Email mi...(a)redworks.co.nz
>> > Facebook www.facebook.com/NZMikePeters
>> >
>> > Home www.mtchocolate.com
>> >
>> > Art Studio www.redworks.co.nz
>> > Software Architecture www.blog.ajabbi.com
>> >
>> > ------------------------------------------
>> >
>> > On Tue, 31 Dec 2024 at 06:14, 'Burkett, William [USA]' via
>> > ontolog-forum <ontolo...(a)googlegroups.com> wrote:
>> >
>> > > In the "data modelling world" (which, to me, is no different from
>> > > the "ontology world"), there are books by David Hay and Len
>> > > Silverston that are overtly focused on design patterns:
>> > >
>> > > Hay, D. C. (1996). Data model patterns : conventions of thought. New
>> > > York, Dorset House Pub.
>> > >
>> > > Hay, D. C. (2006). Data model patterns : a metadata map. Amsterdam ;
>> > > Boston, Elsevier Morgan Kaufmann.
>> > >
>> > > Silverston, L. (2009). The data model resource book, Vols 1-3. New
>> > > York, John Wiley.
>> > >
>> > > These books provide a catalog of reusable and adaptable patterns for
>> > > all kinds of concepts that recur in most data models/ontologies.
>> > >
>> > > Bill Burkett
On Sunday, CBS 60 Minutes presented a segment about Khanmigo, an AI tutor that is powered by LLM technology. It has shown some very impressive results, and the teachers who use it in their classes have found it very helpful. It doesn't replace teachers; it helps them by offloading routine testing and tutoring:
https://www.cbsnews.com/video/khanmigo-ai-tutor-60-minutes-video-2024-12-08/
As I have said many times, there are serious limitations to the LLM technology, which requires evaluation to avoid serious errors and disastrous hallucinations. Question: How can Khanmigo and related systems avoid those disasters?
I do not know the details of the Khanmigo implementation. But from the examples they showed, I suspect that they avoid mistakes by (1) starting with a large text that was written, tested, and verified by humans (possibly with some computer aid); (2) doing Q/A on each topic primarily by translation, since the LLM technology was first developed for translation and Q/A; and (3) relying on the fact that if the source text is tested and verified, a Q/A system based on that text can usually be very good and dependable.
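The grounding idea can be sketched hypothetically (Khanmigo's implementation is not public; the source text and the word-overlap retrieval rule below are invented for illustration): a Q/A step that can only quote sentences from a human-verified text, never free-generated prose.

```python
# Hypothetical sketch: answers are restricted to sentences drawn from a
# verified source text. Grounding in verified text limits what can go
# wrong, even if the retrieval step itself is crude.

VERIFIED_TEXT = (
    "Water boils at 100 degrees Celsius at sea level. "
    "The Earth orbits the Sun once per year."
)

def grounded_answer(question: str) -> str:
    sentences = [s.strip() for s in VERIFIED_TEXT.split(".") if s.strip()]
    q_words = set(question.lower().split())
    # Pick the verified sentence with the most word overlap; refuse to
    # answer rather than invent text when nothing overlaps.
    best = max(sentences, key=lambda s: len(q_words & set(s.lower().split())))
    if not q_words & set(best.lower().split()):
        return "I don't know."
    return best + "."

print(grounded_answer("At what temperature does water boil"))
```

A real system would use an LLM for the retrieval and rephrasing steps, but the principle is the same: the verified text, not the model's parameters, is the source of the answer.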
But the CBS program did show an example where the system made some mistakes.
Summary: This example shows great potential for the LLM technology. But it also shows the need for evaluation by the traditional AI symbolic methods. Those methods have been tried and tested for over 50 years, and they are just as important today as they ever were.
As a reminder: LLMs can be used with a large volume of sources to find information and to generate hypotheses. But if the source is very large and unverified for accuracy, it can and does find and generate erroneous or even dangerously false information. That is why traditional AI methods are essential for evaluating what they find in a large volume of source data.
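One hedged sketch of such symbolic evaluation (the fact base and the claims below are invented for illustration): statements generated by an LLM are accepted only if a verified fact base confirms them, and flagged otherwise.

```python
# Sketch: a symbolic layer that filters LLM output against a small
# verified fact base before anything reaches the user.

VERIFIED = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def evaluate(claims):
    # Keep only claims the symbolic layer can verify; flag the rest
    # for human review instead of presenting them as fact.
    accepted = [c for c in claims if c in VERIFIED]
    rejected = [c for c in claims if c not in VERIFIED]
    return accepted, rejected

llm_output = [
    ("Paris", "capital_of", "France"),      # correct
    ("Sydney", "capital_of", "Australia"),  # hallucination-style error
]
ok, flagged = evaluate(llm_output)
```

Real symbolic evaluation would use logic, rules, or a curated knowledge base rather than a literal set lookup, but the division of labor is the point: the LLM generates hypotheses, and a verifiable symbolic method decides what to trust.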
Danger: The larger the sources, the more likely it is that the LLMs will find bad data. Without evaluation, bigger is definitely not better. I am skeptical about attempts to create super-large volumes of LLM data. Those systems consume enormous amounts of electricity with a diminishing return on investment.
There is already a backlash by employees of Google and Elon M.
John
--------------------------------------------------
CALL FOR PAPERS
HHAI 2025 - Hybrid Human-Artificial Intelligence
https://hhai-conference.org/2025/
June 9–13, 2025, Pisa, Italy
--------------------------------------------------
Hybrid Human-Artificial Intelligence (HHAI) is an international conference series that focuses on the study of Artificial Intelligence systems that cooperate synergistically, proactively and purposefully with humans, amplifying instead of replacing human intelligence. HHAI aims for AI systems that work together with humans, emphasizing the need for adaptive, collaborative, responsible, interactive and human-centered intelligent systems. HHAI systems leverage human strengths and compensate for human weaknesses, while taking into account social, ethical and legal considerations.
HHAI 2025 will be held on June 9–13, 2025, in Pisa, Italy, and is the fourth conference in the series. The HHAI field is driven by developments in AI, but it also requires fundamentally new approaches and solutions. Thus, we encourage collaborations across research domains such as AI, HCI, cognitive and social sciences, philosophy and ethics, complex systems, and others. In this fourth international conference, we invite scholars from these fields to submit their best original – new as well as in progress – works, and visionary ideas on Hybrid Human-Artificial Intelligence.
**Join the HHAI community and keep up with the news:**
Website: https://hhai-conference.org/2025/
Linkedin: https://www.linkedin.com/company/hhai-conference/
X: https://x.com/hhai_conference
Mastodon: https://sigmoid.social/@hhai
IMPORTANT DATES
Abstract submission: January 17th, 2025
Paper submission: January 24th, 2025
Acceptance notification: March 16th, 2025
Camera-ready version: April 13th, 2025
Conference: June 9-13, 2025
LOCATION
HHAI 2025 will be an in-person, single-track conference organized in Pisa, Italy. Workshops and tutorials (9-10 June) will be held at the University of Pisa and Scuola Normale Superiore in Pisa, Italy. The main conference (11-13 June) will be held at CNR.
TOPICS
We invite research on different challenges in Hybrid Human-Artificial Intelligence. The following list of topics is illustrative, not exhaustive:
- Human-AI interaction, interpretation and collaboration
- Adaptive human-AI co-learning and co-creation
- Learning, reasoning and planning with humans and machines in the loop
- User modeling and personalisation
- Integration of learning and reasoning
- Transparent, explainable, and accountable AI
- Fair, ethical, responsible, and trustworthy AI
- Societal awareness of AI
- Multimodal machine perception of real-world settings
- Social signal processing
- Representation learning for Communicative or Collaborative AI
- Symbolic representations for human-centric AI
- Human-AI Coevolution
- Foundation models and humans
- Human cognition-aware AI
- Decentralized human-AI systems
- Reliability and robustness in human-AI systems
- Applications of hybrid human-AI intelligence
We welcome contributions about all types of technology, from robots and conversational agents to multi-agent systems and machine learning models.
PAPER TYPES
In this conference, we wish to stimulate the exchange of novel ideas and interdisciplinary perspectives. To do this, we will accept three different types of papers:
- Full papers present original, impactful work (12 pages excluding references)
- Blue sky papers present visionary ideas to stimulate the research community (8 pages excluding references)
- Working papers present work in progress (8 pages excluding references)
Accepted full papers and Blue sky papers will be published in the Proceedings of the Fourth International Conference on Hybrid Human-Machine Intelligence, in the Frontiers of AI series by IOS Press. Working papers can be included in these proceedings, unless the authors request the paper to remain unpublished.
REVIEWING PROCESS & SUBMISSION GUIDELINES
Submissions of full, blue sky, and working papers should be original work without substantial overlap with previously published papers. All submissions should adhere to the IOS formatting guidelines. Papers should be written in English; detailed submission instructions can be found on the conference website.
**Important**
HHAI 2025 will follow a double-blind reviewing process. Thus, submissions must exclude all information that might disclose the authors’ names or affiliations.
All studies involving human participants should have received human-research ethics consent from the relevant institutions and mention this in the paper.
Work should be submitted in PDF format via Easychair (link to be announced soon).
On acceptance, at least one author should attend the conference. A significant contribution is expected from all authors.
PROGRAM CHAIRS
Chiara Boldrini (IIT-CNR, IT)
Luca Pappalardo (ISTI-CNR, IT)
Andrea Passerini (University of Trento, IT)
Shenghui Wang (University of Twente, NL)
CONFERENCE CHAIRS
Michela Milano (University of Bologna, IT)
Dino Pedreschi (University of Pisa, IT)
Stuart Russell (University of California Berkeley, US)
Ilaria Tiddi (Vrije Universiteit Amsterdam, NL)
CONTACT INFORMATION
For questions, you can reach the program chairs at: program(a)hhai-conference.org