Hi Gerd / list.
Thanks for the updates!
I've cc'd the CG list, given the URL looks different from the one you used.
Simon
www.polovina.me.uk
-----Original Message-----
From: Gerd Stumme <stumme(a)cs.uni-kassel.de>
Sent: Wednesday, January 17, 2024 9:47 AM
To: fca-list(a)cs.uni-kassel.de; cg-list(a)cs.uni-kassel.de
Subject: [fca-list] Warning: Predatory conference on "FCA"
Dear all,
there is currently a call on the web for "ICFCA 2024: 18. International Conference on Formal Concept Analysis", organized by the "World Academy of Science, Engineering and Technology" [1] in April 2024 in New York.
Please note that this is a predatory conference! To avoid promoting this fake event further, I refrain from adding a link. Wikipedia describes the predatory nature of this organisation.
Please note that this year, ICFCA is joining with CLA and ICCS, as previously announced. The joint conference, CONCEPTS 2024, will be held in Cádiz, Spain, on September 9-13. Deadlines are March 6 for the journal track and March 18 for the proceedings track. Details can be found at https://concepts2024.uca.es/
Best regards,
Gerd
--
Prof. Dr. Gerd Stumme, Hertie Chair of Knowledge & Data Engineering & Research Center for Information System Design (ITeG) & International Center for Higher Education Research (INCHER), University of Kassel & Research Center L3S & The Hessian Center for Artificial Intelligence (hessian.AI)
In the Way of Inquiry • Discussion 1
• https://inquiryintoinquiry.com/2024/01/15/in-the-way-of-inquiry-discussion-…
Re: In the Way of Inquiry • Justification Trap
• https://inquiryintoinquiry.com/2023/01/10/in-the-way-of-inquiry-justificati…
Re: Academia.edu • Bhupinder Singh Anand
• https://www.academia.edu/community/5kQ3wL?c=vXZl8g
BSA:
❝Thanks for highlighting what I perceive as some challenging issues in
the foundations of what we seek to term as “Knowledge” and “Truth”. … ❞
Hi Bhupinder,
Just by way of venturing a few links between different schools of thought,
a very rough hint of the pragmatic approach to truth and knowledge can be
found in a fork of the Wikipedia article I helped write many years ago.
Pragmatic Theory Of Truth
• https://oeis.org/wiki/Pragmatic_Theory_Of_Truth
which begins as follows …
“Pragmatic theory of truth” refers to those accounts, definitions,
and theories of the concept “truth” distinguishing the philosophies
of pragmatism and pragmaticism. The conception of truth in question
varies along lines reflecting the influence of several thinkers,
initially and notably, Charles Sanders Peirce, William James, and
John Dewey, but a number of common features can be identified.
The most characteristic features are (1) a reliance on the “pragmatic
maxim” as a means of clarifying the meanings of difficult concepts,
“truth” in particular, and (2) an emphasis on the fact that the product
variously branded as belief, certainty, knowledge, or truth is the result
of a process, namely, “inquiry”.
Document History
• https://oeis.org/wiki/Pragmatic_Theory_Of_Truth#Document_history
Regards,
Jon
cc: https://www.academia.edu/community/VXpQ9V
cc: https://mathstodon.xyz/@Inquiry/111761975504300824
I came across more articles that explain the limitations of LLMs.
The first is a detailed article about the inability of LLMs to generalize beyond the content stored in their huge volumes of training data, data whose creation and use consume an immense amount of energy and contribute to global warming. Since the authors work for Google DeepMind, they cannot be considered prejudiced against LLM technology. The issues are complex, and the authors explain how some generalization is possible. But the generality of a 1960-style theorem prover cannot be achieved with the latest and greatest LLMs: https://arxiv.org/pdf/2311.00871.pdf
The second is a standard for HSTP: IEEE P2874, the Spatial Web Protocol, Architecture and Governance Standard. SWF and IEEE SA are collaborating on a socio-technical governance and system standard for the Spatial Web. The collaboration meets the rigorous requirements to become a globally adopted IEEE standard, informed by IEEE's Ethically-Aligned Design P7000 Series and focused on Autonomous Intelligent Systems. The formation of the IEEE P2874 Spatial Web Working Group was deemed a "public imperative."
Important point: Vastly more information -- in kinds, amount, and precision -- can be represented spatially than linguistically. Anything that can be observed by any animal eye or any kind of instrument can be encoded, transmitted, transformed, and regenerated anywhere by any kind of computer or other device.
Peirce's goal of generalizing EGs to represent three-dimensional moving images can now be achieved by the HSTP protocol. This representation might not be humanly attractive, but it could be transformed to and from images of any kind -- including whatever Peirce might have invented or hoped to invent.
The third explains the implications of Friston's work on active inference: Unlocking the Future of AI: Active Inference vs. LLMs by Denise Holt, https://medium.com/aimonks/unlocking-the-future-of-ai-active-inference-vs-l…
Denise H. apparently works for Verses AI, and it's possible that the company might not achieve everything they hope for on the schedule they hope for. But the ideas are sound, and I'm sure that somebody will implement something along these lines in a few more years. When they do, LLMs will still be useful for representing the user interface. See below for some implications. But the reasoning methods will be put on a sound basis, and the immense volumes of data stored in LLMs will be irrelevant.
As for AGI -- the short answer is "piffle".
Happy New Year,
John
_________________________
The rise of Large Language Models (LLMs) like OpenAI’s ChatGPT has stirred endless excitement and curiosity about the capabilities of Artificial Intelligence. These systems have the remarkable ability to generate human-like text and engage in diverse conversations, fueling expectations for AI’s future. However, as impressive as LLMs are, they have inherent limitations when compared to this new revolutionary approach to artificial intelligence known as Active Inference. Let’s dive into the fundamental differences between LLMs and Active Inference and why the latter is positioned to emerge as a vanguard of the future of AI.
Limitations of LLMs: Content Creation vs. Real-World Operations
LLMs are powered by deep learning on massive datasets, allowing them to recognize linguistic patterns attuned to various subject matter and generate outputs that seem coherent. However, this statistical pattern matching does not equate to true intelligence or understanding of the world. LLMs fall short in several critical aspects:
1. Contextual Awareness: LLMs lack the ability to actively perceive or reason about real-world situations as they unfold. Their operation solely depends on the data they were trained on, devoid of real-time sensory input.
2. Explainability: Understanding the decision-making processes of LLMs is an elusive challenge. Their outputs are essentially probabilistic guesses, even if fluently phrased.
3. Grounding in Reality: They hallucinate or fabricate responses outside their training distribution, unconstrained by real world knowledge, blurring the line between fact and fabrication.
4. Ability to Take Action: LLMs cannot act on their environment or test hypotheses through exploring the world. They are passive systems.
These deficiencies make LLMs poorly suited for most real-world applications, especially those requiring nuanced situational understanding or the ability to operate autonomously in dynamic physical environments. Their strengths lie more in generating content, ideas, and prose based on recognizing patterns in immense datasets.
In fact, a recent paper by Google provides evidence that transformers (GPT, etc) are unable to generalize beyond their training data. “We find strong evidence that the model can perform model selection among pre-trained function classes during in-context learning at a little extra statistical cost, but limited evidence that the models’ in-context learning behavior is capable of generalizing beyond their pre-training data.”
The Paradigm Shift — Active Inference: The Future of AI
Active Inference, based on the Free Energy Principle developed by Dr. Karl J. Friston, world-renowned neuroscientist and Chief Scientist at VERSES AI, represents a paradigm shift in AI.
Active Inference AI is modeled after how the human brain and biological systems work. Through this method, an Intelligent Agent is able to continuously sense its environment, take action in real-time based on that sensory input, and update its internal model of the world — just like humans do. This sets it apart from other AI approaches that are static and cannot adapt and evolve in real-time.
Active Inference has the capability to start off at a basic level and rapidly evolve in intelligence over time, similar to how a human child develops. This positions it to continuously improve and adapt as it accumulates more experiences and knowledge, making it far more advanced than current AI.
Active Inference also possesses two unique capabilities. First, human laws and guidelines can be programmed into these systems, and they will abide by them in real time. Second, these autonomous systems are capable of introspection: they can report on their own processing and decisions, making them completely auditable. This gives them an unparalleled advantage in being able to grow and evolve in collaboration and cooperation alongside humans.
In contrast to deep learning, active inference is founded on principles of embodied cognition and Bayesian inference, delivering several key attributes:
Active Inference as Embodied AI:
> Sensory Integration and Real-Time Interaction: Active inference AI mimics human abilities to sense, perceive, and interact with the world in real time. It can see, hear, touch, and respond to environmental stimuli, similar to human sensory processing.
> World Modeling and Decision Making: This AI continuously updates its world model, akin to how humans learn and adapt. This evolving understanding allows it to engage in complex decision-making and problem-solving tasks.
> Planetary Management and Support: Active inference AI’s capabilities extend to managing large-scale systems like climate, biodiversity, and energy flows. It is envisioned to support and protect every individual and living entity on the planet, much like a global caretaker.
Unique Advantages and Benefits:
> Adaptability and Evolution: Unlike traditional AI, Active Inference AI evolves continually. It’s self-evolving, self-organizing, and self-optimizing, aligning with the concept of autopoiesis — a system capable of reproducing and maintaining itself.
> Comprehensive Application Spectrum: The AI’s capability extends far beyond current AI applications. It can address real-time updating and adaptable scenarios, making it ideal for complex tasks like running supply chains or smart cities.
> Holistic Integration: The technology is seen as bringing the planet to life, integrating the digital and physical worlds, and transforming the planet into a ‘digital organism’. This holistic approach signifies a major leap in AI capabilities.
This manifests in a new type of AI that is capable of:
· Perpetual Learning: Active inference agents continually update their beliefs by interacting with the world in real-time, integrating new observations into their internal model of how the world works.
· Contextual Awareness: By gathering multisensory input, agents build a dynamic understanding of unfolding situations, enabling complex reasoning and planning.
· Embodiment: Agents learn faster and perform better when they are embodied in simulated or physical forms, allowing them to test hypotheses through action.
· Explainability: An agent’s beliefs, desires, and decision-making processes are transparent and grounded in its observations and prior knowledge.
· Flexible Cognition: Active Inference agents can seamlessly transfer knowledge across diverse contexts and challenges, similar to human adaptability.
These attributes make Active Inference a groundbreaking approach to AI, but what takes it beyond the horizon is the integration with the universal network of the Spatial Web.
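As a concrete, heavily simplified illustration of the perception-action loop described above, here is a minimal sketch of a discrete active-inference-style agent: it keeps a Bayesian belief over a hidden state, updates that belief from each observation, and chooses the action whose predicted observations are least surprising. This is an illustrative toy example with made-up model matrices; it assumes nothing about VERSES' actual implementation, and it uses only the entropy part of the full expected free energy.

```python
import numpy as np

# Toy active-inference-style loop over a hidden binary state.
# The generative model below is hypothetical, chosen only for illustration:
#   likelihood[s, o]  = p(o | s)        observation model
#   transition[a][s, s'] = p(s' | s, a) state transitions under each action
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7]])
transition = {0: np.array([[0.9, 0.1],
                           [0.1, 0.9]]),   # action 0: states tend to persist
              1: np.array([[0.5, 0.5],
                           [0.5, 0.5]])}   # action 1: states get scrambled

belief = np.array([0.5, 0.5])              # prior belief over the hidden state

def update_belief(belief, obs):
    """Perception: Bayesian update of the belief from an observation."""
    posterior = belief * likelihood[:, obs]
    return posterior / posterior.sum()

def expected_surprise(belief, action):
    """Crude stand-in for expected free energy: entropy of the predicted observation."""
    next_state = transition[action].T @ belief     # predicted p(s') under this action
    pred_obs = likelihood.T @ next_state           # predicted p(o')
    return -(pred_obs * np.log(pred_obs)).sum()

for obs in [0, 0, 1, 1]:                           # an arbitrary stream of observations
    belief = update_belief(belief, obs)            # update the internal world model
    action = min((0, 1), key=lambda a: expected_surprise(belief, a))
    belief = transition[action].T @ belief         # roll the model forward under the action
    print(f"obs={obs}  action={action}  belief={np.round(belief, 2)}")
```

In a full treatment, action selection would also weigh how informative and how preferred the predicted observations are, but the loop structure (sense, update belief, act, repeat) is the point of the sketch.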
The Spatial Web: The Framework for Distributed and Multi-scale Intelligence
Active Inference’s potential is amplified when it converges with the Spatial Web Protocol, HSTP (Hyperspace Transaction Protocol) and HSML (Hyperspace Modeling Language). This convergence unleashes a distributed form of Active Inference and fosters collective intelligence among multitudes of Intelligent Agents.
HSTP: The Backbone of Distributed Active Inference
At the core of this integration is HSTP, the Hyperspace Transaction Protocol, serving as the digital nervous system. It enables seamless communication and data exchange among various technologies, sensors, machines, and Intelligent Agents, unifying them on a common network, creating a dynamic, real-time ecosystem. HSTP captures real-time data from multiple sources, forging a comprehensive contextual understanding of any given situation, much like the human nervous system.
HSML: The Lingua Franca of the Spatial Web
HSML, the Hyperspace Modeling Language, plays a pivotal role in this orchestration. It’s not just a language; it’s the bridge between diverse technologies and agents in the Spatial Web. By programming context into the digital twin spaces of every element in the world, HSML creates a unifying layer that resonates across the entire network. HSML acts as the translator in a multilingual conversation, ensuring every technology, sensor, machine, and Intelligent Agent can comprehend and communicate effectively.
The World Model: A Living, Breathing Entity
This ingenious amalgamation of Active Inference with HSTP and HSML gives birth to a comprehensive “world model,” providing an evolutionary pathway to AGI and beyond. This world model is not a static representation; it’s a living, breathing entity that continuously adapts and evolves in real time. It’s the digital twin of the real world, reflecting the ever-changing dynamics, interactions, and complexities of the environment it models. With HSTP and HSML, the world model acquires a depth of understanding that transcends the capabilities of traditional AI models. It gains the ability to perceive, reason, and adapt to real-time events with unparalleled accuracy. It becomes the foundation for all perception, decision-making, and action within the AI system.
In essence, smart cities will function like a digital organism, paralleling the human body’s brain and nervous system. They utilize a dynamic, holographic world model, constantly updated by real-time sensor data, analogous to sensory neurons. This model represents various aspects of the city, climate, or other environments and is linked to the Internet of Things (IoT), which includes drones, robots, and automated systems. These elements act like motor neurons, executing changes in the physical world. The sensory network then updates the world model based on these changes, creating a feedback loop.
This process depicts the evolution of intelligence, where the system continuously refines its understanding and interaction with the world, reducing surprises through accurate inferences from the current world model. The Free Energy Principle provides the mathematical framework for this evolutionary process. While Natural Selection explains the evolution of physical bodies in animals, the evolution of world models can be considered a branch of Memetics — similar to Genetics — but not biologically based. The Free Energy Principle provides us with the Mathematics of Evolution of Intelligence in these complex, interconnected systems.
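For readers who want the formula behind that phrase, the variational free energy which active-inference agents are said to minimize is standardly written as follows. This is the textbook form from the free-energy-principle literature, not a formula given in the article; o stands for observations, s for hidden states, p for the agent's generative model, and q(s) for its approximate posterior belief.

\[
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; D_{\mathrm{KL}}\!\big[\,q(s)\,\|\,p(s \mid o)\,\big] \;-\; \ln p(o)
  \;\ge\; -\ln p(o).
\]

Minimizing F by adjusting the belief q(s) shrinks the KL term (perception), while acting so that future observations are well predicted by the model keeps the surprise term -ln p(o) small (action); that is the "reducing surprises through accurate inferences" described above.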
AI that is Capable of Governance
The explainability of Active Inference AI (its capacity for self-introspection and self-reporting), coupled with the use of HSML, through which human laws can be made programmable so that AI can understand and comply with them in real time, offers a path toward the development and implementation of trustworthy and governable AI systems.
In July 2023, VERSES AI, along with Dentons Law Firm and the Spatial Web Foundation, published a groundbreaking industry report titled The Future of Global AI Governance, establishing frameworks for ethical, robust, and lawful AI. These frameworks refer to successful proofs of concept using Active Inference AI and the Spatial Web Protocol — HSTP and HSML — that demonstrated remarkable success in programming AI to comply with human laws and in integrating various AIs into a larger, governable network. The report affirms the importance of regulation standards and proposes categorizing AI into different levels of intelligence and capability so that appropriate governance methods can be applied. The overarching theme is to use these technologies to govern the autonomous systems themselves, ensuring responsible and effective governance of AI for safety, privacy, and ethical use.
The act of correcting a machine occurs within the code, and through these technological breakthroughs, VERSES has developed a way to do that.
Real-World Impact: Active Inference’s Applications
The distinct capabilities of Active Inference find applications across a myriad of domains where situational awareness, adaptability, and autonomy are paramount:
· Robotics: Enabling control of autonomous robots and vehicles operating in dynamic real-world environments, from factories to homes and automobiles to drones.
· Logistics: Optimizing delivery drones and coordinating swarms for safe and efficient navigation.
· Healthcare: Personalized care through smart beds, wearables, and assistive robots for patient monitoring.
· Smart Cities: Managing critical systems, including hospitals, airports, supply chains, traffic flows, public services, and infrastructure, through distributed networks.
· Finance: Detecting fraud, risk, and anomalies in transactions in real-time.
· Scientific Discovery: Streamlining processes such as materials development, drug discovery, and particle physics.
In contrast to current LLMs, which remain narrow, passive, and bound by static training data, Active Inference represents a developmental approach to AI, characterized by embodiment, context, and dynamism. While LLMs excel in content creation, Active Inference points toward the next evolution of AI, one that can grapple with the complexities of the real world. As the development of this technology matures, it promises to revolutionize AI integration across domains, from robotics to finance and scientific discovery.
A New Era of AI
In the realm of AI, where understanding, perception, and adaptability are paramount, the convergence of Active Inference with the Spatial Web Protocol (HSTP and HSML) is the catalyst of transformation. It ushers in a new era where AI transcends boundaries and limitations, where it possesses the contextual world model essential for true intelligence and the cultivation and expansion of universal knowledge.
Words vs. the World
Large Language Models have captured the imagination of the public regarding AI’s potential, but they face significant limitations compared to the emerging implementation of Active Inference. Active Inference’s continuous interaction and learning model lead to more flexible, context-aware intelligence. Active Inference agents excel at understanding nuanced situations, adapting to new environments, and operating autonomously in the real world. While LLMs are ideal for content generation, Active Inference paves the way for AI to navigate the complexities of real-world challenges. As this new paradigm advances, it will drive innovation and automation across an array of fields, transforming the landscape of AI applications.
Differential Propositional Calculus • 15
• https://inquiryintoinquiry.com/2023/12/04/differential-propositional-calcul…
Fire over water:
The image of the condition before transition.
Thus the superior man is careful
In the differentiation of things,
So that each finds its place.
— I Ching ䷿ Hexagram 64
Differential Extension of Propositional Calculus —
This much preparation is enough to begin introducing my
subject, if I excuse myself from giving full arguments
for my definitional choices until a later stage.
To express the goal in a turn of phrase, the aim is to
develop a differential theory of qualitative equations,
one which can parallel the application of differential
geometry to dynamical systems. The idea of a tangent
vector is key to the work and a major goal is to find
the right logical analogues of tangent spaces, bundles,
and functors. The strategy taken is to look for the
simplest versions of those constructions which can be
discovered within the realm of propositional calculus,
so long as they serve to fill out the general theme.
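To give a small concrete taste of what a differential theory of qualitative equations can look like, here is an illustrative computation of my own. The operators E and D below follow the usual conventions of differential logic, which the excerpt above does not define: the enlargement Ef evaluates a proposition f at a "displaced" point, and the difference Df records how f changes under that displacement, playing a role analogous to a tangent-vector datum.

```python
# Illustrative sketch (not from the text above): the enlargement operator E and
# the difference operator D of differential propositional calculus, applied to
# the conjunction f(p, q) = p AND q.  The differential variables dp, dq say
# whether p, q change, and XOR (^) serves as addition in the Boolean ring.

from itertools import product

def f(p, q):
    return p & q

def Ef(p, q, dp, dq):
    # Enlarged proposition: f evaluated at the displaced point (p + dp, q + dq).
    return f(p ^ dp, q ^ dq)

def Df(p, q, dp, dq):
    # Difference of f: how the value of f changes under the displacement (dp, dq).
    return Ef(p, q, dp, dq) ^ f(p, q)

# Tabulate Df: e.g. at (p, q) = (1, 1), moving only q (dp=0, dq=1) changes f.
for p, q, dp, dq in product((0, 1), repeat=4):
    print(f"p={p} q={q} dp={dp} dq={dq}  Df={Df(p, q, dp, dq)}")
```

Roughly speaking, the set of displacements (dp, dq) at a given point where Df = 1 records the directions in which f is changing, which is the kind of structure the tangent-space analogues mentioned above are meant to organize.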
Reference —
Wilhelm, R., and Baynes, C.F. (trans.), The I Ching,
or Book of Changes, Foreword by C.G. Jung, Preface
by H. Wilhelm, 3rd edition, Bollingen Series XIX,
Princeton University Press, Princeton, NJ, 1967.
Resources —
Differential Logic and Dynamic Systems
• https://oeis.org/wiki/Differential_Logic_and_Dynamic_Systems_%E2%80%A2_Part…
Differential Extension of Propositional Calculus
• https://oeis.org/wiki/Differential_Logic_and_Dynamic_Systems_%E2%80%A2_Part…
Regards,
Jon
cc: https://www.academia.edu/community/l7pvk5
But a Turing machine that is connected to the WWW is an oracle machine:
MN> Yes, Turing describes NON-algorithmic machines (like the oracle machine—the o-machine as he called it)—but so far we are stuck in the algorithmic.
The following article has a good readable historical development of the issues. (The word 'readable' means that if you had studied some of these topics many years ago and forgot almost everything, the article has enough clear discussion that you don't need to do any further studying elsewhere. If you remember a little more, you can flip through quite fast. That is not something you can say about most publications on these topics.)
Turing Oracle Machines, Online Computing, and Three Displacements in Computability Theory
Robert I. Soare, http://www.people.cs.uchicago.edu/~soare/History/turing.pdf
Following is the final paragraph on p. 60:
Conclusion 14.4. For pedagogical reasons with beginning students it is
reasonable to first present Turing a-machines and ordinary computability.
However, any introductory computability book should then present as soon
as possible Turing oracle machines (o-machines) and relative computability.
Parallels should be drawn with offline and online computing in the real world.
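To make the distinction concrete, here is a toy sketch (illustrative only, not from Soare's paper or Turing's): an a-machine computes purely from its input, while an o-machine may pause to query an external oracle, and that oracle could be anything outside the machine, e.g. a lookup on the WWW, as suggested above.

```python
# Toy contrast between an a-machine and an o-machine.  The oracle is modeled
# as a callback whose answers need not themselves be computable.

from typing import Callable

def a_machine(n: int) -> int:
    # Ordinary (offline) computation: output depends only on the input and the program.
    return n * n

def o_machine(n: int, oracle: Callable[[int], bool]) -> int:
    # Relative (online) computation: the course of the computation depends on
    # yes/no answers supplied by the external oracle.
    return n * n if oracle(n) else n + 1

# Usage with a stand-in oracle; a real oracle set need not be decidable.
print(a_machine(5))                               # 25
print(o_machine(5, oracle=lambda k: k % 2 == 1))  # 25, since 5 is odd
```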
John
----------------------------------------
From: "Nadin, Mihai" <nadin(a)utdallas.edu>
There is NO general intelligence—good for everything; rather concrete intelligence, as the context defines its characteristics.
I hope that these notes explain my invitation to my respected colleagues to read Hilbert’s challenge and Turing’s paper. Yes, Turing describes NON-algorithmic machines (like the oracle machine—the o-machine as he called it)—but so far we are stuck in the algorithmic.
Best wishes for a happy and healthy 2024
Mihai Nadin
I agree with Mihai Nadin "that AGI is yet another of those impossible to achieve tasks." I have repeatedly said that it won't be achieved in the 21st C, but I won't make any predictions about the 22nd. So far, nobody has produced the slightest shred of evidence for any kind of AGI any sooner. Best summary of the issues: "AGI is 30 years in the future, always was and always will be." There are still some diehards who claim that the prediction from the year 2000 will come to pass in the next 6 years, but the hopes for generative AI are already dying. -- But there are many useful applications for better natural language interfaces to all kinds of systems, not just AI.
Dan Brickley dug up some excellent references on predictive coding, and Karl Friston is one of the pioneers in the field (see below). A recent book (2022) from MIT Press co-authored by Friston covers the field: "Active Inference: The Free Energy Principle in Mind, Brain, and Behavior." Chapters of that book can be downloaded for free. Appendix C has an annotated example of the Matlab code.
I believe that this is the approach and the software techniques that Verses AI has adopted. I don't know how well Friston and his colleagues can develop this approach, but I strongly suspect that some of the co-authors and/or their colleagues and students will be working with them. However, practical applications always take more time and more investment than was predicted. (I worked at IBM R & D for 30 years, and I know the issues from close observation and participation.)
Ricardo Sanz: Friston's work is ok. Neuroscience, statistics and optimal control. Good ol' classic math. VERSES' narrative is classic bullshit. Not "breakthrough" bullshit; just classic bullshit. In my opinion, anthropocentrism, the intelligence=brain fallacy, and biomesmerization are the biggest roadblocks in the way to AGI.
Neuroscience is much broader than anthropomorphism. Living things from bacteria on up are far more successful in complex behavior than any of the latest and greatest driverless cars. Furthermore, very few of the people who have been working on generative AI know anything about neuroscience or the other branches of cognitive science. Therefore, none of the work in those fields could deter (or inspire) them. And it shows.
I won't defend the claims by Verses AI unless and until they come up with software that implements their promises. But I love their criticisms of generative AI. I can't see how anybody could claim that it's on a path toward AGI.
John
----------------------------------------
From: "Dan Brickley" <danbri(a)danbri.org>
For an implementation-oriented survey see https://github.com/BerenMillidge/Predictive_Coding_Papers and in general work under “predictive processing” and “predictive coding” banners
Also, this book has PDFs available:
https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Ene… and it also gets pretty specific, e.g. ch. 8 on the continuous-time dynamical systems representation; see
https://doi.org/10.7551/mitpress/12441.003.0012
Dan
After a bit of searching, I found more info about Verses AI and their new chief scientist. I like the approach they're taking: putting more emphasis on the natural thinking processes studied in neuroscience. And their new chief scientist has publications that would lead them in that direction. The ideas look good, and I would recommend them. But I don't know how far he and his colleagues have gone in implementing them, or how long it will take for anything along those lines to be running in a practical system.
However, it's unlikely that any company would hire somebody as chief scientist without a considerable amount of prior work. And I doubt that any company would make an announcement in a full-page ad in the New York Times unless they already had some kind of prototype.
Following is a list of theoretical publications by Karl Friston: https://www.fil.ion.ucl.ac.uk/~karl/#_Computational_neuroscience
None of them describe an implementation. But it's possible that he and his colleagues (and/or graduate students) have implemented something that Verses AI wanted.
And by the way, one reason why I like this approach is that it's related to methods that Peirce was suggesting. He is famous for his innovations in logic, but he also had many ideas about biosemiotics and reasoning methods in living things down to the level of insects and plants. He even mentioned possible aliens in outer space as agents that might continue research if humans didn't survive.
Although I don't know whether Verses AI will succeed with their plans, I believe that the direction they're taking is more promising than anything OpenAI or Google is doing. I believe that any design that ignores neuroscience is a dead end for AGI.
John
___________________
An excerpt from https://www.verses.ai/press-2/vers-karl-friston
“It is with great enthusiasm and excitement that we welcome Karl Friston to VERSES as our Chief Scientist,” said Gabriel René, Founder, and CEO of VERSES. “Dr. Friston’s breakthrough work in neuroscience and biologically-inspired AI, known as Active Inference, aligns beautifully with our vision and mission to enable a “smarter world” where AI powers the applications of the 21st century. As the originator of this principle, it is only fitting that Karl has a significant role in VERSES AI research and development all the way through their applied uses in product commercialization.”
Friston, who was ranked the #1 most influential neuroscientist in the world by Semantic Scholar in 2016, has had an illustrious and decorated scientific career. He became a Fellow of the Royal Society in 2006 and of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for his remarkable contributions to mathematical biology, and was elected as a member of EMBO in 2014 and the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and the Glass Brain Award from the Organization for Human Brain Mapping. He holds honorary doctorates from the universities of York, Zurich, Liège, and Radboud University.
“I am delighted and honored to join VERSES. I have seldom met such a friendly, focused, committed, and right-minded group of colleagues. On a personal note, my appointment as Chief Scientist is exactly the kind of dénouement of my academic career I had hoped for – a dénouement that marks the beginning of a new and exciting journey of discovery and enabling.” said Karl Friston.
Verses AI published an article in the NY Times that criticizes and debunks generative AI, and proposes an alternative. I agree with their criticism, but I don't know enough about the alternative to make any further comments. If anybody has difficulty reaching the following website, an excerpt without the graphics follows.
In any case, it confirms my basic point: the technology based on LLMs is valuable for many purposes, especially translations between and among languages, natural and artificial. But there is a huge amount of intelligent activity (by humans and other living things) that LLMs cannot perform. Google and others supplement LLMs with different technologies.
How much and what kind of other technology is needed remains an open question. The reference below is a suggestion.
John
_______________________
https://medium.com/aimonks/verses-ai-announces-agi-breakthrough-invokes-ope…
In an unprecedented move, alongside today’s announcement of a breakthrough revealing a new path to AGI based on ‘natural’ rather than ‘artificial’ intelligence, VERSES took out a full-page ad in the NY Times with an open letter to the Board of OpenAI appealing to their stated mission “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”
Specifically, the appeal addresses a clause in the OpenAI Board’s charter that states, in pursuit of their mission “to build artificial general intelligence (AGI) that is safe and benefits all of humanity,” their concerns about late-stage AGI becoming a “competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.”
What Happened?
VERSES has achieved an AGI breakthrough within their alternative path to AGI, which is Active Inference. And they are appealing to OpenAI “in the spirit of cooperation and in accordance with [their] charter.”
According to their press release today, “VERSES recently achieved a significant internal breakthrough in Active Inference that we believe addresses the tractability problem of probabilistic AI. This advancement enables the design and deployment of adaptive, real-time Active Inference agents at scale, matching and often surpassing the performance of state-of-the-art deep learning. These agents achieve superior performance using orders of magnitude less input data and are optimized for energy efficiency, specifically designed for intelligent computing on the edge, not just in the cloud.”
In a video published as part of the announcement today, titled “The Year in AI 2023,” VERSES takes a look at the incredible journey of AI acceleration over this past year and what it suggests about the current path from Artificial Narrow Intelligence (where we are now) to Artificial General Intelligence — AGI (the holy grail of AI automation)… noting that all of the major players in Deep Learning technology have publicly acknowledged throughout the course of 2023 that “another breakthrough” is needed to get to AGI. For many months now, there has been overwhelming consensus that machine learning/deep learning cannot achieve AGI. Sam Altman, Bill Gates, Yann LeCun, Gary Marcus, and many others have publicly stated so.
Just last month, Sam Altman declared at the Hawking Fellowship Award event at Cambridge University that “another breakthrough is needed” in response to a question asking if LLMs are capable of achieving AGI.
[See graphic in article]
Even more concerning are the potential dangers of proceeding in the direction of machine intelligence, as evidenced by the “Godfather of AI,” Geoffrey Hinton, creator of backpropagation and the deep learning method, withdrawing from Google early this year over his own concerns about the potential harm to humanity of continuing down the path he had dedicated half a century of his life to.
So What Are The Potential Dangers of Deep Learning Neural Nets?
The many problems that pose potential dangers if we continue down the current path of generative AI are compelling and quite serious.
· Black box problem
· Alignment problem
· Generalizability problem
· Hallucination problem
· Centralization problem — one corporation owning the AI
· Clean data problem
· Energy consumption problem
· Data update problem
· Financial viability problem
· Guardrail problem
· Copyright problem
All Current AI Stems from This ‘Artificial’ DeepMind Path
[see graphics and much more of this article]
. . .