In an unprecedented move, VERSES AI today announced a breakthrough revealing a new path to AGI based on 'natural' rather than 'artificial' intelligence, and took out a full-page ad in the NY Times with an open letter to the Board of OpenAI appealing to its stated mission "to build artificial general intelligence (AGI) that is safe and benefits all of humanity."

Specifically, the appeal invokes a clause in the OpenAI Board's charter which, in pursuit of that mission, acknowledges the risk of late-stage AGI becoming a "competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."
What Happened?
VERSES has achieved an AGI breakthrough along its alternative path to AGI, Active Inference, and is appealing to OpenAI "in the spirit of cooperation and in accordance with [their] charter."
According to their press release today, "VERSES recently achieved a significant internal breakthrough in Active Inference that we believe addresses the tractability problem of probabilistic AI. This advancement enables the design and deployment of adaptive, real-time Active Inference agents at scale, matching and often surpassing the performance of state-of-the-art deep learning. These agents achieve superior performance using orders of magnitude less input data and are optimized for energy efficiency, specifically designed for intelligent computing on the edge, not just in the cloud."
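For readers wondering what "Active Inference" refers to in practice, below is a minimal, illustrative Python sketch of the general idea behind the framework (this is not VERSES' implementation, and all model numbers are invented for demonstration): an agent keeps a probabilistic belief over hidden states of the world, updates that belief from observations with Bayes' rule, and picks the action expected to bring it closest to the observations it prefers.

```python
import numpy as np

# Toy Active Inference loop (illustrative only; not VERSES' code, numbers invented).
# Likelihood model: P(observation | hidden state); columns index hidden states.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Transition model: P(next state | current state) for each of two actions.
B = [np.array([[0.9, 0.1],
               [0.1, 0.9]]),   # action 0: mostly stay in the current state
     np.array([[0.1, 0.9],
               [0.9, 0.1]])]   # action 1: mostly switch states

preferred_obs = np.array([0.99, 0.01])  # the agent "wants" to see observation 0

def update_belief(prior, obs):
    """Perception: Bayes' rule over hidden states given an observed index."""
    posterior = A[obs] * prior
    return posterior / posterior.sum()

def expected_surprise(belief, action):
    """Score an action by how surprising its predicted observations would be,
    relative to the agent's preferences (a crude stand-in for expected free energy)."""
    predicted_states = B[action] @ belief
    predicted_obs = A @ predicted_states
    return -(predicted_obs * np.log(preferred_obs)).sum()

belief = update_belief(np.array([0.5, 0.5]), obs=1)                      # perceive
action = min(range(len(B)), key=lambda a: expected_surprise(belief, a))  # act
print("belief over states:", belief, "chosen action:", action)
```

The press release's claim is that VERSES' breakthrough makes this kind of probabilistic perceive-then-act reasoning tractable at scale; the sketch above only shows the two-state, two-action version of the idea.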
In a video published as part of today's announcement, titled "The Year in AI 2023," VERSES looks back at the incredible acceleration of AI over the past year and what it suggests about the current path from Artificial Narrow Intelligence (where we are now) to Artificial General Intelligence, or AGI (the holy grail of AI automation). The video notes that the major players in deep learning publicly acknowledged over the course of 2023 that "another breakthrough" is needed to get to AGI. For many months now there has been an overwhelming consensus that machine learning/deep learning cannot achieve AGI; Sam Altman, Bill Gates, Yann LeCun, Gary Marcus, and many others have publicly said so.

Just last month, Sam Altman declared at the Hawking Fellowship Award event at Cambridge University that "another breakthrough is needed" in response to a question asking whether LLMs are capable of achieving AGI.
[See graphic in article]

Even more concerning are the potential dangers of proceeding in the direction of machine intelligence, as evidenced by the "Godfather of AI," Geoffrey Hinton, a pioneer of backpropagation and the deep learning method, withdrawing from Google earlier this year over his own concerns about the potential harm to humanity from continuing down the path he had dedicated half a century of his life to.

So What Are The Potential Dangers of Deep Learning Neural Nets?
The problems behind these potential dangers of continuing down the current path of generative AI are many, and they are compelling and quite serious.
All Current AI Stems from This 'Artificial' DeepMind Path

· Black box problem
· Alignment problem
· Generalizability problem
· Hallucination problem
· Centralization problem — one corporation owning the AI
· Clean data problem
· Energy consumption problem
· Data update problem
· Financial viability problem
· Guardrail problem
· Copyright problem
[see graphics and much more of this article]. . .