Alex,

I read that web page you cited.  What Google calls "foundation models" I would call "mappings based on specialized ontologies".  They include three kinds: (1) text to image, (2) text to code, and (3) speech to text.

I believe they are making a serious mistake by using English text as their foundation.  The article I'm writing, which puts Peirce's diagrammatic reasoning at the center, is more general, flexible, and powerful.  It also avoids a huge number of complex issues that differ from one natural language to another -- and even worse, the words differ from one kind of application to another, even within the same language.

Thanks for citing that article.  I am now finishing the final Section 7 of my article, and Google's method gives me a clear target to shoot at.  I'm actually glad to see Google making that mistake, because it makes it easier to compete with them.

That diagram by Gartner puts foundation models at the top of the hype cycle.  That means they are about to plunge into the trough of disillusionment.  I would enjoy giving them a little push.

John

From: "Alex Shkotin" <alex.shkotin@gmail.com>

John,

I am talking about this part of Gartner's picture that you sent as an attachment.
I did not know that people working on AI technology have their own ideas about the term "foundation models" [1] (just one example).

Alex

[1] https://ai.google/discover/foundation-models/