In the copy of Section 7 from my recent article, I compared human methods of learning and reasoning, together with Peirce's theories, to recent developments in ChatGPT and LLMs.

Research shows that the learning methods of a human baby are superior to the best methods of LLMs.

John
____________________

This baby with a head camera helped teach AI how kids learn language

Human babies are far better at learning than even the very best large language models. To be able to write in passable English, ChatGPT had to be trained on massive data sets that contain millions upon millions of words. Children, on the other hand, have access to only a tiny fraction of that data, yet by age three they’re communicating in quite sophisticated ways.

A team of researchers at New York University wondered if AI could learn like a baby. What could an AI model do when given a far smaller data set—the sights and sounds experienced by a single child learning to talk?

A lot, it turns out. This work, published in Science, not only provides insights into how babies learn but could also lead to better AI models.


