AI models unlock the secrets of language learning

AI learns language like a child: A fascinating study unveils new insights


Imagine if an artificial intelligence (AI) model could learn language just like a child does—by seeing and hearing the world through their eyes and ears. Well, that's precisely what a group of scientists has done in a groundbreaking new study.

Using headcam video recordings, researchers trained an AI model to learn words and concepts from the experiences of a single child. These recordings, captured from the time the child was six months old until around their second birthday, covered only about one percent of the child's waking hours. Surprisingly, even this limited dataset proved sufficient for genuine language learning.

By studying the real language-learning process that children go through, the researchers aimed to shed light on the mysteries surrounding word acquisition. Do children rely on language-specific biases, innate knowledge, or just associative learning to grasp new words? These were the questions they sought to answer.

To develop the AI model, the scientists meticulously analyzed the child's learning process using weekly video footage from a head-mounted camera. Across roughly 60 hours of footage, they identified a staggering quarter of a million word instances, capturing the words spoken to the child and pairing them with the video frames of what the child saw at the time.
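For readers curious how that word-to-frame pairing can work in practice, here is a minimal Python sketch, assuming utterances and frames are simply matched by timestamp; the data classes and the `align` function are illustrative, not the study's actual pipeline.

```python
# Illustrative sketch (not the study's actual pipeline): pair each
# transcribed utterance with the video frame closest to its timestamp.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str        # transcribed speech directed at the child
    time_s: float    # when it was spoken, in seconds from video start

@dataclass
class Frame:
    image_path: str  # path to an extracted video frame
    time_s: float    # timestamp of the frame

def align(utterances: list[Utterance], frames: list[Frame]) -> list[tuple[str, str]]:
    """Pair every utterance with the temporally closest frame."""
    pairs = []
    for utt in utterances:
        nearest = min(frames, key=lambda f: abs(f.time_s - utt.time_s))
        pairs.append((utt.text, nearest.image_path))
    return pairs

# Example: "ball" spoken at t=12.3s gets paired with the frame nearest 12.3s.
```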

The video footage showcased the child engaging in various activities throughout their development, such as mealtimes, reading books, and playtime. This diversity allowed the researchers to train a multimodal neural network with two separate modules. One module processed single frames of the video, while the other module processed the transcribed speech directed at the child.
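As a rough illustration of what "two separate modules" means, here is a minimal PyTorch-style sketch with a small image encoder and a simple text encoder. The class names, layer sizes, and vocabulary size are assumptions for illustration only, not the network used in the study.

```python
# A minimal two-module sketch, assuming a small CNN for frames and an
# embedding-bag encoder for utterances (illustrative, not the study's model).
import torch
import torch.nn as nn

class VisionEncoder(nn.Module):
    """Maps a single video frame to a fixed-size embedding."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:  # (B, 3, H, W)
        return self.fc(self.conv(frames).flatten(1))           # (B, embed_dim)

class TextEncoder(nn.Module):
    """Maps a transcribed utterance (as token ids) to an embedding."""
    def __init__(self, vocab_size: int = 5000, embed_dim: int = 256):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:  # (B, T)
        return self.embed(token_ids)                              # (B, embed_dim)
```

Both modules map their input into the same embedding space, which is what lets a word and an image be compared directly in the next step.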

Using a technique called contrastive learning, which learns which images and words belong together by contrasting matched frame-and-utterance pairs with mismatched ones, the researchers combined and trained these modules jointly. The AI model was then able to link visual and linguistic cues, working out that certain words referred to objects visible in the child's view. This process enabled the gradual association of words with visuals, mimicking a child's language-learning journey.
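To make the idea concrete, here is a hedged sketch of a standard contrastive objective of the kind used in image-text models: matching frame-and-utterance pairs within a batch are pulled together, while mismatched pairs are pushed apart. The function below illustrates the general technique, not the exact loss used in the study.

```python
# Sketch of a CLIP-style contrastive loss over a batch of paired
# frame and utterance embeddings (illustrative assumption, not the
# study's exact objective).
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # Normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Similarity of every image to every utterance in the batch: (B, B).
    logits = image_emb @ text_emb.t() / temperature

    # The i-th image's true match is the i-th utterance, and vice versa.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image
    return (loss_i2t + loss_t2i) / 2
```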

Once the model was trained, it was put to the test. The researchers presented the model with a target word and four different image options, challenging it to select the image that matched the word. Astoundingly, the AI model learned a substantial number of words and concepts from the child's everyday experiences.
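Conceptually, that test can be sketched as follows: embed the target word and the four candidate images with the trained modules, then choose the image whose embedding lies closest to the word's. The helper below is an illustration under that assumption, not the researchers' evaluation code.

```python
# Four-alternative choice sketch: pick the candidate image whose
# embedding is most similar to the target word's embedding.
import torch
import torch.nn.functional as F

def pick_image(word_emb: torch.Tensor, candidate_embs: torch.Tensor) -> int:
    """Return the index (0-3) of the candidate image that best matches the word."""
    word_emb = F.normalize(word_emb, dim=-1)              # (D,)
    candidate_embs = F.normalize(candidate_embs, dim=-1)  # (4, D)
    scores = candidate_embs @ word_emb                    # cosine similarities, (4,)
    return int(scores.argmax())
```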

Notably, the model displayed an ability to generalize learned words to visuals that were different from those encountered during training. This observation mirrored the concept of generalization seen in children studied in laboratories.

This study opens up new avenues for understanding how children acquire language and provides fascinating insights into the potential of AI. By training AI models to learn language through a child's eyes and ears, scientists can unlock the secrets of language acquisition and further explore the intricate processes that shape our ability to communicate.
