Learning Sound and Meaning Jointly in Early Language Acquisition
Language acquisition — the process by which children break into their native language — is both an exciting scientific frontier and a critical applied issue. During acquisition, babies must learn the meanings of words and learn to recognize the same word across different pronunciations, all while ignoring irrelevant variation (e.g., “dog” is the same word even when spoken by two different people). These two aspects of language learning are each important problems, but they have typically been studied independently. Exciting new findings suggest, however, that studying them together may lead to advances. In our proposed project, we will launch a large-scale interdisciplinary investigation at the interface of phonological (sound) and semantic (meaning) learning. Our objective is to ask whether such an interaction is possible, how it is predicted to develop over time, and through what mechanisms it operates. The project will bring together experts in low-level speech processing at the Laboratoire de Science Cognitive et Psycholinguistique (LSCP) in Paris and experts in high-level semantic learning at the Stanford Language and Cognition Lab. It will integrate cutting-edge speech technology tools with computer vision algorithms to analyze data from recorded interactions between children and their parents.