Affiliation:
1. Yale Child Study Center, Yale University School of Medicine, New Haven, Connecticut, USA
2. Department of Psychology, Florida International University, Miami, Florida, USA
Abstract
Intersensory processing of social events (e.g., matching the sights and sounds of audiovisual speech) is a critical foundation for language development. Two recently developed protocols, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP), assess individual differences in intersensory processing at a sufficiently fine‐grained level for predicting developmental outcomes. Recent research using the MAAP demonstrates that 12‐month intersensory processing of face‐voice synchrony predicts language outcomes at 18 and 24 months, holding traditional predictors (parent language input, SES) constant. Here, we build on these findings by testing younger infants using the IPEP, a more comprehensive, fine‐grained index of intersensory processing. Using a longitudinal sample of 103 infants, we tested whether intersensory processing (speed, accuracy) of faces and voices at 3 and 6 months predicts language outcomes at 12, 18, 24, and 36 months, holding traditional predictors constant. Results demonstrate that intersensory processing of faces and voices at 6 months (but not 3 months) accounted for significant unique variance in language outcomes at 18, 24, and 36 months, beyond that of traditional predictors. Findings highlight the importance of intersensory processing of face‐voice synchrony as a foundation for language development as early as 6 months and reveal that individual differences assessed by the IPEP predict language outcomes even 2.5 years later.
Funder
National Institute of Child Health and Human Development
Subject
Developmental and Educational Psychology; Pediatrics, Perinatology and Child Health
Cited by
3 articles.