Affiliations:
1. Max Planck Institute for Psycholinguistics, Nijmegen
2. Donders Institute for Brain, Cognition and Behaviour, Nijmegen
Abstract
Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co‐speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co‐speech gestures in a common semantic representation.
Cited by: 1 article.