Abstract
The auditory system relies on both detailed and summary representations; when local acoustic details exceed system constraints, they are compacted into a set of average statistics, and a summary structure emerges. Such compression is pivotal for abstraction and sound-object recognition. Here, we assessed whether the computations underlying local and statistical representations of sounds could be distinguished at the neural level. A computational auditory model was employed to extract auditory statistics from natural sound textures (e.g., fire, wind, rain) and to generate synthetic exemplars in which local and statistical properties were controlled. Participants were passively exposed to auditory streams while their EEG was recorded. In distinct streams, we manipulated sound duration (short, medium, long) to vary the amount of acoustic information; short and long sounds were expected to engage local-feature and summary-statistics representations, respectively. The data revealed a clear dissociation. As predicted, in discriminations based on local information, compared to summary-based ones, auditory responses of greater magnitude were measured selectively for short sounds, while the opposite pattern emerged for longer sounds. Neural oscillations revealed that local-feature and summary-statistics representations rely on neural activity occurring at different temporal scales, faster (beta) and slower (theta-alpha), respectively. These dissociations in neural response emerged without explicit engagement in a discrimination task, strongly suggesting that such processing may be pre-attentive in nature.
Overall, this study demonstrates that the auditory system has developed a neural architecture that relies on distinct coding schemes to automatically discriminate changes in the auditory environment based on acoustic details and their summary representations.

Significance Statement
Prior to this study, it was unknown whether auditory discriminations based on local features or on statistical properties of sounds could be measured directly. Results show that the two auditory modes (local features and summary statistics) are pre-attentively attuned to the temporal resolution (high or low) at which a change has occurred. In line with the temporal resolutions of auditory statistics, faster or slower neural oscillations (temporal scales) code sound changes based on local or summary representations, respectively. These findings expand our knowledge of fundamental mechanisms underlying the function of the auditory system.
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.