Modeling Multisensory Integration
The different senses, such as vision, touch, and audition, often provide redundant information about our environment. For instance, the size of an object can be determined by both sight and touch. In this chapter, Loes C. J. van Dam, Cesare V. Parise, and Marc Ernst discuss the statistically optimal framework for combining redundant sensory information to maximize perceptual precision, the Maximum Likelihood Estimation (MLE) framework, and provide examples of how human performance approaches optimality. In the MLE framework, each cue is weighted according to its precision; that is, the more precise sensory estimate receives the higher weight when integration occurs. However, before integrating multisensory information, the perceptual system must determine whether the sensory signals correspond to the same object or event (the so-called correspondence problem). Current ideas on how the perceptual system solves the correspondence problem are presented within the same mathematical framework. Finally, the chapter briefly reviews the influence of learning and development on multisensory integration.
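Under the MLE scheme described above, each cue is weighted by its reliability (inverse variance), and the combined estimate is never less precise than the best single cue. The following Python sketch illustrates this rule for two cues under the standard assumption of independent Gaussian noise; the function name and the numerical values are illustrative and not taken from the chapter.

```python
# Minimal sketch of MLE cue combination for two redundant estimates
# (e.g., visual and haptic size), assuming independent Gaussian noise.
# Names and example values are illustrative, not from the chapter.

def mle_combine(est_a, var_a, est_b, var_b):
    """Combine two estimates by reliability (inverse-variance) weighting."""
    rel_a, rel_b = 1.0 / var_a, 1.0 / var_b   # reliabilities (inverse variances)
    w_a = rel_a / (rel_a + rel_b)             # weight of cue A
    w_b = 1.0 - w_a                           # weight of cue B
    combined_est = w_a * est_a + w_b * est_b  # reliability-weighted average
    combined_var = 1.0 / (rel_a + rel_b)      # always <= min(var_a, var_b)
    return combined_est, combined_var

# Example: a precise visual size estimate combined with a noisier haptic one.
size_hat, size_var = mle_combine(est_a=5.0, var_a=0.25, est_b=5.6, var_b=1.0)
print(f"combined estimate: {size_hat:.2f}, combined variance: {size_var:.2f}")
```

In this example the visual cue is four times more reliable than the haptic cue, so it receives a weight of 0.8, and the variance of the combined estimate (0.2) is lower than that of either cue alone, which is the signature prediction of optimal integration.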
Keywords: Perception, Redundant estimates, Multisensory integration, Maximum Likelihood Estimation, Correspondence problem, Learning