Abstract
Integrating information across the senses can enhance our ability to detect and classify stimuli in the environment. For example, auditory speech perception is substantially improved when the speaker's face is visible. In an fMRI study designed to investigate the neural mechanisms underlying these crossmodal behavioural gains, bimodal (audio-visual) speech was contrasted with both of its unimodal (auditory and visual) components. Significant response enhancements in auditory (BA 41/42) and visual (V5) cortices were detected during bimodal stimulation, and this effect was specific to semantically congruent crossmodal inputs. These data suggest that the perceptual improvements effected by synthesising matched multisensory inputs are realised by reciprocal amplification of signal intensity in the participating unimodal cortices.