Action planning and predictive coding when speaking (Wang et al., 2014).
Sensations resulting from a person's own actions are processed differently from sensations arising from external sources: self-generated sensations are suppressed. As we have discussed elsewhere, this reduced processing of one's own speech results from the comparison of incoming speech sounds with corollary discharges, which are generated in sensory regions by efference copies of the motor commands.

One of our current projects aims to study the neurobiological basis of abnormal experiences of the self, and we therefore selected this article for our literature session. Using EEG combined with anatomical MRI, the authors observe that activity in the inferior frontal gyrus during the 300 ms before speech onset is associated with suppressed processing of the spoken sounds in the auditory cortex around 100 ms after speech onset, reflected in the N1 component. The authors conclude that these findings indicate that an efference copy from speech-planning areas in the prefrontal cortex is transmitted to the auditory cortex, where processing of the anticipated speech sounds is suppressed.

The authors also analyse the P2 component. Observing no change in this potential during speech, they suggest that, although sensory processing is suppressed, as reflected in the N1, perceptual gaps may be filled in, as reflected in the absence of P2 suppression; this would explain the discrepancy between sensory suppression and preserved sensory experience. These findings, together with the coherence between the relevant brain regions before and during speech, provide insight into the complex interactions between action planning and the sensory processing of one's own speech.
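The comparison mechanism described above, in which an efference copy yields a prediction that is subtracted from the incoming sensory signal, can be illustrated with a minimal toy computation. All function names, parameters, and numbers below are illustrative assumptions for exposition, not quantities from the study:

```python
# Toy sketch of corollary-discharge suppression (illustrative only, not the
# authors' model): the residual response is the unpredicted part of the input.
def auditory_response(stimulus, corollary_discharge=0.0, gain=1.0):
    """Response magnitude after subtracting the predicted (corollary) signal."""
    prediction_error = stimulus - corollary_discharge
    return gain * abs(prediction_error)

# An externally generated sound carries no prediction, so the full signal drives
# the response; a self-generated sound preceded by an accurate efference copy
# leaves only a small residual (analogous to the attenuated N1 amplitude).
external_sound = auditory_response(1.0)
self_generated = auditory_response(1.0, corollary_discharge=0.9)
```

In this sketch, the more accurately the corollary discharge predicts the incoming sound, the smaller the residual response, which is the intuition behind the suppressed N1 to self-produced speech.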