
Understanding the rapidly developing building blocks of speech perception in infancy requires a close look at the auditory prerequisites for speech sound processing. Pioneering studies have demonstrated that hemispheric specializations for language processing are already present in early infancy. However, whether these computational asymmetries reflect linguistic attributes or are a consequence of basic temporal properties of the signal remains under debate. Several studies in adults link hemispheric specialization for certain aspects of speech perception to an asymmetry in cortical tuning and reveal that the auditory cortices are differentially sensitive to spectrotemporal features of speech.

To acquire the specifically human faculty of language, infants face the challenge of a complex auditory signal. Before the infant utters its first word, linguistic input is preferentially processed (Ramus et al., 2000; Vouloumanos and Werker, 2007). At birth, newborns are able to distinguish their native language from rhythmically different languages (Mehler et al., 1988; Kuhl, 2004) and show adult-like, preattentive discrimination of different vowels (Cheour-Luhtanen et al., 1995; Alho et al., 1998). They further differentiate between highly similar consonant–vowel syllables that vary in duration by only milliseconds (Molfese and Molfese, 1979; Bertoncini et al., 1987). Notably, an early deficit in differentiating such rapid temporospectral variations is associated with specific language impairments in childhood (Benasich and Tallal, 2002).

Applying concurrent electrophysiological (EEG) and hemodynamic (near-infrared spectroscopy) recording to newborn infants listening to temporally structured nonspeech signals, we provide evidence that newborns differentially process nonlinguistic acoustic stimuli that share critical temporal features with language. The newborn brain preferentially processes temporal modulations that are especially relevant for phoneme perception. In line with multi-time-resolution accounts, modulations on the time scale of phonemes elicit strong bilateral cortical responses. Our data furthermore suggest that responses to slow acoustic modulations are lateralized to the right hemisphere. That is, the newborn auditory cortex is sensitive to the temporal structure of the auditory input and shows an emerging tendency toward functional asymmetry. Hence, our findings support the hypothesis that the development of speech perception is linked to basic capacities in auditory processing. From birth, the brain is tuned to critical temporal properties of linguistic signals, facilitating one of the major needs of humans: to communicate.
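For illustration, the sketch below shows one way nonspeech stimuli with language-like temporal structure could be constructed: broadband noise whose amplitude envelope is modulated either slowly (on a prosodic/syllabic time scale) or rapidly (on a phonemic time scale). The modulation rates, sampling rate, and the modulated_noise helper are illustrative assumptions chosen for exposition, not the stimulus parameters used in the study.

```python
import numpy as np

def modulated_noise(duration_s: float, mod_rate_hz: float, fs: int = 16000,
                    seed: int = 0) -> np.ndarray:
    """Broadband noise whose amplitude envelope varies at mod_rate_hz.

    Slow rates (a few Hz) mimic the prosodic/syllabic time scale of speech;
    fast rates (tens of Hz) approximate the phonemic time scale.
    (Illustrative construction only; not the stimuli used in the study.)
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)                          # nonspeech broadband carrier
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_rate_hz * t))   # envelope varying between 0 and 1
    stimulus = carrier * envelope
    return stimulus / np.max(np.abs(stimulus))                     # normalize to +/-1

# Assumed, illustrative rates: ~3 Hz for slow (prosody-like) modulation,
# ~40 Hz for fast (phoneme-scale) modulation.
slow_stim = modulated_noise(duration_s=2.0, mod_rate_hz=3.0)
fast_stim = modulated_noise(duration_s=2.0, mod_rate_hz=40.0)
print(slow_stim.shape, fast_stim.shape)
```

Only the time scale of the amplitude envelope differs between the two stimuli; this is the kind of purely temporal contrast, free of linguistic content, that multi-time-resolution accounts emphasize.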
