Objectives
To review recent research studies concerning the importance of high-frequency amplification for speech perception in adults and children with hearing loss and to provide preliminary data on the phonological development of normal-hearing and hearing-impaired infants.
Design and Setting
With the exception of preliminary data from a longitudinal study of phonological development, all of the reviewed studies were taken from the archival literature. To determine the course of phonological development in the first 4 years of life, the following 3 groups of children were recruited: 20 normal-hearing children, 12 hearing-impaired children identified and aided up to 12 months of age (early-ID group), and 4 hearing-impaired children identified after 12 months of age (late-ID group). Children were videotaped in 30-minute sessions at 6- to 8-week intervals from 4 to 36 months of age (or shortly after identification of hearing loss) and at 2- and 6-month intervals thereafter. Broad transcription of child vocalizations, babble, and words was conducted using the International Phonetic Alphabet. A phoneme was judged acquired if it was produced 3 times in a 30-minute session.
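Where transcripts are available in machine-readable form, this acquisition criterion reduces to a simple count over a session's transcription. The following is a minimal sketch, assuming transcripts stored as lists of broad-IPA tokens per session; the function and data are hypothetical illustrations, not the tools used in this study.

```python
# Minimal sketch of the acquisition criterion used in this study:
# a phoneme is judged "acquired" if it is produced at least 3 times
# within a single 30-minute session. Data structures are hypothetical.
from collections import Counter

ACQUISITION_THRESHOLD = 3  # productions per 30-minute session

def acquired_phonemes(session_tokens):
    """Return the set of phonemes produced >= 3 times in one session.

    session_tokens: list of broad-IPA phoneme tokens transcribed from
    a single 30-minute videotaped session, eg ['m', 'a', 'm', 'a', 's'].
    """
    counts = Counter(session_tokens)
    return {p for p, n in counts.items() if n >= ACQUISITION_THRESHOLD}

# Example: /m/ and /a/ meet the criterion; /s/ (produced once) does not.
session = ['m', 'a', 'm', 'a', 'm', 'a', 'b', 's']
print(sorted(acquired_phonemes(session)))  # -> ['a', 'm']
```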
Subjects
Preliminary data are presented from the 20 normal-hearing children, 3 children from the early-ID group, and 2 children from the late-ID group.
Results
Compared with the normal-hearing group, the 3 children from the early-ID group showed marked delays in the acquisition of all phonemes. The delay was shortest for vowels and longest for fricatives. Delays for the 2 children from the late-ID group were substantially longer.
Conclusions
The reviewed studies and preliminary results from our longitudinal study suggest that (1) hearing-aid studies with adult subjects should not be used to predict speech and language performance in infants and young children; (2) the bandwidth of current behind-the-ear hearing aids is inadequate to accurately represent the high-frequency sounds of speech, particularly for female speakers; and (3) the greatest delays in the phonological development of infants with hearing loss occur for fricatives, consistent with predictions based on hearing-aid bandwidth.
Although the effects of severe-to-profound hearing loss have been studied extensively, less is known about the influence of mild-to-moderate hearing loss on speech and language development in young children. Previous studies have suggested that even mild hearing loss can compromise communication abilities, academic performance, and psychosocial behavior.1-6 For example, although the speech of children with mild-to-moderate hearing loss was intelligible, Elfenbein et al4 reported that misarticulation of fricatives and affricates was common, particularly for children with 3-frequency pure-tone average thresholds greater than 45 dB HL (hearing level). In addition, significant delays in vocabulary development, verbal abilities, and reasoning skills2 and increased errors in noun and verb morphology (eg, cat vs cats, keep vs keeps) have been reported.4,7
These performance deficits have been attributed to factors such as reduced signal audibility, diminished opportunities for overhearing, and the limited bandwidth of current hearing aids. The latter factor is the focus of this report. As will be discussed later, this is particularly relevant because cochlear implants do not appear to operate under the same bandwidth constraints as hearing aids. As such, the goals of this report are to review the literature with respect to the contribution of high-frequency audibility to speech perception in adults and children with hearing loss and to provide preliminary data concerning phonological development in children with hearing loss.
Role of high-frequency audibility in speech perception: adult performance
Until recently, all of the studies in this area have been conducted with adults, and the findings have suggested that an increase in high-frequency gain may not improve, and in some cases actually may degrade, speech recognition for listeners with high-frequency hearing losses,8-13 presumably due to nonfunctional or dead regions in the cochlea.14 To date, however, there is no consensus on the magnitude of this effect in general or on the degree of hearing loss at which high-frequency amplification results in degradation in speech recognition.
Several studies have shown degradation in performance for some listeners as the bandwidth is widened,8,9 whereas others have found that increasing stimulus bandwidth results in no improvement—but no degradation—in performance.10,13 In addition, the degree of hearing loss at which loss of benefit or degradation is thought to occur has varied from a low of 55 dB HL9 to a high of 80 dB HL.8 Other investigators have shown improvements for some subjects (eg, flat hearing loss) or under certain conditions (eg, soft speech and fricatives).10,12,15 Recently, Hornsby and Ricketts16 found that listeners with flat sensorineural hearing losses are able to use high-frequency acoustic information in a manner similar to normal-hearing subjects.
From the existing data obtained from adults with hearing loss, it may be difficult to predict how high-frequency audibility influences speech perception and language development in children. In 2 studies,8,10 for example, the test materials consisted of sentences. When contextual information is available and language skills are well developed, even severe low-pass filtering is unlikely to degrade performance markedly. In other studies where nonsense syllables were used,9,13 however, only a relatively small number of test items contained high-frequency energy. As a result, one would not expect the overall scores to be influenced by a lack of high-frequency audibility.
Effect of stimulus bandwidth on fricative perception in children
Recently, Stelmachowicz and colleagues17 investigated the effect of stimulus bandwidth on the perception of /s/ in normal-hearing and hearing-impaired children and adults. Eighty subjects (20 per group) participated. Nonsense syllables containing the phonemes /s/, /f/, and /θ/, produced by a male, a female, and a child speaker, were low-pass filtered at 6 frequencies from 2 to 9 kHz. Frequency shaping was provided for the hearing-impaired subjects only. Figure 1 shows the percentage correct for /s/ as a function of low-pass frequency for the 3 speakers. For all speakers, both groups of children performed more poorly than their adult counterparts at similar bandwidths when performance was above chance (33%). Likewise, both hearing-impaired groups performed more poorly than their normal-hearing counterparts. More important, significant speaker effects and speaker × group effects were observed. For the male speaker, maximum performance was reached at a bandwidth of approximately 4 kHz for the normal-hearing adults but not until 5 kHz for the other 3 groups of subjects. A very different pattern is seen for the female speaker, for whom mean performance continued to improve out to a bandwidth of 9 kHz for all 4 groups. Moreover, at the upper bandwidths typical of current digital behind-the-ear hearing aids (4-5 kHz), performance was near chance. For the child speaker, performance increased gradually across the 6 bandwidths. These results are consistent with the acoustic energy in /s/ as spoken by these 3 speakers. Figure 2 shows the one-third-octave band spectra of /s/ used in this study. The solid line shows the male /s/, with a primary peak at 5 kHz. The dashed and dotted lines show the spectra of the child and female /s/, which continue to rise out to 9 kHz. The child /s/ also has more midfrequency energy, which may account for the differences observed in Figure 1. Given the bandwidth of current hearing aids, the peak energy of a female /s/ may not always be audible to hearing-aid users. Consequently, children with hearing loss may hear the plural form of words reasonably well when spoken by a man but inconsistently or not at all when spoken by a woman or another child; their exposure to /s/ is thus inconsistent across talkers, situations, and contexts.
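To make the bandwidth manipulation concrete, the sketch below low-pass filters a recorded syllable at a set of cutoffs spanning the 2- to 9-kHz range. It is illustrative only: the Butterworth filter, the intermediate cutoff values (3 and 6 kHz), and the file name are assumptions, not the actual stimuli or filtering used by Stelmachowicz and colleagues.17

```python
# Illustrative sketch of the bandwidth manipulation: zero-phase
# low-pass filtering of a recorded nonsense syllable at cutoffs
# spanning 2-9 kHz. The 8th-order Butterworth filter, the intermediate
# cutoffs, and 'syllable.wav' are assumptions; the recording must be
# sampled well above 18 kHz for the 9-kHz condition to be valid.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

CUTOFFS_HZ = [2000, 3000, 4000, 5000, 6000, 9000]

fs, x = wavfile.read('syllable.wav')      # eg, /asa/ from one speaker
x = x.astype(np.float64)

conditions = {}
for fc in CUTOFFS_HZ:
    sos = butter(8, fc, btype='low', fs=fs, output='sos')
    conditions[fc] = sosfiltfilt(sos, x)  # stimulus for this bandwidth
```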
These results led us to question how well young children with hearing loss might perceive the bound morphemes /s/ and /z/ (eg, cat vs cats, bug vs bugs) when listening through hearing aids.18 We developed a picture-based test of perception consisting of 20 easily identified nouns in plural or singular form. Test items (eg, "Show me ducks") were spoken by male and female speakers. Stimuli were presented in the sound field at 50 dB HL, and the child's task was to point to the correct item(s). Data were obtained from 40 children (aged 5-13 years) with a wide range of sensorineural hearing losses; all wore their personal hearing aids for this task. A repeated-measures analysis of variance revealed significant main effects for speaker and form. Figure 3 shows performance for the plural words as a function of age. For both speakers, there is considerable variability and no obvious age-related trend: some children aged 6 to 7 years achieved 100% performance, whereas some aged 10 to 11 years performed at chance. Mean performance for the female speaker was poorer than for the male speaker, and the variability across children was slightly greater for the female speaker. To determine what factors might explain these individual differences, a factor analysis was performed for each speaker. The variables were age at test, age at amplification, aided audibility of the fricative noise, and hearing level. The analysis revealed that midfrequency audibility (2-4 kHz) appeared to be most important for perception of the fricative noise for the male speaker, whereas a somewhat wider frequency range (2-8 kHz) was important for the female speaker. Unfortunately, the upper frequency limit of current behind-the-ear hearing aids is generally less than 5 kHz.

The results of this study may have important implications for speech and language development in young hearing-impaired children. Because infants and young children typically spend the majority of their time with female caregivers, the audibility of these important speech sounds may be inconsistent (ie, heard readily when the father is speaking and less often, or not at all, when the mother is speaking). What appears to be inconsistent usage by adults may delay the formation of linguistic rules (eg, understanding that "some" or "many" should be followed by a plural noun). This delay may, in turn, affect a child's ability to fill in the blanks in difficult listening situations. The problem is not restricted to children with severe-to-profound hearing loss; aided audibility of high-frequency speech sounds is problematic even for children with mild-to-moderate losses.
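The aided-audibility variable entered into the factor analysis above can be illustrated with a simple band-level computation: aided sensation level is the amount by which the aided level of the fricative noise exceeds the listener's threshold in each band. The sketch below is a hypothetical illustration with invented numbers, not the audibility index used in the study.18

```python
# Hypothetical sketch of an aided-audibility measure for a fricative:
# mean aided sensation level (aided band level minus threshold, floored
# at zero) across one-third-octave bands in a frequency range. All
# numbers are invented for illustration; levels are in comparable
# dB SPL units at the eardrum.
import numpy as np

band_centers = np.array([2000, 2500, 3150, 4000, 5000, 6300, 8000])  # Hz
aided_level  = np.array([48, 45, 40, 34, 25, 15, 5], dtype=float)
threshold    = np.array([35, 38, 40, 45, 50, 55, 60], dtype=float)

def mean_sensation_level(lo, hi):
    """Mean aided sensation level (dB) for bands with lo <= fc <= hi."""
    band = (band_centers >= lo) & (band_centers <= hi)
    sl = np.clip(aided_level[band] - threshold[band], 0, None)
    return sl.mean()

print(f'2-4 kHz audibility: {mean_sensation_level(2000, 4000):.1f} dB SL')
print(f'2-8 kHz audibility: {mean_sensation_level(2000, 8000):.1f} dB SL')
```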
Role of high-frequency audibility in speech production: self-monitoring
An additional concern regarding audibility of high-frequency speech cues is the ability of children with hearing loss to adequately monitor their own speech. Numerous factors influence self-monitoring, including vocal effort, degree and configuration of hearing loss, and hearing-aid characteristics. In addition, it has been shown that the acoustic characteristics of a speaker's own voice received at the ear differ from those measured in front of the mouth.19 Specifically, the spectrum at the ear contains greater energy below 1 kHz and less energy above 2 kHz than the spectrum in face-to-face conversation (0° azimuth). Although these differences may have little influence on the development of speech production in children with normal hearing, the reduction in high-frequency energy may limit the audibility of important high-frequency speech sounds for children with hearing loss. To quantify the magnitude of this effect, we evaluated the spectral characteristics of speech recorded simultaneously at the ear and at a reference position (30 cm in front of the mouth).20 Twenty adults (10 men and 10 women) and 26 children (aged 2-4 years) with normal hearing were asked to repeat 9 short sentences. Long-term average speech spectra were calculated for the sentences, and short-term spectra were calculated for selected phonemes within the sentences (/m/, /n/, /s/, /sh/, /f/, /a/, /u/, and /i/). Figure 4 shows the average spectra at the 2 microphone positions for each group. Relative to the reference position, the ear-level position shows higher amplitudes below 1 kHz and lower amplitudes above 2 kHz in all 3 groups. In addition, for frequencies of 2 kHz or greater, the overall level of the children's speech is approximately 5 to 6 dB lower than that of the adults, regardless of microphone location. To characterize the input available for self-monitoring, spectra for the 8 phonemes were calculated at the ear-level position. Relatively small differences were observed across the 3 groups for the vowels, the nasals, and the fricative /f/. However, as shown in Figure 5, the peak amplitudes of /s/ and /sh/ were substantially higher for the male and female speakers than for the child speakers. In addition, the women showed higher peak frequencies for /s/ and /sh/ (7.3 and 4.5 kHz, respectively) than the men (5.4 and 3.0 kHz, respectively). For the children, the peak frequency of /s/ occurred above 8 kHz.
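A long-term average speech spectrum of the kind summarized in Figure 4 can be estimated from simultaneous two-channel recordings with a standard averaged periodogram. The sketch below shows one way to do it, assuming a two-channel WAV file (ear-level and reference microphones); it is illustrative only and does not reproduce the analysis parameters of the published study.20

```python
# Sketch of a long-term average speech spectrum (LTASS) comparison
# between an ear-level microphone and a reference microphone 30 cm in
# front of the mouth. Assumes a simultaneous 2-channel recording in
# 'talker.wav'; the analysis parameters are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs, x = wavfile.read('talker.wav')       # shape: (n_samples, 2)
ear = x[:, 0].astype(np.float64)         # channel 0: ear level
ref = x[:, 1].astype(np.float64)         # channel 1: reference position

# Welch-averaged power spectral densities over the full utterance set.
f, p_ear = welch(ear, fs=fs, nperseg=4096)
_, p_ref = welch(ref, fs=fs, nperseg=4096)

# Ear-minus-reference level difference in dB at each frequency
# (small epsilon guards against log of zero in silent bands).
diff_db = 10 * np.log10((p_ear + 1e-20) / (p_ref + 1e-20))

# Pattern reported in the text: more energy below 1 kHz at the ear,
# less energy above 2 kHz.
print(f'below 1 kHz: {diff_db[f < 1000].mean():+.1f} dB')
print(f'2-8 kHz:     {diff_db[(f > 2000) & (f < 8000)].mean():+.1f} dB')
```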
The results of this study suggest that 2 factors have the potential to reduce the audibility of high-frequency speech information produced by children. The lower overall amplitude of children's speech, coupled with the decrease in energy at the ear, appears to reduce signal amplitude by approximately 8 to 10 dB for frequencies of 4 kHz or greater relative to face-to-face conversation with an adult speaker. Unfortunately, the limited bandwidth of current hearing aids would reduce high-frequency audibility even further. Elfenbein et al4 examined the speech production of a group of hearing-impaired children and found that even those with the mildest hearing losses exhibited significant misarticulation or omission of fricatives. These findings support the view that hearing-impaired children may not be able to monitor fricative production adequately. Cochlear implants do not share this bandwidth limitation: the effective bandwidth of a cochlear implant depends on the number and location of electrodes in relation to surviving nerve cells, and in clinical practice frequencies as high as 8 kHz generally are well represented. In a recent study, Grant et al21 analyzed productions of the fricatives /s/ and /z/ by children aged 4 to 11 years who used a cochlear implant (n = 45) or hearing aids (n = 23). Production accuracy for /s/ and /z/ was 15% to 18% higher for the cochlear implant group than for the children using hearing aids. These results support the belief that cochlear implants provide a better representation of these high-frequency consonants than current hearing aids.
Phonological development in normal-hearing and hearing-impaired children
The studies reviewed thus far have focused on speech perception and production in preschool and school-age children. It is of interest to ask how the limited bandwidth of hearing aids might affect speech development in infants and younger children. At present, a longitudinal study to explore the role of auditory experience in early phonological, linguistic, and morphological development is under way at Boys Town National Research Hospital, Omaha, Neb. In the phonological portion of this study, 20 normal-hearing infants are videotaped in a naturalistic setting with their mothers every 4 to 6 weeks from 4 months to 3 years of age and every 6 months from 3 to 5 years of age. Two groups of hearing-impaired children are also enrolled in the study. The early-ID group consists of children aided by 12 months of age, and the late-ID group consists of children aided after 12 months of age. Preliminary phonological data are now available from 11 normal-hearing children, 3 children from the early-ID group, and 2 children from the late-ID group.
Figure 6 presents the mean percentage of phonemes acquired by 14 to 16 months of age for the normal-hearing and early-ID groups. The audiometric thresholds for the 3 children with hearing loss are shown in Table 1. For purposes of this study, a phoneme was considered acquired if it appeared 3 times in a single 30-minute test session. Phonemes were divided into the following 3 categories: (1) vowels; (2) stops, nasals, glides, and liquids; and (3) fricatives. By 14 to 16 months of age, the normal-hearing children had acquired an average of 10 of 15 vowels; 10 of 13 stops, nasals, glides, and liquids; and 3 of 12 fricatives. Although the 3 hearing-impaired children received hearing aids at a mean age of 5 months, they showed marked delays in the acquisition of all phonemes relative to their normal-hearing peers (8 of 15 vowels; 5 of 13 stops, nasals, glides, and liquids; and 1 of 12 fricatives). The delay was shortest for vowels and longest for fricatives. The only fricative acquired by the children with hearing loss was /h/, a phoneme that appears early in all infants and does not require coordinated movements of the tongue.
Further analysis revealed that, by 16 months of age, the early-ID group had acquired the same number of vowels, stops, nasals, glides, and liquids as the normal-hearing children had at 9 months of age (a 7-month delay). Fricatives were delayed even further relative to the normal-hearing children. This delay in fricative production is consistent with the notion that these children may not have sufficient access to the high-frequency components of speech, even though residual hearing in the 4- to 8-kHz region was good for at least 2 of them. Furthermore, the delay in phonological development occurred for 11 of the 12 fricatives of English, not just the few selected fricatives studied in previous investigations. Because roughly half of the consonants of English are fricatives, this delay is likely to have a substantial influence on later speech and language development.
Although these delays may seem surprising given the early identification and management of hearing loss in these 3 children, they are substantially shorter than those observed in the 2 children from the late-ID group. At 36 months of age, the phonological development of these 2 children, whose hearing losses were not identified until 22 and 26 months, showed even more pronounced delays (21 months). Thus, early amplification appears to be highly effective in reducing these delays, although even the early-identified children were not developing speech at the same rate as their normal-hearing peers. We hope that data from this ongoing longitudinal study will help identify factors that enhance early speech and language development and narrow the gap between normal-hearing and hearing-impaired children. Longitudinal monitoring of phonological development in individual children, compared with normal-hearing peers, may help identify situations in which amplification and/or remediation are less than optimal. Such information may lead to earlier decisions regarding cochlear implantation, changes in rehabilitative services, or the identification of additional disabilities in a given child.
Although only preliminary phonological data are available at present, the results of this longitudinal study ultimately will allow us to assess the relations among early phonological development, word learning, and morphological development in children with hearing loss. Recent data from children with cochlear implants suggest that the pattern of language development is strongly influenced by the perceptual prominence of the relevant morphological markers.22 Data from our longitudinal study should help determine whether a similar pattern exists for hearing-aid users and whether these results can be predicted from the composition of a child's early babble.
Limited bandwidth of hearing aids: potential solutions
The results of the studies reviewed in this report may have important implications for clinical practice. Although the bandwidth of current hearing instruments is wider than ever before, the high-frequency gain of behind-the-ear instruments, which are the most appropriate style for infants and young children, drops off precipitously above 5 kHz. Expansion of the signal bandwidth is particularly problematic in these instruments because of resonances associated with the tubing, so their upper frequency limit falls well below the peak frequencies of /s/ spoken by children and women. As a result, providing adequate gain in the 6- to 8-kHz range is difficult, particularly for infants and young children, in whom acoustic feedback is common. One potential solution is to widen the hearing-aid bandwidth; thus far, however, technical problems and increased acoustic feedback have precluded the development of wider-bandwidth devices, particularly behind-the-ear instruments.
An alternative approach might be the use of frequency compression or transposition schemes, whereby high-frequency signals are shifted to lower frequencies to provide adequate audibility. This type of approach has produced mixed results, with some studies showing substantial improvement and others showing no improvement, or even degradation, in performance.23-26 The signal-processing schemes used across studies, however, have differed substantially in both concept and implementation, and some studies included subjects who clearly were not candidates for this type of technology. Systematic research is needed to address issues of candidacy, signal processing, and parameter optimization for these devices.
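To make the concept concrete, the sketch below implements one naive form of proportional frequency compression: spectral content above a cutoff is remapped downward by a fixed ratio. This is a conceptual illustration with arbitrary parameters; it does not correspond to any of the commercial schemes cited above,23-26 which operate frame by frame with considerably more sophisticated processing.

```python
# Conceptual sketch of proportional frequency compression above a
# cutoff: spectral energy above fc is remapped downward by a fixed
# ratio. Purely illustrative; parameter values are arbitrary.
import numpy as np

def compress_highs(x, fs, fc=4000.0, ratio=2.0):
    """Remap spectral content above fc down by 'ratio'.

    With fc = 4 kHz and ratio = 2, energy at 8 kHz is delivered at
    6 kHz, since 4 + (8 - 4)/2 = 6. Frequencies below fc are unchanged.
    """
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    out = np.zeros_like(spectrum)
    for i, f in enumerate(freqs):
        target = f if f <= fc else fc + (f - fc) / ratio  # compress highs
        j = int(round(target * len(x) / fs))              # destination bin
        out[j] += spectrum[i]
    return np.fft.irfft(out, n=len(x))
```

Under these arbitrary settings, a female /s/ peaking near 9 kHz would be delivered at roughly 6.5 kHz (4 + [9 - 4]/2), inside the bandwidth of a typical behind-the-ear instrument; whether such a shifted /s/ remains identifiable is exactly the kind of question that the candidacy and parameter-optimization studies called for above would need to answer.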
For some families, another option might be the use of cued speech to enhance visual support for learning inaudible speech sounds. Finally, another alternative might be to consider cochlear implantation for children who are not considered implant candidates at present (eg, those with moderate or moderately severe hearing loss). There are a number of reasons, however, to proceed with caution in this regard. Although the data presented herein suggest that the limited bandwidth of current hearing aids may contribute to phonological delays in young children, additional studies are needed to determine whether these delays persist over time or can be minimized or eliminated by speech and language therapy. Visual cues, acoustic cues (eg, vocal transitions), and/or semantic cues ultimately may provide sufficient information to improve both perception and production of fricatives. In addition, it is not clear that monaural implantation with use of a hearing aid on the opposite ear would be superior to binaural hearing aids in this population. Specifically, is improved high-frequency audibility more important than binaural processing with similar inputs? As such, there is a critical need to explore nonsurgical options to increase the audibility of high-frequency speech sounds for young children in the process of developing speech and language.
Corresponding author and reprints: Patricia G. Stelmachowicz, PhD, Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131 (e-mail: stelmach@boystown.org).
Submitted for publication August 20, 2003; final revision received November 25, 2003; accepted December 9, 2003.
This study was supported by grants R01 DC04300 and P30 DC04662 from the National Institutes of Health, Bethesda, Md.
This study was presented at the Ninth Symposium on Cochlear Implants in Children; May 24, 2003; Washington, DC.
We thank Barb Peterson, who assisted with data collection; Sharon Wood, MS, who assisted with phonological coding; and Tom Creutz, who provided programming for the longitudinal study.
References

1. Bess FH, Dodd-Murphy J, Parker RA. Children with minimal sensorineural hearing loss: prevalence, educational performance, and functional status. Ear Hear. 1998;19:339-354.
2. Davis JM, Elfenbein J, Schum R, Bentler RA. Effects of mild and moderate hearing impairments on language, educational, and psychosocial behavior of children. J Speech Hear Disord. 1986;51:53-62.
3. Davis JM, Shepard N, Stelmachowicz P, Gorga M. Characteristics of hearing-impaired children in the public schools, II: psychoeducational data. J Speech Hear Disord. 1981;46:130-137.
4. Elfenbein JL, Hardin-Jones MA, Davis JM. Oral communication skills of children who are hard of hearing. J Speech Hear Res. 1994;37:216-226.
5. Markides A. The speech of deaf and partially-hearing children with special reference to factors affecting intelligibility. Br J Disord Commun. 1970;5:126-140.
6. Markides A. The Speech of Hearing-Impaired Children. Dover, NH: Manchester University Press; 1983.
7. Norbury CF, Bishop DV, Briscoe J. Production of English finite verb morphology: a comparison of SLI and mild-moderate hearing impairment. J Speech Lang Hear Res. 2001;44:165-178.
8. Ching TY, Dillon H, Byrne D. Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. J Acoust Soc Am. 1998;103:1128-1140.
9. Hogan CA, Turner CW. High-frequency audibility: benefits for hearing-impaired listeners. J Acoust Soc Am. 1998;104:432-441.
10. Murray N, Byrne D. Performance of hearing-impaired and normal hearing listeners with various high-frequency cut-offs in hearing aids. Aust J Audiol. 1986;8:21-28.
11. Rankovic CM. An application of the articulation index to hearing aid fitting. J Speech Hear Res. 1991;34:391-402.
12. Skinner MW. Speech intelligibility in noise-induced hearing loss: effects of high-frequency compensation. J Acoust Soc Am. 1980;67:306-317.
13. Turner CW, Cummings KJ. Speech audibility for listeners with high-frequency hearing loss. Am J Audiol. 1999;8:47-56.
14. Moore BCJ, Huss M, Vickers DA, Glasberg BR, Alcantara JI. A test for the diagnosis of dead regions in the cochlea. Br J Audiol. 2000;34:205-224.
15. Sullivan JA, Allsman CS, Nielsen LB, Mobley JP. Amplification for listeners with steeply sloping, high-frequency hearing loss. Ear Hear. 1992;13:35-45.
16. Hornsby BW, Ricketts TA. The effects of hearing loss on the contribution of high- and low-frequency speech information to speech understanding. J Acoust Soc Am. 2003;113:1706-1717.
17. Stelmachowicz P, Pittman A, Hoover B, Lewis D. The effect of stimulus bandwidth on the perception of /s/ in normal and hearing-impaired children and adults. J Acoust Soc Am. 2001;110:2183-2190.
18. Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DL. Aided perception of /s/ and /z/ by hearing-impaired children. Ear Hear. 2002;23:316-324.
19. Cornelisse LE, Gagné JP, Seewald R. Ear level recordings of the long-term average spectrum of speech. Ear Hear. 1991;12:47-54.
20. Pittman AL, Stelmachowicz PG, Lewis DE, Hoover BM. Spectral characteristics of speech at the ear: implications for amplification in children. J Speech Lang Hear Res. 2003;46:649-657.
21. Grant L, Bow C, Paatsch L, Blamey P. Comparison of production of /s/ and /z/ between children using cochlear implants and children using hearing aids. Poster presented at: Ninth Australian International Conference on Speech Science and Technology; December 2-5, 2002; Melbourne, Australia.
22. Svirsky MA, Stallings LM, Lento CL, Ying E, Leonard LB. Grammatical morphological development in pediatric cochlear implant users may be affected by the perceptual prominence of the relevant markers. Ann Otol Rhinol Laryngol Suppl. 2002;189:109-112.
23. MacArdle BM, West C, Bradley J, Worth S, Mackenzie J, Bellman SC. A study of the application of a frequency transposition hearing system in children. Br J Audiol. 2001;35:17-29.
24. McDermott HJ, Dorkos VP, Dean MR, Ching TY. Improvements in speech perception with use of the AVR TranSonic frequency-transposing hearing aid. J Speech Lang Hear Res. 1999;42:1323-1335.
25. McDermott HJ, Knight MR. Preliminary results with the AVR ImpaCt frequency-transposing hearing aid. J Am Acad Audiol. 2001;12:121-127.
26. Turner CW, Hurtig RR. Proportional frequency compression of speech for listeners with sensorineural hearing loss. J Acoust Soc Am. 1999;106:877-886.