Figure 1. 
Results from location identification experiment for adult listeners. Each panel shows the proportion of localization responses for every location tested, proportional to the size of the data points. Examples are shown from 3 subjects, arranged by rows. Results for the 3 conditions are arranged from top to bottom, including right ear alone, left ear alone, and bilateral.

Figure 2. 
Average and standard deviation root-mean-square (RMS) error values are plotted for adult subjects, comparing performance with the left ear alone, right ear alone, and bilateral.

Figure 3. 
Individual speech-in-babble data for 14 adult subjects with bilateral cochlear implants. Each panel shows the signal-noise ratio (SNR) at which performance reached 50% speech reception threshold (SRT), comparing a monaural condition with the bilateral condition. Results from conditions in which the babble was either near the poor ear or near the better ear are shown in the panels on the left and right sides of the graph, respectively. In each listening mode, performance is compared for poor ear vs bilateral (top) or better ear vs bilateral (bottom).

Figure 4. 
Right/left discrimination results from 3 children with bilateral cochlear implants are shown, comparing performance under bilateral or monaural (first ear) conditions. Results from the 7 angular separations are collapsed because of the lack of statistically significant differences between these locations. The solid horizontal line marks the chance performance level.

Figure 5. 
Location identification results from 3 children with bilateral cochlear implants are shown, comparing performance under bilateral or monaural (first ear) conditions. The percentage of trials on which speaker locations were correctly identified are plotted. The solid horizontal line marks the chance performance level.

Figure 6. 
Root-mean-square (RMS) error values are plotted for 3 children with bilateral cochlear implants, comparing performance under monaural (first ear) and bilateral conditions. Each subject had 1 mean value for the 150 trials tested in each condition, and those mean values are denoted by the bars.

Figure 7. Results of speech intelligibility measures for 3 children with bilateral cochlear implants using the CRISP (children's realistic index of speech perception) test are shown. Mean speech reception thresholds on bilateral conditions were subtracted from mean scores on monaural conditions; positive values represent a bilateral advantage, whereas negative values represent a bilateral disadvantage. For each child, performance under 4 conditions is plotted, including quiet, competing sound in front, competing sound on right (near first ear), and competing sound on left (near second ear).

Subject Demographics in Children*

Original Article
May 2004

Bilateral Cochlear Implants in Adults and Children

Author Affiliations

From the Waisman Center, University of Wisconsin–Madison (Drs Litovsky and Yu and Ms Johnstone); Cochlear Americas, Denver, Colo (Mr Parkinson and Ms Arcaroli); and Dallas Cochlear, Dallas, Tex (Dr Peters and Ms Lake). The authors have no relevant financial interest in this article.

Arch Otolaryngol Head Neck Surg. 2004;130(5):648-655. doi:10.1001/archotol.130.5.648
Abstract

Objective  To measure the benefit (ie, sound localization and speech intelligibility in noise) of bilateral cochlear implants (CIs) in adults and in children.

Design, Setting, and Patients  Seventeen adults and 3 children underwent testing 3 months after activation of bilateral hearing. Adults received their devices in a simultaneous procedure and children in sequential procedures (3-8 years apart). Adults underwent testing of sound localization and speech intelligibility, with a single CI and bilaterally. Children underwent testing of sound localization, right/left discrimination, and speech intelligibility, with the first CI alone and bilaterally. We used computer games to attract the children's attention and engage them in the psychophysical tasks for long periods of time.

Results  Preliminary findings suggest that, for adults, bilateral hearing leads to better performance on the localization task, and on the speech task when the noise is near the poorer of the 2 ears. In children, localization and discrimination are slightly better under bilateral conditions, but not remarkably so. On the speech tasks, 1 child did not benefit from bilateral hearing. Two children showed consistent improvement with bilateral hearing when the noise was near the side that underwent implantation first.

Conclusions  Bilateral CIs may offer advantages to some listeners. The tasks described in this study might offer a powerful tool for measuring such advantages, especially in young children. The extent of the advantage, however, is difficult to ascertain after 3 months of bilateral listening experience, and might require a more prolonged period of adjustment and learning. Future work should be aimed at examining these issues.

In recent years, there have been improvements in speech-processing strategies used with cochlear implants (CIs). These improvements are particularly evident in speech understanding in quiet.1,2 However, for most CI users, speech reception in a noisy or complex environment is still very poor. To understand speech in noise, CI users need higher signal-noise ratios than normal-hearing listeners.3-6 Although noise reduction and spectral enhancement algorithms may prove to be important in improving speech-in-noise understanding,7,8 lack of binaural hearing and the inability to take advantage of the better-ear effect may also explain poor performance in multisource environments.9,10 Finally, normal-hearing listeners have a very large number of independent information channels, and the location of these channels along the cochlea is matched between the 2 ears, whereas CI users have a finite number of electrodes distributed along the tonotopic axis of the cochlea, and the electrode placement can vary.

To try to improve their ability to localize sound and to understand speech in noise, a small number of CI users worldwide have received 2 implants. To date, a handful of studies have been conducted on bilateral adult CI users.11-20 When coordinated stimulation of electrical pulses is used and care is taken to stimulate with loudness- and pitch-matched signals in both ears, fused auditory images can be produced that are lateralized in a direction consistent with the leading ear. Some reports suggest that bilateral listeners have good sensitivity to binaural level cues but poor sensitivity to timing cues relative to normal-hearing subjects.12,14,15,21 Others have found good performance with level cues and varying sensitivities to timing cues.16 Free-field studies with independent stimulation similarly suggest that dual implants can assist listeners in discriminating between 2 speaker positions, and (under some conditions) provide a benefit for speech understanding in the presence of competing sounds.18-20 Preliminary reports with multiple source locations and a large number of subjects are limited.13

Cochlear implantation in children has also become increasingly successful in recent years, with documented gains in expressive language, reading skills, and linguistic competence.22-25 Although measures of speech perception, intelligibility, language acquisition, working memory, and others have been investigated more extensively in recent years, little is known about the ability of children with CIs to function in realistic, complex auditory environments. After reports14-17,21 that bilateral CIs in adults provide certain benefits beyond those afforded by unilateral CIs, recent initiatives have been undertaken in which children also received bilateral CIs.26-29 Although the ethical implications might be complex,30 studying the abilities of these children to function in realistic acoustic environments is critical if we are to better understand the potential benefits and limitations of bilateral implantation in children.

The present study investigated directional hearing abilities and speech intelligibility in noise in adults and children with bilateral CIs. Adult subjects received their CIs in a simultaneous procedure, whereas children were all successful users of a unilateral CI who received a second CI several years later. This preliminary report offers an initial glimpse into the potential outcomes of bilateral implantation in these populations.

Methods
Subjects

The 3 children included in our study, aged 8, 8, and 12 years, were prelingually deaf. They received their first CIs several years before the second CI, and the testing discussed in the present report was conducted 2 to 3 months after activation of the second CI (Table 1). Seventeen adults (8 men and 9 women) participated. Fourteen were postlingually deaf, with 15 or fewer years of deafness. The other 3 adults were deaf since infancy or congenitally deaf. All adults received both CIs in a simultaneous procedure and underwent testing at 3 months after activation of their CIs. Average age of the adults was 52.7 years, and they had an average of 8 and 13 years' duration of hearing loss in the right and left ears, respectively. Adults received the CI24R(CS) CI. Two of the children also received 2 CI24 devices, and 1 child (subject 2) received a CI22 device in the first ear and a CI24 device in the second ear (Cochlear Americas, Denver, Colo). The same speech-coding strategies were used in both ears, although each ear underwent independent mapping for best results in that ear. Before testing, all subjects underwent a loudness-balancing session in which the perceived loudness of sounds from the 2 sides were matched as closely as possible. The research was approved by the institutional review board and human subjects committee of the University of Wisconsin–Madison and participating clinics, and all participants, or parent/guardian in the case of children, signed informed consent forms before participating in testing.

Psychophysical measures
Sound Localization Measurements in Adults

All 17 adults participated in this measure. Eight matched loudspeakers were positioned in a semicircular array at 20° intervals in the frontal hemifield, ranging from −70° to 70°. Speakers were positioned at ear level and a distance of 1.4 m (4.5 feet) from the center of the listener's head and numbered 1 to 8 from left to right. Subjects sat on a chair facing the speaker array and were asked to hold their head still during each trial presentation. Stimuli consisted of 4 bursts of 170-millisecond pink noise with 10-millisecond rise/fall times and a 50-millisecond interstimulus interval; they were digitally generated, amplified, and presented to the loudspeakers at an average level of 65 dB (randomly roved ±6 dB from trial to trial). For each of the 8 locations, 20 trials occurred in random order. After each stimulus presentation, subjects reported the speaker number corresponding to the perceived location of the sound. The data presented consist of results of testing conducted with each CI alone and with both CIs activated together. Feedback as to correct/incorrect responses was not provided.
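
The root-mean-square (RMS) error summarized in Figure 2 can be computed directly from the stimulus and response azimuths. The short sketch below is only an assumption about that calculation (the study's own analysis code is not described); the function name and the simulated responses are illustrative.

```python
# Minimal sketch (assumed, not the authors' code): RMS localization error for a
# block of trials, given stimulus and response loudspeaker azimuths in degrees.
import numpy as np

def rms_error_deg(stim_az, resp_az):
    stim = np.asarray(stim_az, dtype=float)
    resp = np.asarray(resp_az, dtype=float)
    return np.sqrt(np.mean((resp - stim) ** 2))

# Hypothetical example: 8 speakers at 20-degree spacing from -70 to +70 degrees,
# 20 trials per location, responses simulated and snapped to the nearest speaker.
speakers = np.arange(-70, 71, 20)
stimuli = np.repeat(speakers, 20)
rng = np.random.default_rng(0)
raw = stimuli + rng.normal(0.0, 25.0, stimuli.size)
responses = speakers[np.abs(raw[:, None] - speakers).argmin(axis=1)]
print(f"RMS error: {rms_error_deg(stimuli, responses):.1f} degrees")
```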

Speech Intelligibility Measurements in Adults

Fourteen adults participated in this measure. Speech materials consisted of Bamford-Kowal-Bench sentences, and competing sound materials consisted of multitalker babble (4 talkers). Each list had 8 or 10 sentences, with the signal-noise ratio decreasing in 3-dB steps, starting at 21 dB and reaching 0 or −6 dB. The percentage of words correctly identified at each signal-noise ratio was entered into a formula that estimated speech reception thresholds (SRTs) as the signal-noise ratio at which intelligibility was 50% correct. The speech was always presented from a loudspeaker placed in front of the listener (0°), and the babble was presented from the front (0°), the right (90°), or the left (−90°). The data presented consist of results of testing conducted with the better ear alone, the poor ear alone, and both CIs. Thus, there are 6 possible configurations of listening mode/stimulus. For each condition, subjects underwent testing on 4 sentence lists, which were presented in random order, and an average SRT was calculated for each subject and every condition.
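
The SRT estimation step can be illustrated with a simple psychometric fit. The sketch below is an assumption about the form of that formula (a logistic function fitted to per-list percent-correct scores and solved for the 50% point), not the exact procedure used by the authors; the data shown are hypothetical.

```python
# Minimal sketch (assumed): estimate the SRT as the SNR at which a fitted
# logistic psychometric function predicts 50% of words correct.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr_db, srt_db, slope_db):
    """Percent correct as a function of SNR; srt_db is the 50% point."""
    return 100.0 / (1.0 + np.exp(-(snr_db - srt_db) / slope_db))

def estimate_srt(snr_db, pct_correct):
    p0 = [np.mean(snr_db), 3.0]                         # rough starting values
    (srt_db, _slope), _cov = curve_fit(logistic, snr_db, pct_correct, p0=p0)
    return srt_db

# Hypothetical scores for one condition (SNR stepped from +21 dB down to -6 dB).
snrs = np.arange(21, -7, -3)
scores = np.array([98, 95, 90, 82, 70, 55, 38, 22, 12, 6])
print(f"Estimated SRT: {estimate_srt(snrs, scores):.1f} dB SNR")
```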

Sound Localization Measurements in Children

Fifteen matched loudspeakers were positioned in a semicircular array at 10° intervals in the frontal hemifield, ranging from −70° to 70°. Speakers were positioned at ear level and a distance of 1.5 m (5 feet) from the center of the listener's head. Subjects sat on a chair facing the speaker array and were asked to hold their head still during each trial presentation. Stimuli consisted of 10 bursts of 25-millisecond pink noise with 5-millisecond rise/fall times and a 250-millisecond interstimulus interval, and were generated using a Tucker-Davis System III (Tucker-Davis Technologies, Alachua, Fla) with a multiplexer for loudspeaker selection. Stimuli were presented at an average level of 60 dB, and levels were randomly roved by ±6 dB from trial to trial. Each of the 15 locations was repeated 10 times, in random order. After each stimulus presentation, children used an interactive computerized pointing game to report the speaker corresponding to the perceived location of the sound. Testing was conducted in separate blocks for the monaural and bilateral listening modes.

Right/Left Discrimination Measurements in Children

The same apparatus with 15 loudspeakers was used. Right/left discrimination was measured for angular separations varying from 10° to 70°, and feedback was given after each trial.

Speech Intelligibility Measurements in Children

Speech reception thresholds were measured using the CRISP (children's realistic index of speech perception) test.31 Stimuli consisted of target words and competing sentences. All stimuli were prerecorded, digitized, and played back through specialized software during testing. Targets consisted of a closed set of 25 spondees spoken by a male voice (eg, hot dog, ice cream, barnyard, and rainbow). The root-mean-square levels were equalized for all words. The competitors consisted of 2 sentences spoken by a female voice. The present study was not aimed at testing the children's vocabulary, but rather at measuring their speech intelligibility for known words. Before testing, subjects underwent a familiarization session (approximately 5 minutes) in which they were presented with the picture-word combinations and tested to ensure that they associated each of the pictures with its intended auditory target. The target words were always presented from the front (0°). Competitors were presented from the front, right (90°; near the first CI), or left (−90°; near the second CI). A quiet condition was also included. Competitors were presented at a fixed sound pressure level of 60 dB. For each subject, SRTs were measured for each of the 4 conditions and 2 listening modes (monaural and bilateral). An adaptive 3-down/1-up tracking method was used to vary the level of the target signal, such that 3 consecutive correct responses resulted in a level decrement and each incorrect response resulted in a level increment.
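
The following sketch shows the basic bookkeeping of such a 3-down/1-up track under assumed parameters; the step size, starting level, and trial count are illustrative, since the article does not specify them.

```python
# Minimal sketch (assumed parameters): 3-down/1-up staircase for the target level.
# The level drops after 3 consecutive correct responses and rises after any
# incorrect response, converging near 79.4% correct on the psychometric function.
def run_staircase(respond, start_db=60.0, step_db=2.0, n_trials=40):
    """respond(level_db) -> True if the child chose the correct picture."""
    level, run, levels = start_db, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if respond(level):
            run += 1
            if run == 3:          # 3 correct in a row: lower the target level
                level -= step_db
                run = 0
        else:                     # any incorrect response: raise the target level
            level += step_db
            run = 0
    return levels
```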

The test involved a 1-interval, 4-alternative forced-choice discrimination procedure. On each trial, the child viewed a set of 4 pictures from the set of 25 picture-word matches. A word matching one of the pictures was randomly selected and presented from the front speaker. A leading phrase such as "point to the picture of the" or "where is the" preceded each target word. The child was asked to select the picture matching the heard word and to guess if he or she was not sure or if the word was not audible. The randomization process ensured that for every subject, on average, all 25 words were selected an equal number of times. Results were analyzed using the constrained maximum-likelihood method of parameter estimation outlined by Wichmann and Hill,32,33 which yielded the slope of the psychometric function and the estimated SRT, defined as the stimulus level at which performance was approximately 79.4% correct.
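
The 79.4% figure is the standard convergence point of a 3-down/1-up rule (this short derivation is supplied here for context and is not spelled out in the article): the track stabilizes at the level where a run of 3 consecutive correct responses is exactly as likely as not, ie, p^3 = 0.5, so p = 0.5^(1/3) ≈ 0.794, which is why the maximum-likelihood fit is evaluated at approximately 79.4% correct.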

Results

Figure 1 shows location identification results from 3 adult subjects representing 3 types of behaviors on the task. In each panel, the stimulus-response matrix shows the proportion of responses that occurred for each stimulus location. In an ideal performer, the data would all fall along the diagonal, in the form of large high-proportion circles. Subject 12 represents the type of subject who cannot identify source locations with the right or left ear alone, but bilaterally this listener's performance is fairly high, with the responses distributed close to or on the diagonal. Subject 15 represents the type of listener who cannot identify source locations or lateralize the sounds to the right or left with either ear alone. With bilateral stimulation, although this listener does not identify locations as subject 12 does, there is a marked improvement in the ability to place the sources in the correct hemifield, right vs left. The third type of listener is shown by the data from subject 1, whose bilateral performance is no different from that with the better of the 2 ears, in this case the right ear alone. Of the 17 subjects who underwent testing to date, 13 showed behavior similar to that of subject 12, 2 showed behavior similar to that of subject 15, and 2 showed behavior similar to that of subject 1. Thus, for most bilateral subjects, having 2 CIs offers a benefit for sound source location identification. Figure 2 shows the mean and standard error of the root-mean-square error for the 17 subjects in the 3 listening conditions. A 1-way analysis of variance revealed a significant main effect of condition, and post hoc t tests revealed no significant difference between the right and left ears, but either ear alone had a significantly higher error than the bilateral condition.

Individual factors, such as duration and type of hearing loss, speech-coding strategy, age at implantation, and preimplantation speech scores, were examined to determine whether any of them might explain the individual differences observed in the results. The best predictor of improvement with bilateral CIs was the duration of bilateral hearing aid use before implantation. Of the subjects with bilateral benefit in localization, most had worn bilateral hearing aids for 10 to 30 years, compared with subjects with little or no bilateral benefit, who had worn hearing aids for 8 years or less. Regarding speech-coding strategy, subjects with localization benefit (eg, subject 12) used SPEAK (spectral peak) or ACE (Advanced Combination Encoder); of the 2 subjects with right/left improvement (eg, subject 15), one used SPEAK and the other used ACE; of the 2 subjects with no benefit (eg, subject 1), one used ACE and the other used CIS (continuous interleaved sampling). Of the 17 subjects studied, only 2 were fitted with the CIS strategy; the effectiveness of that strategy is therefore difficult to judge, because those 2 listeners also had poor preoperative speech scores and relatively short durations of hearing aid use. Duration of deafness was not a strong predictor; 3 subjects with sudden onset of deafness were not the best performers. Finally, the predictability from preoperative speech scores was variable. Subjects with improved right/left performance had higher scores than average. Subjects with improved localization varied; eg, subject 12 had preoperative scores near 40% on the Hearing in Noise Test (HINT), but several other subjects with good localization performance had preoperative scores near 0% on the HINT.

Figure 3 shows speech-in-babble data from 14 of the 17 adult subjects. In this figure, comparisons are made between conditions in which subjects listened through their poor ear, through their better ear, and bilaterally. When the babble is near the poor ear, there appears to be a bilateral advantage compared with either the poor ear or the better ear alone. However, when the babble is near the better ear, the bilateral advantage is minimal or absent, regardless of which ear serves as the monaural condition. The largest bilateral effect occurs when the multitalker babble is near the poor ear and hearing is added in the better ear.

Figure 4, Figure 5, and Figure 6 show results for localization tasks in the children. In Figure 4, we compared the percentage correct on right/left discrimination for monaural and bilateral conditions. Performance did not differ significantly for the 7 angular separations; hence, for each subject the data are collapsed across angles. Although there was no significant difference between monaural and binaural conditions, all 3 subjects appeared to perform slightly better in the binaural condition. Similarly, data from the sound localization measurement in Figure 5 suggest that the percentage correct for identifying source location was slightly above chance in bilateral conditions and below chance in monaural conditions. Again, these differences were not statistically significant. Finally, the root-mean-square error for sound localization, shown in Figure 6, suggests that the size of the errors made in identifying source locations was just slightly higher monaurally than bilaterally for subjects 2 and 3.

Speech intelligibility measures for the children are shown in Figure 7. Positive differences between monaural and bilateral SRTs suggest a bilateral benefit, and negative differences suggest a bilateral disadvantage. Differences smaller than 2 dB are not meaningful, since that is within the error of measurement of the test. There does not appear to be a uniform response or advantage across subjects and conditions. Subject 1 has a bilateral benefit when the competing sounds are in front or near the first implanted ear, and a disadvantage in quiet. Subject 2 also has an advantage for competitors near the first CI, and in quiet, and subject 3 seems to operate like a monaural listener, with slightly better performance monaurally for competing sounds in front. Taken together, these findings suggest that children who have had several years of hearing with 1 CI and 3 months of hearing with 2 CIs show signs of improvement on directional hearing tasks, although the improvement is not uniform across subjects or conditions and is not statistically significant.

Comment

Our results suggest that, in adults, simultaneous bilateral cochlear implantation offers some advantages in the ability to identify sound source locations and to hear speech in the presence of multitalker babble. Examination of individual factors (duration/type of hearing loss, speech-coding strategy, and preimplantation speech scores) suggests that the best predictor for improvement on the localization task was duration of bilateral hearing aid use before implantation. Given the small number of subjects using each speech-coding strategy, the effectiveness of the various strategies was not easy to judge. Similarly, the predictability from preoperative speech scores was variable. In children who were congenitally deaf and received sequential CIs 3 to 8 years apart, there are indications that bilateral hearing might offer some advantages on functional tasks in some of the subjects at 2 to 3 months after the second device is activated.

The tasks used in this study were designed to evaluate the ability of bilateral CI users to use both ears when functioning in complex, multisource environments. Listening with 2 ears is known to be beneficial for sound localization and speech understanding in noise, in normal-hearing and hearing-impaired listeners. For instance, spatial separation of the target and competing sounds, similar to the conditions we used with the competing sounds on the side, is known to provide a benefit of up to 12 dB, or 80% improvement.34,35

However, the work presented in this study taps only a single aspect of 2-ear listening: listening with 2 independent ears. True binaural hearing, such as that known to occur in normal-hearing mammals, has 2 important components. The first is the head-shadow effect. In a 1-ear situation, sounds nearer to the open ear arrive at greater intensity, whereas sounds on the opposite side of the head are attenuated by the head and shoulders and reach the open ear at a significantly reduced intensity. Adding the second ear can therefore improve a listener's access to sources arriving from various directions, as is typically the case in multisource environments. A second, more complicated component of binaural hearing is the synchronization of signals that occurs in the auditory pathway along the frequency dimension. In normal-hearing listeners, differences between the ears in the arrival time and intensity of sounds, known as binaural cues, provide robust information about the direction of sources and powerful tools for listening in complex environments.
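
For a sense of scale (a textbook illustration using the standard spherical-head approximation, not data from this study), the interaural time difference for a source at azimuth θ is roughly ITD(θ) ≈ (a/c)(θ + sin θ); with a head radius a ≈ 0.0875 m and speed of sound c ≈ 343 m/s, a source at 90° produces an ITD of about (0.0875/343)(π/2 + 1) ≈ 0.66 millisecond, and interaural level differences at high frequencies can reach roughly 20 dB.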

The current technology offers CI users the opportunity to take advantage of the head-shadow effect. However, the extent to which binaural cues are available to the listeners and used effectively is not well understood. Some studies have suggested that bilateral listeners have good sensitivity to interaural level difference but poor sensitivity to interaural time difference relative to normal-hearing subjects,13-15,21 in whom thresholds are on the order of 20 to 50 microseconds and 1 dB for interaural time and interaural level difference discrimination, respectively. Others have found good performance with interaural level differences and varying sensitivities to interaural time differences.16 A recent study36 reported interaural time difference sensitivity of about 150 microseconds in 5 subjects, and 1 subject with long-term binaural deprivation showed sensitivity to interaural time differences of 300 microseconds.17 However, given that bilateral CI users are likely to differ between the 2 ears in electrode placement, neural survival, and frequency tuning, it is highly likely that, without enforcing synchronization between the 2 devices, the patients' use of true binaural cues is limited. Future work should address the extent to which synchronization and enhanced binaural hearing are functionally useful to bilateral CI users.

Results described in the present study also point to the importance of considering the plasticity of the nervous system as playing a role in the usability of binaural cues. First, the measurements described were made after listeners had heard with 2 ears for only 3 months. It is highly likely that, for both adults and children, that amount of time is insufficient for listeners to acquire spatial awareness and to process and perceive directional cues consistently and dependably. Second, there is a considerable difference in this study between the adults and children. Most of the adults (14/17) had postlingual deafness and received their 2 devices at the same time; hence, to a large extent their bilateral experience was maximized. Adults with the best performance on the localization task also tended to be those with greater exposure to bilateral hearing aids. In contrast, the 3 children all received their devices in a sequential procedure, with 3 to 8 years between devices and little or no bilateral hearing aid use before implantation. It is possible that, having heard monaurally for a number of years, these children's auditory systems have become accustomed to using a single ear. More critically, it is difficult to know whether binaural neurons have lost their ability to use binaural cues as a result of a lack of that type of stimulation. Whether that ability can be regained remains an important question for future research.

Finally, given potential advances in technology, gene therapy, hair-cell regeneration, and other treatments for hearing loss, there may be reasons for questioning bilateral implantation, especially in children.1,30 A growing population of children with residual hearing and speech understanding in the non-CI ear is being fitted with a hearing aid in that ear. An important issue to address, therefore, is whether a hearing aid fitted in the non-CI ear might provide a measurable difference in performance. Although reports on this topic are sparse, even in adults, existing studies suggest that continued use of a hearing aid after implantation can provide benefits under some conditions: patients with a CI and a hearing aid exhibit an advantage on measures of speech understanding in noise, especially when the speech is in front and the noise is near the ear with the CI.19,37 In 2 of the 3 patients, there was also some improvement in the ability to localize sounds. In the proposed work, children with a CI and a hearing aid will be recruited and undergo testing with their CI alone, their hearing aid alone, and both, to determine whether the hearing aid offers measurable improvements in functional abilities. In 1 study with 16 children wearing CIs and hearing aids, on average, there were significant benefits in speech perception in noise, localization, and aural/oral function when listening with both devices compared with the CI alone.38 However, in the noise condition, the effect of spatially separating the speech and noise, which is a strong measure of binaural benefit, was not measured.

If bilateral CIs could reliably provide such an improvement, there would be a remarkable change in the ability of CI users to hear in noisy environments. Although the measures used in this study and elsewhere do not uniformly demonstrate such large effects for all subjects and conditions, they suggest that listening with 2 ears rather than 1 produces measurably different abilities, whose ramifications for everyday living need to be evaluated more closely.

Article Information

Corresponding author and reprints: Ruth Y. Litovsky, PhD, Waisman Center, Room 565, University of Wisconsin, 1500 Highland Ave, Madison, WI 53705 (e-mail: Litovsky@waisman.wisc.edu).

Submitted for publication September 15, 2003; final revision received December 2, 2003; accepted December 4, 2003.

This study was supported in part by grants R01DC003083 and 5R21DC005469 from the National Institute of Deafness and Communicative Disorders, Bethesda, Md; by the Wisconsin Alumni Research Foundation, Madison; and by Cochlear Americas, Denver, Colo.

This study was presented at the Ninth Symposium on Cochlear Implants in Children; April 25, 2003; Washington, DC.

We thank Belinda Henry, PhD, for her help with assessing loudness balance in the children, and Shelly Godar, MS CCC, Allison Olson, MS, and Emilie Bloch for assistance in data collection with the children. We especially thank the children and parents for their dedication to this work.

References
1. Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK, Rabinowitz WM. Better speech recognition with cochlear implants. Nature. 1991;352:236-238.
2. Rauschecker JP, Shannon RV. Sending sound to the brain. Science. 2002;295:1025-1029.
3. Hochberg I, Boothroyd A, Weiss M, Hellman S. Effects of noise and noise suppression on speech perception by cochlear implant users. Ear Hear. 1992;13:263-271.
4. Skinner MW, Clark GM, Whitford LA, et al. Evaluation of a new spectral peak coding strategy for the Nucleus 22 Channel Cochlear Implant System. Am J Otol. 1994;15(suppl 2):15-27.
5. Dorman MF, Loizou PC, Fitzke J. The identification of speech in noise by cochlear implant patients and normal-hearing listeners using 6-channel signal processors. Ear Hear. 1998;19:481-484.
6. Zeng FG, Galvin JJ III. Amplitude mapping and phoneme recognition in cochlear implant listeners. Ear Hear. 1999;20:60-74.
7. Fu QJ, Shannon RV, Wang X. Effects of noise and spectral resolution on vowel and consonant recognition: acoustic and electric hearing. J Acoust Soc Am. 1998;104:3586-3596.
8. Nelson PB, Jin SH, Carney AE, Nelson DA. Understanding speech in modulated interference: cochlear implant users and normal-hearing listeners. J Acoust Soc Am. 2003;113:961-968.
9. Durlach N, Colburn HS. Binaural phenomena. In: Carterette EC, Friedman MP, eds. Handbook of Perception. Vol 4. Orlando, Fla: Academic Press Inc; 1978:405-466.
10. Blauert J. Spatial Hearing: The Psychophysics of Human Sound Localization. 2nd ed. Cambridge, Mass: MIT Press; 1997.
11. Green JD, Mills DM, Bell BA, Luxford WM, Tonokawa L. Binaural cochlear implants. Am J Otol. 1992;13:502-506.
12. van Hoesel R, Tong Y, Hollow R, Clark G. Psychophysical and speech perception studies: a case report on a binaural cochlear implant subject. J Acoust Soc Am. 1993;94:3178-3189.
13. van Hoesel R. Preserving the benefits of listening with two ears using bilateral cochlear implants. Paper presented at: 2003 Conference on Implantable Auditory Prostheses; August 19, 2003; Asilomar, Calif.
14. van Hoesel R, Tyler R. Speech perception, localization and lateralization with bilateral cochlear implants. J Acoust Soc Am. 2003;113:1617-1630.
15. van Hoesel R, Clark GM. Psychophysical studies with two binaural cochlear implant subjects. J Acoust Soc Am. 1997;102:495-507.
16. Lawson DT, Wilson BS, Zerbi M, et al. Bilateral cochlear implants controlled by a single speech processor. Am J Otol. 1998;19:758-761.
17. Long CJ. Bilateral Cochlear Implants: Basic Psychophysics [dissertation]. Boston: Massachusetts Institute of Technology; 2000.
18. Gantz BJ, Tyler RS, Rubinstein JT, et al. Binaural cochlear implants placed during the same operation. Otol Neurotol. 2002;23:169-180.
19. Tyler RS, Parkinson AJ, Wilson BS, Witt S, Preece JP, Noble W. Patients utilizing a hearing aid and a cochlear implant: speech perception and localization. Ear Hear. 2002;23:98-105.
20. Muller J, Schon F, Helms J. Speech understanding in quiet and noise in bilateral users of the MED-EL COMBI 40/40+ cochlear implant system. Ear Hear. 2002;23:198-206.
21. Van Hoesel RJ, Clark GM. Fusion and lateralization study with two binaural cochlear implant patients. Ann Otol Rhinol Laryngol Suppl. 1995;166:233-235.
22. Geers AE. Predictors of reading skill development in children with early cochlear implantation. Ear Hear. 2003;24(suppl):59S-68S.
23. Geers A, Brenner C, Davidson L. Factors associated with development of speech perception skills in children implanted by age five. Ear Hear. 2003;24(suppl):24S-35S.
24. Miyamoto RT, Houston DM, Kirk KI, Perdew AE, Svirsky MA. Language development in deaf infants following cochlear implantation. Acta Otolaryngol. 2003;123:241-244.
25. Nikolopoulos TP, Lloyd H, Starczewski H, Gallaway C. Using SNAP Dragons to monitor narrative abilities in young deaf children following cochlear implantation. Int J Pediatr Otorhinolaryngol. 2003;67:535-541.
26. Winkler F, Schon F, Peklo L, Muller J, Feinen CH, Helms J. The Wurzburg questionnaire for assessing the quality of hearing in CI-children (WH-CIK) [in German]. Laryngorhinootologie. 2002;81:211-216.
27. Muller J. Bilateral cochlear implants: bilateral cochlear implants in children. Paper presented at: Ninth Symposium on Cochlear Implants in Children; April 25, 2003; Washington, DC.
28. Lenatz T. Bilateral cochlear implants: a clinical perspective. Paper presented at: 2003 Conference on Implantable Auditory Prostheses; August 19, 2003; Asilomar, Calif.
29. Litovsky R, Arcaroli J, Parkinson A, Peters R, Johnstone P, Yu G. Bilateral cochlear implants in adults and children. Paper presented at: Conference on Implantable Auditory Prostheses; August 19, 2003; Asilomar, Calif.
30. Rubinstein JT, Gantz BJ, Tyler RS, Lowder M, Witt S. Binaural cochlear implantation: promise and challenges. Paper presented at: Ninth Symposium on Cochlear Implants in Children; August 25, 2003; Washington, DC.
31. Litovsky RY, inventor; Wisconsin Alumni Research Foundation, assignee. Method and system for rapid and reliable testing of speech intelligibility in children. US patent 6 584 440 B2. June 24, 2003.
32. Wichmann FA, Hill NJ. The psychometric function, I: fitting, sampling, and goodness of fit. Percept Psychophys. 2001;63:1293-1313.
33. Wichmann FA, Hill NJ. The psychometric function, II: bootstrap-based confidence intervals and sampling. Percept Psychophys. 2001;63:1314-1329.
34. Hawley ML, Litovsky RY, Colburn HS. Speech intelligibility and localization in complex environments. J Acoust Soc Am. 1999;105:3436-3448.
35. Hawley ML, Litovsky RY, Culling JF. The benefit of binaural hearing in a cocktail party: effect of location and type of masker. J Acoust Soc Am. 2004;115:844-843.
36. van Hoesel R, Tyler R. Speech perception, localization and lateralization with bilateral cochlear implants. J Acoust Soc Am. 2003;113:1617-1630.
37. Armstrong M, Pegg P, James C, Blamey P. Speech perception in noise with implant and hearing aid. Am J Otol. 1997;18(6, suppl):S140-S141.
38. Ching TYC, Psarros C, Hill M, Dillon H, Incerti P. Should children who use cochlear implants wear hearing aids in the opposite ear? Ear Hear. 2001;22:365-380.