Ear and Hearing 2024-08-09

A Scoping Review and Meta-Analysis of the Relations Between Cognition and Cochlear Implant Outcomes and the Effect of Quiet Versus Noise Testing Conditions

Amini, Andrew E.; Naples, James G.; Cortina, Luis; Hwa, Tiffany; Morcos, Mary; Castellanos, Irina; Moberly, Aaron C.

Publication date 01-07-2024


Objectives: Evidence continues to emerge of associations between cochlear implant (CI) outcomes and cognitive functions in postlingually deafened adults. While there are multiple factors that appear to affect these associations, the impact of speech recognition background testing conditions (i.e., in quiet versus noise) has not been systematically explored. The two aims of this study were to (1) identify associations between speech recognition following cochlear implantation and performance on cognitive tasks, and to (2) investigate the impact of speech testing in quiet versus noise on these associations. Ultimately, we want to understand the conditions that impact this complex relationship between CI outcomes and cognition.
Design: A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was performed on published literature evaluating the relation between outcomes of cochlear implantation and cognition. The current review evaluates 39 papers that reported associations between over 30 cognitive assessments and speech recognition tests in adult patients with CIs.
Six cognitive domains were evaluated: Global Cognition, Inhibition-Concentration, Memory and Learning, Controlled Fluency, Verbal Fluency, and Visuospatial Organization. Meta-analysis was conducted on three cognitive assessments among 12 studies to evaluate relations with speech recognition outcomes. Subgroup analyses were performed to identify whether speech recognition testing in quiet versus in background noise impacted its association with cognitive performance.
Results: Significant associations between cognition and speech recognition in a background of quiet or noise were found in 69% of studies. Tests of Global Cognition and Inhibition-Concentration skills resulted in the highest overall frequency of significant associations with speech recognition (45% and 57%, respectively). Despite the modest proportion of significant associations reported, pooling effect sizes across samples through meta-analysis revealed a moderate positive correlation between tests of Global Cognition (r = +0.37, p < 0.01) as well as Verbal Fluency (r = +0.44, p < 0.01) and postoperative speech recognition skills. Tests of Memory and Learning are most frequently utilized in the setting of CI (in 26 of 39 included studies), yet meta-analysis revealed nonsignificant associations with speech recognition performance in a background of quiet (r = +0.30, p = 0.18), and noise (r = −0.06, p = 0.78).
Conclusions: Background conditions of speech recognition testing may influence the relation between speech recognition outcomes and cognition. The magnitude of this effect of testing conditions on this relationship appears to vary depending on the cognitive construct being assessed. Overall, Global Cognition and Inhibition-Concentration skills are potentially useful in explaining speech recognition skills following cochlear implantation. Future work should continue to evaluate these relations to appropriately unify cognitive testing opportunities in the setting of cochlear implantation.
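The pooled correlations reported above come from combining per-study effect sizes. As a rough illustration of that step, the sketch below pools Pearson correlations with a fixed-effect inverse-variance weighting of Fisher z-transformed values; the study names, correlations, and sample sizes are placeholders, and the review's actual meta-analytic model may differ (for example, random effects).

```python
import math

# Hypothetical per-study correlations (r) and sample sizes (n); NOT values from the review.
studies = [("Study A", 0.42, 30), ("Study B", 0.31, 55), ("Study C", 0.40, 24)]

def pooled_correlation(studies):
    """Fixed-effect inverse-variance pooling of Pearson r via Fisher's z transform."""
    num, den = 0.0, 0.0
    for _, r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z transform of r
        w = n - 3                                # inverse-variance weight, since var(z) = 1/(n - 3)
        num += w * z
        den += w
    z_pooled = num / den
    se = math.sqrt(1.0 / den)
    # Back-transform the pooled z (and its 95% CI) to the correlation scale
    to_r = lambda z: (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)
    return to_r(z_pooled), (to_r(z_pooled - 1.96 * se), to_r(z_pooled + 1.96 * se))

r_pooled, ci = pooled_correlation(studies)
print(f"Pooled r = {r_pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```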


Infant Cervical Vestibular Evoked Myogenic Potentials: A Scoping Review

Bassett, Alaina M.; Suresh, Chandan

Publication date 05-08-2024


Objectives: Children diagnosed with hearing loss typically demonstrate increased rates of vestibular loss as compared with their peers with hearing within normal limits. Decreased vestibular function is linked with delays in gross motor development, acquisition of gross motor skills, and academic challenges. Timely development of sitting and walking gross motor skills aids the progress of environmental exploratory activities, which have been tied to cognitive, language, and vocabulary development. Considering the time-sensitive development of gross motor skills and cognitive, language, and vocabulary development, identifying vestibular loss in infancy can support early intervention. This scoping review analyzes stimulus, recording, and participant factors relevant to assessing cervical vestibular evoked myogenic potentials (cVEMPs) in the infant population.
Design: The scoping literature review was conducted on literature published between 2000 and 2023, focusing on articles assessing cVEMPs in infants. Two authors independently followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines for title and abstract screening, full-text review, data extraction, and quality assessments. Sixteen articles meeting the inclusion criteria were included in the analysis.
Results: The existing literature lacks consensus regarding stimulus and recording parameters for measuring infant cVEMPs. In addition, the review reveals a decrease in cVEMP response occurrence rates with increasing severity of hearing loss, with lower occurrence in infants with severe to profound hearing loss than in those with mild to moderate sensorineural hearing loss.
Conclusions: This scoping review demonstrates the increasing use of cVEMP as a reliable tool for objectively assessing infant vestibular function. The lack of consensus in stimulus and recording parameters emphasizes the need for systematic research to establish an evidence-based protocol for cVEMP measurements in infants. Such a protocol will ensure the reliable measurement of cVEMPs in infants and enhance the effectiveness of cVEMP as part of the infant vestibular test battery. In addition, there is a necessity for a comprehensive large-scale study to evaluate the practicality and feasibility of implementing vestibular screening protocols for infants diagnosed with sensorineural hearing loss in the United States.


Validation of the Chinese Version of the Speech, Spatial, and Qualities of Hearing Scale for Parents and Children

Fang, Te-Yung; Lin, Pei-Hsuan; Ko, Yu; Wu, Chen-Chi; Wang, Han; Liao, Wan-Cian; Wang, Pa-Chun

Publication date 04-06-2024


Objectives: To translate and validate the Chinese version of the Speech, Spatial, and Qualities of Hearing Scale (SSQ) for children with hearing impairment (C-SSQ-C) and for their parents (C-SSQ-P).
Design: We translated the SSQ for children into Chinese and verified its readability and comprehensibility. A total of 105 participants with moderate-to-profound hearing loss (HL) and 54 with normal hearing were enrolled in the validation process. The participants with HL were fitted with bilateral hearing aids, bimodal hearing, or bilateral cochlear implants. The C-SSQ-P was administered to the parents of participants aged 3 to 6.9 years, and the C-SSQ-C was administered to participants aged 7 to 18 years. The internal consistency, test-retest reliability, and validity were evaluated for both questionnaires.
Results: Both the C-SSQ-P and C-SSQ-C demonstrated high internal consistency (Cronbach’s α > 0.8) and good validity (generalized linear models revealed significant negative relationships between the C-SSQ-P subscales and the aided better-hearing threshold [β = −0.08 to −0.12, p ≤ 0.001] and between the C-SSQ-C subscales and the worse-hearing threshold [β = −0.13 to −0.14, p < 0.001]). Among the children with HL, participants with bilateral cochlear implants demonstrated better performance than those with bimodal hearing or bilateral hearing aids, as evidenced by the highest mean scores in the three subscales.
Conclusions: Both C-SSQ-P and C-SSQ-C are reliable and valid for assessing HL in children and adolescents. The C-SSQ-P is applicable in evaluating young children aged 3 to 6.9 years after a 7-day observation period, while the C-SSQ-C is appropriate for children and adolescents aged 7 to 18 years.
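The internal-consistency criterion cited above (Cronbach's α > 0.8) can be computed directly from an item-response matrix. The sketch below shows the standard formula applied to randomly generated placeholder responses, not to C-SSQ data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Placeholder data: 105 respondents x 10 items rated 0-10 (not actual C-SSQ responses)
rng = np.random.default_rng(0)
ability = rng.normal(5, 2, size=(105, 1))
responses = np.clip(ability + rng.normal(0, 1, size=(105, 10)), 0, 10)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```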


Trajectories of Hearing From Childhood to Adulthood

Leung, Joan H.; Thorne, Peter R.; Purdy, Suzanne C.; Cheyne, Kirsten; Steptoe, Barbara; Ambler, Antony; Hogan, Sean; Ramrakha, Sandhya; Caspi, Avshalom; Moffitt, Terrie E.; Poulton, Richie

Publication date 20-06-2024


Objectives: The Dunedin Multidisciplinary Health and Development Study provides a unique opportunity to document the progression of ear health and hearing ability within the same cohort of individuals from birth. This investigation draws on hearing data from 5 to 13 years and again at 45 years of age, to explore the associations between childhood hearing variables and hearing and listening ability at age 45.
Design: Multiple linear regression analyses were used to assess associations between childhood hearing (otological status and mid-frequency pure-tone average) and (a) age 45 peripheral hearing ability (mid-frequency pure-tone average and high-frequency pure-tone average), and (b) age 45 listening ability (listening in spatialized noise and subjective questionnaire on listening experiences). Sex, childhood socioeconomic status, and adult IQ were included in the model as covariates.
Results: Peripheral hearing and listening abilities at age 45 were consistently associated with childhood hearing acuity at mid-frequencies. Otological status was a moderate predictor of high-frequency hearing and of the utilization of spatial listening cues in adulthood.
Conclusions: We aim to use these findings to develop a foundational model of hearing trajectories. This will form the basis for identifying precursors, to be investigated in a subsequent series of analyses, that may protect against or exacerbate hearing-associated cognitive decline in the Dunedin Study cohort as they progress from mid-life to older age.
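A minimal sketch of the analysis framework described in the Design, with childhood predictors and covariates entered into a multiple linear regression; the variable names and simulated values below are illustrative placeholders, not Dunedin Study data or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data standing in for the cohort variables (not Dunedin data)
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "child_mid_freq_pta": rng.normal(10, 5, n),   # childhood mid-frequency PTA (dB HL)
    "otological_status": rng.integers(0, 2, n),   # 0 = healthy ears, 1 = history of otitis media
    "sex": rng.integers(0, 2, n),
    "child_ses": rng.normal(0, 1, n),
    "adult_iq": rng.normal(100, 15, n),
})
# Outcome: adult high-frequency PTA, loosely dependent on the childhood predictors
df["adult_hf_pta"] = (20 + 0.8 * df["child_mid_freq_pta"] + 4 * df["otological_status"]
                      + rng.normal(0, 8, n))

# Multiple linear regression with covariates, as in the Design section
model = smf.ols("adult_hf_pta ~ child_mid_freq_pta + otological_status + sex + child_ses + adult_iq",
                data=df).fit()
print(model.summary())
```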


Task-Specific Rapid Auditory Perceptual Learning in Adult Cochlear Implant Recipients: What Could It Mean for Speech Recognition

Khayr, Ranin; Khnifes, Riyad; Shpak, Talma; Banai, Karen

Publication date 29-05-2024


Objectives: Speech recognition in cochlear implant (CI) recipients is quite variable, particularly in challenging listening conditions. Demographic, audiological, and cognitive factors explain some, but not all, of this variance. The literature suggests that rapid auditory perceptual learning explains unique variance in speech recognition in listeners with normal hearing and those with hearing loss. The present study focuses on the early adaptation phase of task-specific rapid auditory perceptual learning. It investigates whether adult CI recipients exhibit this learning and, if so, whether it accounts for portions of the variance in their recognition of fast speech and speech in noise.
Design: Thirty-six adult CI recipients (ages = 35 to 77, M = 55) completed a battery of general speech recognition tests (sentences in speech-shaped noise, four-talker babble noise, and natural-fast speech), cognitive measures (vocabulary, working memory, attention, and verbal processing speed), and a rapid auditory perceptual learning task with time-compressed speech. Accuracy in the general speech recognition tasks was modeled with a series of generalized mixed models that accounted for demographic, audiological, and cognitive factors before accounting for the contribution of task-specific rapid auditory perceptual learning of time-compressed speech.
Results: Most CI recipients exhibited early task-specific rapid auditory perceptual learning of time-compressed speech within the course of the first 20 sentences. This early task-specific rapid auditory perceptual learning made a unique contribution to the recognition of natural-fast speech in quiet and speech in noise, although the contribution to natural-fast speech may reflect the rapid learning that occurred in this task. When accounting for demographic and cognitive characteristics, an increase of 1 SD in the early task-specific rapid auditory perceptual learning rate was associated with a ~52% increase in the odds of correctly recognizing natural-fast speech in quiet, and a ~19% to 28% increase in the odds of correctly recognizing the different types of speech in noise. Age, vocabulary, attention, and verbal processing speed also had unique contributions to general speech recognition. However, their contributions varied between the different general speech recognition tests.
Conclusions: Consistent with previous findings in other populations, in CI recipients, early task-specific rapid auditory perceptual learning also accounts for some of the individual differences in the recognition of speech in noise and natural-fast speech in quiet. Thus, across populations, the early rapid adaptation phase of task-specific rapid auditory perceptual learning might serve as a skill that supports speech recognition in various adverse conditions. In CI users, the ability to rapidly adapt to ongoing acoustical challenges may be one of the factors associated with good CI outcomes. Overall, CI recipients with higher cognitive resources and faster rapid learning rates had better speech recognition.
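The odds statements above follow from exponentiating standardized coefficients of a logistic (generalized mixed) model: exp(β) is the odds ratio per 1 SD, and 100·(exp(β) − 1) is the percent change in odds. A worked example with made-up coefficients, not the study's fitted values:

```python
import math

# Hypothetical standardized logistic-regression coefficients (log-odds per 1 SD increase);
# illustrative numbers only, not estimates from this study.
coefficients = {
    "rapid_learning_rate": math.log(1.52),  # exp(beta) = 1.52 -> ~52% higher odds of a correct response
    "age": -0.10,
    "vocabulary": 0.25,
}

for name, beta in coefficients.items():
    odds_ratio = math.exp(beta)
    pct_change = 100 * (odds_ratio - 1)
    print(f"{name:>20s}: OR per 1 SD = {odds_ratio:.2f} ({pct_change:+.0f}% change in odds)")
```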


Effects of Tympanic Membrane Electrodes on Sound Transmission From the Ear Canal to the Middle and Inner Ears

Hannon, Cailin; Lewis, James D.

Publication date 20-05-2024


Objectives: The first objective of the study was to compare approaches to eardrum electrode insertion as they relate to the likelihood of introducing an acoustic leak between the ear canal and eartip. A common method for placing a tympanic membrane electrode involves securing the electrode in the canal by routing it underneath a foam eartip. This method is hypothesized to result in a slit leak between the canal and foam tip due to the added bulk of the electrode wire. An alternative approach involves creating a bore in the wall of the foam tip that the electrode can be threaded through. This method is hypothesized to reduce the likelihood of a slit leak because the electrode wire is integrated into the foam tip. The second objective of the study was to investigate how sound transmission in the ear is affected by placing an electrode on the eardrum. It was hypothesized that an electrode in contact with the eardrum increases the eardrum’s mass, with the potential to reduce sound transmission at high frequencies.
Design: Wideband acoustic immittance and distortion product otoacoustic emissions (DPOAEs) were measured in eight human ears.
Measurements were completed for five different conditions: (1) baseline with no electrode in the canal, (2) dry electrode in the canal but not touching the eardrum, secured underneath the eartip, (3) dry electrode in the canal not touching the eardrum, secured through a bore in the eartip (subsequent conditions were completed using this method), (4) hydrated electrode in the canal but not touching the eardrum, and (5) hydrated electrode touching the eardrum. To create the bore, a technique was developed in which a needle is heated and pushed through the foam eartip. The electrode is then threaded through the bore and advanced slowly by hand until contacting the eardrum. Analysis included comparing absorbance, admittance phase angle, and DPOAE levels between measurement conditions.
Results: Comparison of the absorbance and admittance phase angle measurements between the electrode placement methods revealed significantly higher absorbance and lower admittance phase angle from 0.125 to 1 kHz when the electrode is routed under the eartip. Absorbance and admittance phase angle were minimally affected when the electrode was inserted through a bore in the eartip. DPOAE levels across the different conditions showed changes approximating test-retest variability. Upon contacting the eardrum, the absorbance tended to decrease below 1 kHz and increase above 1 kHz. However, changes were within the range of test-retest variability. There was evidence of reduced levels below 1 kHz and increased levels above 1 kHz upon the electrode contacting the eardrum. However, differences between conditions approximated test-retest variability.
Conclusions: Routing the eardrum electrode through the foam tip reduces the likelihood of incurring an acoustic leak between the canal walls and eartip, compared with routing the electrode under the eartip. Changes in absorbance and DPOAE levels resulting from electrode contact with the eardrum implicate potential stiffening of the eardrum; however, the magnitude of changes suggests minimal effect of the electrode on sound transmission in the ear.
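Wideband absorbance, the immittance quantity compared across conditions above, is conventionally computed from the pressure reflectance as A(f) = 1 − |R(f)|². A small sketch of that calculation on made-up reflectance values, not data from this study:

```python
import numpy as np

# Hypothetical complex pressure reflectance at a few frequencies (illustrative values only)
freqs_hz = np.array([250, 500, 1000, 2000, 4000])
reflectance = np.array([0.9 + 0.2j, 0.7 + 0.3j, 0.4 + 0.2j, 0.3 - 0.1j, 0.5 + 0.0j])

# Absorbance: the proportion of incident sound power absorbed by the middle ear
absorbance = 1 - np.abs(reflectance) ** 2

for f, a in zip(freqs_hz, absorbance):
    print(f"{f:5d} Hz: absorbance = {a:.2f}")
```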


Electrocochleography-Based Tonotopic Map: II. Frequency-to-Place Mismatch Impacts Speech-Perception Outcomes in Cochlear Implant Recipients

Walia, Amit; Shew, Matthew A.; Varghese, Jordan; Lefler, Shannon M.; Bhat, Amrita; Ortmann, Amanda J.; Herzog, Jacques A.; Buchman, Craig A.

Publication date 17-06-2024


Objectives: Modern cochlear implants (CIs) use varying-length electrode arrays inserted at varying insertion angles within variably sized cochleae. Thus, there exists an opportunity to enhance CI performance, particularly in postlinguistic adults, by optimizing the frequency-to-place allocation for electrical stimulation, thereby minimizing the need for central adaptation and plasticity. There has been interest in applying Greenwood or Stakhovskaya et al. function (describing the tonotopic map) to postoperative imaging of electrodes to improve frequency allocation and place coding. Acoustically-evoked electrocochleography (ECochG) allows for electrophysiologic best-frequency (BF) determination of CI electrodes and the potential for creating a personalized frequency allocation function. The objective of this study was to investigate the correlation between early speech-perception performance and frequency-to-place mismatch.
Design: This retrospective study included 50 patients who received a slim perimodiolar electrode array. Following electrode insertion, five acoustic pure-tone stimuli ranging from 0.25 to 2 kHz were presented, and electrophysiological measurements were collected across all 22 electrode contacts. Cochlear microphonic tuning curves were subsequently generated for each stimulus frequency to ascertain the BF electrode or the location corresponding to the maximum response amplitude. Subsequently, we calculated the difference between the stimulus frequency and the patient’s CI map’s actual frequency allocation at each BF electrode, reflecting the frequency-to-place mismatch. BF electrocochleography-total response (BF-ECochG-TR), a measure of cochlear health, was also evaluated for each subject to control for the known impact of this measure on performance.
Results: Our findings showed a moderate correlation (r = 0.51; 95% confidence interval: 0.23 to 0.76) between the cumulative frequency-to-place mismatch, as determined using the ECochG-derived BF map (utilizing 500, 1000, and 2000 Hz), and 3-month performance on consonant-nucleus-consonant words (N = 38). Larger positive mismatches, shifted basal from the BF map, led to enhanced speech perception. Incorporating BF-ECochG-TR, total mismatch, and their interaction in a multivariate model explained 62% of the variance in consonant-nucleus-consonant word scores at 3 months. BF-ECochG-TR as a standalone predictor tended to overestimate performance for subjects with larger negative total mismatches and underestimated the performance for those with larger positive total mismatches. Neither cochlear diameter, number of cochlear turns, nor apical insertion angle accounted for the variability in total mismatch.
Conclusions: Comparison of ECochG-BF derived tonotopic electrode maps to the frequency allocation tables reveals substantial mismatch, explaining 26.0% of the variability in CI performance in quiet. Closer examination of the mismatch shows that basally shifted maps at high frequencies demonstrate superior performance at 3 months compared with those with apically shifted maps (toward Greenwood and Stakhovskaya et al.). These results suggest that electrophysiological-based frequency reallocation might lead to enhanced speech-perception performance, especially when compared with conventional manufacturer maps or anatomic-based mapping strategies. Future research exploring the prospective use of ECochG-based mapping techniques for frequency allocation is underway.
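The abstract references the Greenwood tonotopic function and quantifies a frequency-to-place mismatch between measured best frequencies and the CI map's frequency allocation. The sketch below shows the standard human Greenwood map and one simple way to express a mismatch in octaves; the electrode frequencies and the sign convention are illustrative choices, not the study's values or exact definition.

```python
import math

def greenwood_frequency(x_from_apex):
    """Greenwood map for the human cochlea: characteristic frequency (Hz) at
    fractional distance x from the apex (0 = apex, 1 = base)."""
    return 165.4 * (10 ** (2.1 * x_from_apex) - 0.88)

# Greenwood frequencies at a few places along the cochlea, for reference
for x in (0.25, 0.50, 0.75):
    print(f"x = {x:.2f} from apex -> {greenwood_frequency(x):7.0f} Hz")

# Hypothetical electrode contact (placeholder values, not from this study)
ecochg_best_frequency_hz = 1000.0   # frequency producing the largest cochlear microphonic at this contact
map_allocated_frequency_hz = 700.0  # center frequency assigned to the same contact in the CI map

# One simple expression of frequency-to-place mismatch: the octave distance between the
# electrophysiological best frequency and the map allocation (sign convention is a choice here)
mismatch_octaves = math.log2(ecochg_best_frequency_hz / map_allocated_frequency_hz)
print(f"Frequency-to-place mismatch: {mismatch_octaves:+.2f} octaves")
```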


International Consensus Statements on Intraoperative Testing for Cochlear Implantation Surgery

Alzhrani, Farid; Aljazeeri, Isra; Abdelsamad, Yassin; Alsanosi, Abdulrahman; Kim, Ana H.; Ramos-Macias, Angel; Ramos-de-Miguel, Angel; Kurz, Anja; Lorens, Artur; Gantz, Bruce; Buchman, Craig A.; Távora-Vieira, Dayse; Sprinzl, Georg; Mertens, Griet; Saunders, James E.; Kosaner, Julie; Telmesani, Laila M.; Lassaletta, Luis; Bance, Manohar; Yousef, Medhat; Holcomb, Meredith A.; Adunka, Oliver; Thomasen, Per Cayé-; Skarzynski, Piotr H.; Rajeswaran, Ranjith; Briggs, Robert J.; Oh, Seung-Ha; Plontke, Stefan; O’Leary, Stephen J.; Agrawal, Sumit; Yamasoba, Tatsuya; Lenarz, Thomas; Wesarg, Thomas; Kutz, Walter; Connolly, Patrick; Anderson, Ilona; Hagr, Abdulrahman

Publication date 25-06-2024


Objectives: A wide variety of intraoperative tests are available in cochlear implantation. However, no consensus exists on which tests constitute the minimum necessary battery. We assembled an international panel of clinical experts to develop, refine, and vote upon a set of core consensus statements.
Design: A literature review was used to identify intraoperative tests currently used in the field and draft a set of provisional statements. For statement evaluation and refinement, we used a modified Delphi consensus panel structure. Multiple interactive rounds of voting, evaluation, and feedback were conducted to achieve convergence.
Results: Twenty-nine provisional statements were included in the original draft. In the first voting round, consensus was reached on 15 statements. Of the 14 statements that did not reach consensus, 12 were revised based on feedback provided by the expert practitioners, and 2 were eliminated. In the second voting round, 10 of the 12 revised statements reached a consensus. The two statements that did not achieve consensus were further revised and subjected to a third voting round. However, both statements failed to achieve consensus in the third round. In addition, during the final revision, one additional statement was deleted because it overlapped with another modified statement.
Conclusions: A final core set of 24 consensus statements was generated, covering wide areas of intraoperative testing during CI surgery. These statements may provide utility as evidence-based guidelines to improve quality and achieve uniformity of surgical practice.


Extended High-Frequency Thresholds: Associations With Demographic and Risk Factors, Cognitive Ability, and Hearing Outcomes in Middle-Aged and Older Adults

Helfer, Karen S.; Maldonado, Lizmarie; Matthews, Lois J.; Simpson, Annie N.; Dubno, Judy R.

Publication date 11-07-2024


Objectives: This study had two objectives: to examine associations between extended high-frequency (EHF) thresholds, demographic factors (age, sex, race/ethnicity), risk factors (cardiovascular, smoking, noise exposure, occupation), and cognitive abilities; and to determine variance explained by EHF thresholds for speech perception in noise, self-rated workload/effort, and self-reported hearing difficulties.
Design: This study was a retrospective analysis of a data set from the MUSC Longitudinal Cohort Study of Age-related Hearing Loss. Data from 347 middle-aged adults (45 to 64 years) and 694 older adults (≥ 65 years) were analyzed for this study. Speech perception was quantified using low-context Speech Perception In Noise (SPIN) sentences. Self-rated workload/effort was measured using the effort prompt from the National Aeronautics and Space Administration-Task Load Index. Self-reported hearing difficulty was assessed using the Hearing Handicap Inventory for the Elderly/Adults. The Wisconsin Card Sorting Task and the Stroop Neuropsychological Screening Test were used to assess selected cognitive abilities. Pure-tone averages representing conventional and EHF thresholds between 9 and 12 kHz (PTA(9-12 kHz)) were utilized in simple linear regression analyses to examine relationships between thresholds and demographic and risk factors or in linear regression models to assess the contributions of PTA(9-12 kHz) to the variance among the three outcomes of interest. Further analyses were performed on a subset of individuals with thresholds ≤ 25 dB HL at all conventional frequencies to control for the influence of hearing loss on the association between PTA(9-12 kHz) and outcome measures.
Results: PTA(9-12 kHz) was higher in males than females, and was higher in White participants than in racial Minority participants. Linear regression models showed the associations between cardiovascular risk factors and PTA(9-12 kHz) were not statistically significant. Older adults who reported a history of noise exposure had higher PTA(9-12 kHz) than those without a history, while associations between noise history and PTA(9-12 kHz) did not reach statistical significance for middle-aged participants. Linear models adjusting for age, sex, race and noise history showed that higher PTA(9-12 kHz) was associated with greater self-perceived hearing difficulty and poorer speech recognition scores in noise for both middle-aged and older participants. Workload/effort was significantly related to PTA(9-12 kHz) for middle-aged, but not older, participants, while cognitive task performance was correlated with PTA(9-12 kHz) only for older participants. In general, PTA(9-12 kHz) did not account for additional variance in outcome measures as compared to conventional pure-tone thresholds, with the exception of self-reported hearing difficulties in older participants. Linear models adjusting for age and accounting for subject-level correlations in the subset analyses revealed no association between PTA(9-12 kHz) and outcomes of interest.
Conclusions: EHF thresholds show age-, sex-, and race-related patterns of elevation that are similar to what is observed for conventional thresholds. The current results support the need for more research to determine the utility of adding EHF thresholds to routine audiometric assessment with middle-aged and older adults.
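A pure-tone average such as PTA(9-12 kHz) is simply the mean threshold across the audiometric frequencies in the band. The sketch below illustrates this on a made-up audiogram; the specific extended high frequencies included (here 9, 10, 11.2, and 12.5 kHz) and the thresholds shown are assumptions for illustration, not the study's definition or data.

```python
import numpy as np

def pure_tone_average(thresholds_db_hl, frequencies_hz, band):
    """Mean threshold (dB HL) across the audiometric frequencies falling inside `band` (Hz)."""
    thresholds = np.asarray(thresholds_db_hl, dtype=float)
    freqs = np.asarray(frequencies_hz, dtype=float)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return thresholds[in_band].mean()

# Hypothetical audiogram (placeholder thresholds, not study data)
freqs = [500, 1000, 2000, 4000, 8000, 9000, 10000, 11200, 12500]
thresholds = [10, 10, 15, 25, 30, 35, 40, 45, 50]

conventional_pta = pure_tone_average(thresholds, freqs, band=(500, 4000))
ehf_pta = pure_tone_average(thresholds, freqs, band=(9000, 12500))
print(f"Conventional PTA (0.5-4 kHz): {conventional_pta:.1f} dB HL")
print(f"EHF PTA (9-12.5 kHz):        {ehf_pta:.1f} dB HL")
```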


The Optimal Speech-to-Background Ratio for Balancing Speech Recognition With Environmental Sound Recognition

Johnson, Eric M.; Healy, Eric W.

Publication date 31-05-2024


Objectives: This study aimed to determine the speech-to-background ratios (SBRs) at which normal-hearing (NH) and hearing-impaired (HI) listeners can recognize both speech and environmental sounds when the two types of signals are mixed. Also examined were the effect of individual sounds on speech recognition and environmental sound recognition (ESR), and the impact of divided versus selective attention on these tasks.
Design: In Experiment 1 (divided attention), 11 NH and 10 HI listeners heard sentences mixed with environmental sounds at various SBRs and performed speech recognition and ESR tasks concurrently in each trial. In Experiment 2 (selective attention), 20 NH listeners performed these tasks in separate trials. Psychometric functions were generated for each task, listener group, and environmental sound. The range over which speech recognition and ESR were both high was determined, as was the optimal SBR for balancing recognition with ESR, defined as the point of intersection between each pair of normalized psychometric functions.
Results: The NH listeners achieved greater than 95% accuracy on concurrent speech recognition and ESR over an SBR range of approximately 20 dB or greater. The optimal SBR for maximizing both speech recognition and ESR for NH listeners was approximately +12 dB. For the HI listeners, the range over which 95% performance was observed on both tasks was far smaller (span of 1 dB), with an optimal value of +5 dB. Acoustic analyses indicated that the speech and environmental sound stimuli were similarly audible, regardless of the hearing status of the listener, but that the speech fluctuated more than the environmental sounds. Divided versus selective attention conditions produced differences in performance that were statistically significant yet only modest in magnitude. In all conditions and for both listener groups, recognition was higher for environmental sounds than for speech when presented at equal intensities (i.e., 0 dB SBR), indicating that the environmental sounds were more effective maskers of speech than the converse. Each of the 25 environmental sounds used in this study (with one exception) had a span of SBRs over which speech recognition and ESR were both higher than 95%. These ranges tended to overlap substantially.
Conclusions: A range of SBRs exists over which speech and environmental sounds can be simultaneously recognized with high accuracy by NH and HI listeners, but this range is larger for NH listeners. The single optimal SBR for jointly maximizing speech recognition and ESR also differs between NH and HI listeners. The greater masking effectiveness of the environmental sounds relative to the speech may be related to the lower degree of fluctuation present in the environmental sounds as well as possibly task differences between speech recognition and ESR (open versus closed set). The observed differences between the NH and HI results may possibly be related to the HI listeners’ smaller fluctuating masker benefit. As noise-reduction systems become increasingly effective, the current results could potentially guide the design of future systems that provide listeners with highly intelligible speech without depriving them of access to important environmental sounds.
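The optimal SBR defined above, the intersection of the normalized psychometric functions for speech recognition and ESR, can be found numerically once each function is fit. The sketch below fits logistic functions to made-up proportion-correct data and solves for the crossing point; the data and the logistic form are illustrative assumptions, not the study's fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def logistic(x, x0, k):
    """Logistic psychometric function ranging from 0 to 1."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Made-up proportion-correct data (not study data): speech recognition rises with SBR,
# environmental sound recognition (ESR) falls as the environmental sound becomes weaker.
sbr = np.array([-15, -10, -5, 0, 5, 10, 15, 20], dtype=float)
speech_pc = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97, 0.99, 1.00])
esr_pc = np.array([1.00, 0.99, 0.97, 0.92, 0.80, 0.55, 0.30, 0.10])

(speech_x0, speech_k), _ = curve_fit(logistic, sbr, speech_pc, p0=[0, 0.3])
(esr_x0, esr_k), _ = curve_fit(logistic, sbr, esr_pc, p0=[10, -0.3])

# Optimal SBR: where the two fitted (0-1 normalized) functions intersect
difference = lambda x: logistic(x, speech_x0, speech_k) - logistic(x, esr_x0, esr_k)
optimal_sbr = brentq(difference, -20, 25)
print(f"Optimal SBR (intersection of the two psychometric functions): {optimal_sbr:.1f} dB")
```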


Listening Effort Measured With Pupillometry in Cochlear Implant Users Depends on Sound Level, But Not on the Signal to Noise Ratio When Using the Matrix Test

Stronks, Hendrik Christiaan; Tops, Annemijn Laura; Quach, Kwong Wing; Briaire, Jeroen Johannes; Frijns, Johan Hubertus Maria

Publication date 18-06-2024


Objectives: We investigated whether listening effort is dependent on task difficulty for cochlear implant (CI) users when using the Matrix speech-in-noise test. To this end, we measured peak pupil dilation (PPD) at a wide range of signal to noise ratios (SNR) by systematically changing the noise level at a constant speech level, and vice versa.
Design: A group of mostly elderly CI users performed the Dutch/Flemish Matrix test in quiet and in multitalker babble at different SNRs. SNRs were set relative to the speech-recognition threshold (SRT), namely at SRT, and 5 and 10 dB above SRT (0 dB, +5 dB, and +10 dB re SRT). The latter 2 conditions were obtained by either varying speech level (at a fixed noise level of 60 dBA) or by varying noise level (with a fixed speech level). We compared these PPDs with those of a group of typical hearing (TH) listeners. In addition, listening effort was assessed with subjective ratings on a Likert scale.
Results: PPD for the CI group did not significantly depend on SNR, whereas SNR significantly affected PPDs for TH listeners. Subjective effort ratings depended significantly on SNR for both groups. For CI users, PPDs were significantly larger and effort was rated higher when the speech level was varied and the noise level was fixed. By contrast, for TH listeners, effort ratings were significantly higher and performance scores lower when the noise level was varied and the speech level was fixed.
Conclusions: The lack of a significant effect of varying SNR on PPD suggests that the Matrix test may not be a feasible speech test for measuring listening effort with pupillometric measures for CI users. A rating test appeared more promising in this population, corroborating earlier reports that subjective measures may reflect different dimensions of listening effort than pupil dilation. Establishing the SNR by varying speech or noise level can have subtle, but significant effects on measures of listening effort, and these effects can differ between TH listeners and CI users.
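Peak pupil dilation is typically computed as the maximum of a baseline-corrected pupil trace within an analysis window. A minimal sketch on a synthetic trace follows; the sampling rate, baseline window, analysis window, and the trace itself are assumptions for illustration, not the study's processing pipeline.

```python
import numpy as np

fs = 60.0                                   # sampling rate in Hz (assumed)
t = np.arange(-1.0, 4.0, 1.0 / fs)          # time relative to sentence onset (s)

# Synthetic pupil trace (mm): slow dilation peaking ~1.5 s after onset, plus noise
rng = np.random.default_rng(2)
trace = 4.0 + 0.25 * np.exp(-((t - 1.5) ** 2) / 0.8) * (t > 0) + rng.normal(0, 0.01, t.size)

# Baseline correction: subtract the mean pupil size in the 1 s before sentence onset
baseline = trace[(t >= -1.0) & (t < 0.0)].mean()
corrected = trace - baseline

# Peak pupil dilation (PPD) within the analysis window (here 0-3 s after onset, assumed)
window = (t >= 0.0) & (t <= 3.0)
ppd = corrected[window].max()
print(f"Baseline = {baseline:.2f} mm, peak pupil dilation = {ppd:.2f} mm")
```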


Quick Estimation of Minimum Hearing Levels Using a Binaural Multifrequency Stimulus Paradigm: Proof of Concept

Gargeshwari, Aditi; Krishnan, Ananthanarayan; Delgado, Rafael E.

Publication date 03-06-2024


Objectives: Objective estimation of minimum hearing levels using auditory brainstem responses (ABRs) elicited by single-frequency tone-bursts presented monaurally is currently considered the gold standard. However, the data acquisition time to estimate thresholds (for both ears across four audiometric frequencies) using this method usually exceeds the sleep time (ranging between 35 and 49 minutes) in infants below 4 months, thus providing incomplete information about hearing status, which in turn delays timely clinical intervention. Alternative approaches using faster stimulation rates or tone-burst trains have not been readily accepted due to additional hardware and software requirements. We propose here a novel binaural multifrequency stimulation paradigm wherein several stimuli of different frequencies are presented binaurally in an interleaved manner. The rationale here is that the proposed paradigm will increase acquisition efficiency, significantly reduce test time, and improve accuracy by incorporating an automatic wave V detection algorithm. It is important to note that this paradigm can be easily implemented in most commercial ABR systems currently used by most clinicians.
Design: Using this binaural multifrequency paradigm, ear-specific ABRs were recorded in 30 normal-hearing young adults to both tone-bursts and narrow-band (NB) iChirps at 500, 1000, 2000, and 4000 Hz. Comparison of ABRs elicited by tone-bursts and narrow-band chirps allowed us to determine if NB iChirps elicited a more robust wave V component compared with the tone-bursts. ABR data were characterized by measures of minimum hearing levels; wave V amplitude; and response detectability for two electrode configurations (high forehead-C7; and high forehead-linked mastoids).
Results: Consistent with the research literature, wave V response amplitudes were relatively more robust for NB iChirp stimuli compared with tone-burst stimuli. The easier identification and better detectability of wave V for the NB iChirps at lower stimulus levels contributed to their better thresholds compared with tone-burst elicited responses. It is important to note that binaural multifrequency hearing levels close to minimum hearing levels were determined in approximately 22 minutes using this paradigm, appreciably quicker than the 45 to 60 minutes or longer required for threshold determination using the conventional single-frequency method.
Conclusions: Our novel and simple paradigm using either NB iChirps or tone-bursts provides a reliable method to rapidly estimate the minimum hearing levels across audiometric frequencies for both ears. Incorporation of an automatic wave V detection algorithm increases objectivity, further reduces test time, and facilitates early hearing identification and intervention.


Subjective Speech Intelligibility Drives Noise-Tolerance Domain Use During the Tracking of Noise-Tolerance Test

Kuk, Francis; Slugocki, Christopher; Korhonen, Petri

Publication date 17-06-2024


Objectives: Recently, the Noise-Tolerance Domains Test (NTDT) was applied to study the noise-tolerance domains used by young normal-hearing (NH) listeners during noise acceptance decisions. In this study, we examined how subjective speech intelligibility may drive noise acceptance decisions by applying the NTDT to NH and hearing-impaired (HI) listeners at signal to noise ratios (SNRs) around the Tracking of Noise-Tolerance (TNT) thresholds.
Design: A single-blind, within-subjects design with 22 NH and 17 HI older adults was followed. Listeners completed the TNT to determine the average noise acceptance threshold (TNTAve). Then, listeners completed the NTDT at the SNRs of 0, ±3 dB (re: TNTAve) to estimate the weighted noise-tolerance domain ratings (WNTDRs) for each domain criterion. Listeners also completed the Objective and Subjective Intelligibility Difference (OSID) Test to establish the individual intelligibility performance-intensity (P-I) functions of the TNT materials. All test measures were conducted at 75 and 82 dB SPL speech input levels. NH and HI listeners were tested in the unaided mode. The HI listeners were also tested using a study hearing aid. The WNTDRs were plotted against subjective speech intelligibilities extrapolated from individual P-I of the OSID at the SNRs corresponding to NTDT test conditions. Listeners were grouped according to their most heavily weighed domain and a regression analysis was performed against listener demographics as well as TNT and OSID performances to determine which variable(s) affected listener grouping.
Results: Three linear mixed effects (LMEs) models were used to examine whether WNTDRs changed with subjective speech intelligibility. All three LMEs found significant fixed effects of domain criteria, subjective intelligibility, and speech input level on WNTDRs. In general, heavier weights were assigned to speech interference and loudness domains at poorer intelligibility levels (80%). The comparison between NH and HI-unaided showed that NH listeners assigned greater weights to loudness than the HI-unaided listeners. The comparison between NH and HI-aided groups showed similar weights between groups. The comparison between HI-unaided and HI-aided found that HI listeners assigned lower weights to speech interference and greater weights to loudness when tested in aided compared with unaided modes. In all comparisons, loudness was weighed heavier at the 82 dB SPL input level than at the 75 dB SPL input level with greater weights to annoyance in the NH versus HI-unaided comparison and lower weights to distraction in the HI-aided versus HI-unaided comparison. A generalized linear model determined that listener grouping was best accounted for by subjective speech intelligibility estimated at TNTAve.
Conclusions: The domain criteria used by listeners were driven by their subjective speech intelligibility regardless of their hearing status (i.e., NH versus HI). In general, when subjective intelligibility was poor, the domains of speech interference and loudness were weighed the heaviest. As subjective intelligibility improved, the weightings on annoyance and distraction increased. Furthermore, a listener’s criterion for >90% subjective speech understanding at the TNTAve may allow one to profile the listener.
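A minimal sketch of the kind of linear mixed-effects model described above, with fixed effects of domain criterion, subjective intelligibility, and input level and a random intercept per listener; the simulated data, variable names, and exact model specification are placeholders rather than the study's dataset or fitted models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data: one weighted domain rating (WNTDR) per listener, domain,
# intelligibility level, and speech input level (not the study's data)
rng = np.random.default_rng(3)
listeners = [f"S{i:02d}" for i in range(20)]
domains = ["speech_interference", "loudness", "annoyance", "distraction"]
rows = []
for subj in listeners:
    subj_offset = rng.normal(0, 5)                     # listener-specific random intercept
    for domain in domains:
        for level in (75, 82):
            for intelligibility in (50, 70, 90):
                rating = (30 + subj_offset
                          + (10 if domain == "speech_interference" else 0)
                          - 0.2 * intelligibility
                          + 0.5 * (level - 75)
                          + rng.normal(0, 3))
                rows.append((subj, domain, level, intelligibility, rating))
df = pd.DataFrame(rows, columns=["listener", "domain", "level", "intelligibility", "wntdr"])

# Linear mixed-effects model: fixed effects plus a random intercept for listener
model = smf.mixedlm("wntdr ~ C(domain) + intelligibility + C(level)", df, groups=df["listener"])
result = model.fit()
print(result.summary())
```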


Barriers to Meeting National Early Hearing Detection and Intervention Guidelines in a Diverse Patient Cohort

Jaradeh, Katrin; Liao, Elizabeth N.; Lindeborg, Michael; Chan, Dylan K.; Weinstein, Jacqueline E.

Publication date 20-06-2024


Objectives: To determine our audiology clinic’s status in meeting the Joint Committee on Infant Hearing recommended 1-3-6 benchmarks for identification and intervention for congenital sensorineural hearing loss and to identify factors contributing to delays in identification and intervention.
Design: This is a retrospective case series. Children with sensorineural hearing loss who underwent auditory brainstem response (ABR) testing, hearing aid evaluation, or cochlear implant mapping at our tertiary pediatric medical center between January 2018 and December 2021 were included. Simple and multiple linear regression analyses were used to identify social, demographic, and health factors associated with primary outcomes, defined as age at hearing loss identification, age at intervention (here defined as amplification start), and interval between identification and intervention.
Results: Of 132 patients included, mean age was 2.4 years, 48% were male, and 51% were Hispanic. Hispanic ethnicity (p = 0.005, p = 0.04, respectively) and insurance type (p = 0.02, p = 0.001, respectively) were each significantly associated with later age at identification and intervention. In multivariable analyses, Hispanic ethnicity was significantly associated with delays in both identification and intervention (p = 0.03 and p = 0.03, respectively), and public insurance was associated with delays in intervention (p = 0.01). In addition, the total number of ABRs was significantly associated with older age at both identification and intervention (p < 0.001, p < 0.001, respectively). Mediator analysis demonstrated that the effect of ethnicity on age at identification is mediated by the total number of ABRs performed.
Conclusions: There is a significant association between the total number of ABRs and age at identification and intervention for children with hearing loss. Hispanic ethnicity was associated with delays in meeting milestones, an effect mediated in part by the number of ABRs, providing a potential avenue for intervention in addressing this disparity.
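The mediator analysis mentioned above follows the usual product-of-coefficients logic: regress the mediator on the predictor, regress the outcome on both, and take the indirect effect as a·b. The sketch below illustrates that logic on simulated placeholder data; it is not the study's model, and a full analysis would include covariates and bootstrapped confidence intervals.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data (not patient data)
rng = np.random.default_rng(4)
n = 200
hispanic = rng.integers(0, 2, n)                                     # predictor X
n_abrs = 1 + rng.poisson(1 + 0.8 * hispanic)                         # mediator M
age_id = 6 + 4.0 * n_abrs + 1.0 * hispanic + rng.normal(0, 3, n)     # outcome Y (age in months)
df = pd.DataFrame({"hispanic": hispanic, "n_abrs": n_abrs, "age_id": age_id})

a = smf.ols("n_abrs ~ hispanic", df).fit().params["hispanic"]        # path a: X -> M
model_y = smf.ols("age_id ~ hispanic + n_abrs", df).fit()
b = model_y.params["n_abrs"]                                         # path b: M -> Y, controlling X
direct = model_y.params["hispanic"]                                  # direct effect of X on Y
indirect = a * b                                                     # mediated (indirect) effect

print(f"a (X->M) = {a:.2f}, b (M->Y|X) = {b:.2f}")
print(f"Indirect effect = {indirect:.2f} months, direct effect = {direct:.2f} months")
```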


Effectiveness of the HEAR-Aware App for Adults Not Ready for Hearing Aids, but Open to Self-Management Support: Results of a Randomized Controlled Trial

Feenstra-Kikken, Vanessa; Van de Ven, Sjors; Lissenberg-Witte, Birgit I.; Pronk, Marieke; Smits, Cas; Timmer, Barbra H. B.; Polleunis, C.; Besser, Jana; Kramer, Sophia E.

Publication date 04-06-2024


Introduction: Recently, the HEAR-aware app was developed to support adults who are eligible for hearing aids (HAs) but not yet ready to use them. The app serves as a self-management tool, offering assistance for a range of target behaviors (TBs), such as communication strategies and emotional coping. Using ecological momentary assessment and intervention, the app prompts users to complete brief surveys regarding challenging listening situations they encounter in their daily lives (ecological momentary assessment). In response, users receive educational content in the form of “snippets” (videos, texts, web links) on the TBs, some of which are customized based on the reported acoustic environmental characteristics (ecological momentary intervention). The primary objective of this study was to assess the effectiveness of the HEAR-aware app in enhancing readiness to take action on various TBs and evaluate its impact on secondary outcomes. The secondary objective was to examine the app’s usability, usefulness, and user satisfaction.
Methods: A randomized controlled trial design with two arms was used. Participants with hearing loss aged 50 years and over were recruited via an HA retailer and randomly assigned to the intervention group (n = 42, mean age = 65 years, SD = 9.1) or the control group (n = 45, mean age = 68 years, SD = 8.7). The intervention group used the app for 4 weeks. The control group received no intervention. All participants completed online questionnaires at baseline (T0), after 4 weeks (T1), and again 4 weeks later (T2). Participants’ readiness to take action on five TBs was measured with The Line Composite. A list of secondary outcomes was used. Intention-to-treat analyses were performed using linear mixed-effects models including group (intervention/control), time (T0/T1/T2), and Group × Time interactions. In addition, a per-protocol analysis was carried out to explore whether effects depended on app usage. For the secondary aim, the System Usability Scale (SUS), the Intrinsic Motivation Inventory, item 4 of the International Outcome Inventory-Alternative Intervention (IOI-AI), and a recommendation item were used (intervention group only, at T1).
Results: For objective 1, there was no significant group difference for The Line Composite over the course of T0, T1, and T2. However, a significant (p = 0.033) Group × Time interaction was found for The Line Emotional coping, with a greater increase in readiness to take action on emotional coping in the intervention group than in the control group. The intention-to-treat analyses revealed no other significant group differences, but the per-protocol analyses showed that participants in the intervention group were significantly more ready to take up Assistive Listening Devices (The Line Assistive Listening Devices) and less ready to take up HAs (Staging Algorithm HAs) than the control group (p = 0.049). Results for objective 2 showed that, on average, participants rated the app as moderately useful (mean Intrinsic Motivation Inventory score 5 out of 7) and its usability as “marginal” (mean SUS score 68 out of 100), with about half of the participants rating the app as “good” (SUS score >70) and a minority rating it as “unacceptable” (SUS score ≤50).
Conclusions: This study underscores the potential of self-management support tools like the HEAR-aware app in the rehabilitation of adults with hearing loss who are not yet ready for HAs. The range in usability scores suggests that it may not be a suitable intervention for everyone.
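The SUS scores cited above follow the standard scoring rule: odd-numbered items contribute (response − 1) points, even-numbered items contribute (5 − response) points, and the sum is multiplied by 2.5 to give a 0 to 100 score. A small sketch with made-up responses, not trial data:

```python
def sus_score(responses):
    """Standard SUS score (0-100) from ten 1-5 Likert responses (item 1 first)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9 (positively worded)
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10 (negatively worded)
    return 2.5 * (odd + even)

# Made-up example response set (not data from the trial)
example = [4, 2, 4, 2, 4, 2, 5, 1, 4, 2]
print(f"SUS = {sus_score(example):.1f}")   # 2.5 * (16 + 16) = 80.0
```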


The Lifelines Cohort Study: Prevalence of Tinnitus Associated Suffering and Behavioral Outcomes in Children and Adolescents

Meijers, Sebastiaan M.; de Ruijter, Jessica H. J.; Stokroos, Robert J.; Smit, Adriana L.; Stegeman, Inge

Publication date 10-07-2024


Objectives: Tinnitus in children and adolescents is relatively unexplored territory. The available literature is limited and the reported prevalence of tinnitus suffering varies widely due to the absence of a definition for pediatric tinnitus. The impact on daily life seems to be lower than in the adult population. It is unclear if children who suffer from tinnitus, like adults, also experience psychological distress like anxiety or depressive symptoms. A better understanding of tinnitus in children and its impact on daily life could provide more insight into the actual size of the problem and could give direction for future studies to investigate the cause of progression of tinnitus.
Design: A cross-sectional study was performed using the Dutch Lifelines population-based cohort of people living in the north of the Netherlands. A total of 4964 children (4 to 12 years of age) and 2506 adolescents (13 to 17 years of age) were included. The presence of tinnitus suffering and behavioral outcomes were assessed with a single-item question and the Child Behavioral Checklist or the Youth Self Report questionnaire respectively. The associations of behavioral outcomes and tinnitus suffering were analyzed using univariate binary regressions.
Results: The prevalence of tinnitus suffering was 3.3% in children and 12.8% in adolescents. Additionally, 0.3% of the children and 1.9% of the adolescents suffered a lot or extremely from their tinnitus. Externalizing and internalizing problems were associated with tinnitus in adolescents. Internalizing problems were associated with tinnitus in children.
Conclusions: The prevalence of tinnitus suffering in this sample of the general population is comparable to that in other population-based studies. A low percentage of children (0.3%) and adolescents (1.9%) suffered a lot or extremely from their tinnitus. Tinnitus suffering is associated with all behavioral outcome subscales in adolescents and with internalizing problems in children, although the effect sizes were very small. Future research should focus on achieving a consensus on the definition of pediatric tinnitus and on the development of a validated outcome measure.


The Effort of Repairing a Misperceived Word Can Impair Perception of Following Words, Especially for Listeners With Cochlear Implants

Winn, Matthew B.

Publication date 18-06-2024


Objectives: In clinical and laboratory settings, speech recognition is typically assessed in a way that cannot distinguish accurate auditory perception from misperception that was mentally repaired or inferred from context. Previous work showed that the process of repairing misperceptions elicits greater listening effort, and that this elevated effort lingers well after the sentence is heard. That result suggests that cognitive repair strategies might appear successful when testing a single utterance but fail for everyday continuous conversational speech. The present study tested the hypothesis that the effort of repairing misperceptions has the consequence of carrying over to interfere with perception of later words after the sentence.
Design: Stimuli were open-set coherent sentences that were presented intact or with a word early in the sentence replaced with noise, forcing the listener to use later context to mentally repair the missing word. Sentences were immediately followed by digit triplets, which served to probe carryover effort from the sentence. Control conditions allowed for the comparison to intact sentences that did not demand mental repair, as well as to listening conditions that removed the need to attend to the post-sentence stimuli, or removed the post-sentence digits altogether. Intelligibility scores for the sentences and digits were accompanied by time-series measurements of pupil dilation to assess cognitive load during the task, as well as subjective rating of effort. Participants included adults with cochlear implants (CIs), as well as an age-matched group and a younger group of listeners with typical hearing for comparison.
Results: For the CI group, needing to repair a missing word during a sentence resulted in more errors on the digits after the sentence, especially when the repair process did not result in a coherent sensible perception. Sentences that needed repair also contained more errors on the words that were unmasked. All groups showed substantial increase of pupil dilation when sentences required repair, even when the repair was successful. Younger typical hearing listeners showed clear differences in moment-to-moment allocation of effort in the different conditions, while the other groups did not.
Conclusions: For CI listeners, the effort of needing to repair misperceptions in a sentence can last long enough to interfere with words that follow the sentence. This pattern could pose a serious problem for regular communication but would go overlooked in typical testing with single utterances, where a listener has a chance to repair misperceptions before responding. Carryover effort was not predictable by basic intelligibility scores, but can be revealed in behavioral data when sentences are followed immediately by extra probe words such as digits.


Long-Term Outcomes of Cochlear Implantation in Usher Syndrome

Fehrmann, Mirthe L. A.; Lanting, Cris P.; Haer-Wigman, Lonneke; Yntema, Helger G.; Mylanus, Emmanuel A. M.; Huinck, Wendy J.; Pennings, Ronald J. E.

Publication date 11-07-2024


Objectives: Usher syndrome (USH), characterized by bilateral sensorineural hearing loss (SNHL) and retinitis pigmentosa (RP), prompts increased reliance on hearing due to progressive visual deterioration.
It can be categorized into three subtypes: USH type 1 (USH1), characterized by severe to profound congenital SNHL, childhood-onset RP, and vestibular areflexia; USH type 2 (USH2), presenting with moderate to severe progressive SNHL and RP onset in the second decade, with or without vestibular dysfunction; and USH type 3 (USH3), featuring variable progressive SNHL beginning in childhood, variable RP onset, and diverse vestibular function. Previous studies evaluating cochlear implant (CI) outcomes in individuals with USH used varying or short follow-up durations, while others did not evaluate outcomes for each subtype separately. This study evaluates long-term CI performance in subjects with USH, at both short-term and long-term, considering each subtype separately.
Design: This retrospective, observational cohort study identified 36 CI recipients (53 ears) who were categorized into four different groups: early-implanted USH1 (first CI at ≤7 years of age), late-implanted USH1 (first CI at ≥8 years of age), USH2 and USH3. Phoneme scores at 65 dB SPL with CI were evaluated at 1 year, ≥2 years (mid-term), and ≥5 years postimplantation (long-term). Each subtype was analyzed separately due to the significant variability in phenotype observed among the three subtypes.
Results: Early-implanted USH1 subjects (N = 23 ears) achieved excellent long-term phoneme scores (100%; interquartile range [IQR] = 95 to 100), with younger age at implantation significantly correlating with better CI outcomes. Simultaneously implanted subjects had significantly better outcomes than sequentially implanted subjects (p = 0.028). Late-implanted USH1 subjects (N = 3 ears) used CI solely for sound detection and showed a mean phoneme discrimination score of 12% (IQR = 0 to 12), while still expressing satisfaction with ambient sound detection. In the USH2 group (N = 23 ears), a long-term mean phoneme score of 85% (IQR = 81 to 95) was found. Better outcomes were associated with younger age at implantation and higher preimplantation speech perception scores. USH3 subjects (N = 7 ears) achieved a mean postimplantation phoneme score of 71% (IQR = 45 to 91).
Conclusions: This study is currently one of the largest and most comprehensive studies evaluating CI outcomes in individuals with USH, demonstrating that overall, individuals with USH benefit from CI at both short- and long-term follow-up. Due to the considerable variability in phenotype observed among the three subtypes, each subtype was analyzed separately, resulting in smaller sample sizes. For USH1 subjects, optimal CI outcomes are expected with early simultaneous bilateral implantation. Late implantation in USH1 provides signaling function, but achieved speech recognition is insufficient for oral communication. In USH2 and USH3, favorable CI outcomes are expected, especially if individuals exhibit sufficient speech recognition with hearing aids and receive ample auditory stimulation preimplantation. Early implantation is recommended for USH2, given the progressive nature of hearing loss and concomitant severe visual impairment. In comparison with USH2, predicting outcomes in USH3 remains challenging due to the variability found. Counseling for USH2 and USH3 should highlight early implantation benefits and encourage hearing aid use.


Chronic Electro-Acoustic Stimulation May Interfere With Electric Threshold Recovery After Cochlear Implantation in the Aged Guinea Pig

Reiss, Lina A. J.; Lawrence, Melissa B.; Omelchenko, Irina A.; He, Wenxuan; Kirk, Jonathon R.

Publication date 12-07-2024


Objectives: Electro-acoustic stimulation (EAS) combines electric stimulation via a cochlear implant (CI) with residual low-frequency acoustic hearing, with benefits for music appreciation and speech perception in noise. However, many EAS CI users lose residual acoustic hearing, reducing this benefit. The main objectives of this study were to determine whether chronic EAS leads to more hearing loss compared with CI surgery alone in an aged guinea pig model, and to assess the relationship of any hearing loss to histology measures. Conversely, it is also important to understand factors impacting efficacy of electric stimulation. If one contributor to CI-induced hearing loss is damage to the auditory nerve, both acoustic and electric thresholds will be affected. Excitotoxicity from EAS may also affect electric thresholds, while electric stimulation is osteogenic and may increase electrode impedances. Hence, secondary objectives were to assess how electric thresholds are related to the amount of residual hearing loss after CI surgery, and how EAS affects electric thresholds and impedances over time.
Design: Two groups of guinea pigs, aged 9 to 21 months, were implanted with a CI in the left ear. Preoperatively, the animals had a range of hearing losses, as expected for an aged cohort. At 4 weeks after surgery, the EAS group (n = 5) received chronic EAS for 8 hours a day, 5 days a week, for 20 weeks via a tether system that allowed for free movement during stimulation. The nonstimulated group (NS; n = 6) received no EAS over the same timeframe. Auditory brainstem responses (ABRs) and electrically evoked ABRs (EABRs) were recorded at 3 to 4 week intervals to assess changes in acoustic and electric thresholds over time. At 24 weeks after surgery, cochlear tissue was harvested for histological evaluation, only analyzing animals without electrode extrusions (n = 4 per ear).
Results: Cochlear implantation led to an immediate worsening of ABR thresholds peaking between 3 and 5 weeks after surgery and then recovering and stabilizing by 5 and 8 weeks. Significantly greater ABR threshold shifts were seen in the implanted ears compared with contralateral, non-implanted control ears after surgery. After EAS and termination, no significant additional ABR threshold shifts were seen in the EAS group compared with the NS group. A surprising finding was that NS animals had significantly greater recovery in EABR thresholds over time, with decreases (improvements) of −51.8 ± 33.0 and −39.0 ± 37.3 c.u. at 12 and 24 weeks, respectively, compared with EAS animals with EABR threshold increases (worsening) of +1.0 ± 25.6 and 12.8 ± 44.3 c.u. at 12 and 24 weeks. Impedance changes over time did not differ significantly between groups. After exclusion of cases with electrode extrusion or significant trauma, no significant correlations were seen between ABR and EABR thresholds, or between ABR thresholds with histology measures of inner/outer hair cell counts, synaptic ribbon counts, stria vascularis capillary diameters, or spiral ganglion cell density.
Conclusions: The findings do not indicate that EAS significantly disrupts acoustic hearing, although the small sample size limits this interpretation. When surgical trauma was minimized, no evidence was seen of associations of hair cell counts, synaptic ribbon counts, spiral ganglion cell density, or stria vascularis measures with hearing loss after cochlear implantation. In cases of major trauma, both acoustic and electric thresholds were elevated, which may explain why CI-only outcomes are often better when trauma and hearing loss are minimized. Surprisingly, chronic EAS (or electric stimulation alone) may negatively impact electric thresholds, possibly by preventing recovery of the auditory nerve after CI surgery. More research is needed to confirm the potentially negative impact of chronic EAS on electric threshold recovery.
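For readers who want a concrete picture of the between-group comparison described above, the Python sketch below is a rough illustration only: the threshold values are invented (not study data), and the abstract does not name the statistical test used, so a Mann-Whitney U test is assumed here as a reasonable nonparametric choice for groups of 5 and 6 animals.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical EABR thresholds in current units (c.u.) per animal, at an assumed early
# post-implantation baseline and at 24 weeks after surgery (values are invented).
eas_baseline = np.array([190, 205, 180, 210, 195])
eas_week24 = np.array([195, 210, 185, 230, 200])
ns_baseline = np.array([200, 185, 215, 190, 205, 195])
ns_week24 = np.array([150, 140, 170, 155, 160, 150])

# Per-animal threshold change; negative values indicate recovery (lower thresholds).
eas_change = eas_week24 - eas_baseline
ns_change = ns_week24 - ns_baseline

# Nonparametric comparison of threshold change between the EAS and NS groups.
stat, p = mannwhitneyu(eas_change, ns_change, alternative="two-sided")
print(f"EAS mean change: {eas_change.mean():+.1f} c.u.; "
      f"NS mean change: {ns_change.mean():+.1f} c.u.; p = {p:.3f}")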

Pubmed PDF Web

Speech Recognition and Spatial Hearing in Young Adults With Down Syndrome: Relationships With Hearing Thresholds and Auditory Working Memory

Anshu, Kumari; Kristensen, Kayla; Godar, Shelly P.; Zhou, Xin; Hartley, Sigan L.; Litovsky, Ruth Y.

Publication date 02-08-2024


Objectives: Individuals with Down syndrome (DS) have a higher incidence of hearing loss (HL) compared with their peers without developmental disabilities. Little is known about the associations between HL and functional hearing for individuals with DS. This study investigated two aspects of auditory function, "what" (understanding the content of sound) and "where" (localizing the source of sound), in young adults with DS. Speech reception thresholds in quiet and in the presence of interferers provided insight into speech recognition, that is, the "what" aspect of auditory maturation. Insights into the "where" aspect of auditory maturation were gained from evaluating speech reception thresholds in colocated versus separated conditions (quantifying spatial release from masking) as well as right versus left discrimination and sound location identification. Auditory functions in the "where" domain develop during earlier stages of cognitive development, in contrast with the later-developing "what" functions. We hypothesized that young adults with DS would exhibit stronger "where" than "what" auditory functioning, albeit with the potential impact of HL. Considering the importance of auditory working memory and receptive vocabulary for speech recognition, we hypothesized that better speech recognition in young adults with DS, in quiet and with speech interferers, would be associated with better auditory working memory ability and receptive vocabulary.
Design: Nineteen young adults with DS (aged 19 to 24 years) participated in the study and completed assessments on pure-tone audiometry, right versus left discrimination, sound location identification, and speech recognition in quiet and with speech interferers that were colocated or spatially separated. Results were compared with published data from children and adults without DS and HL, tested using similar protocols and stimuli. Digit Span tests assessed auditory working memory. Receptive vocabulary was examined using the Peabody Picture Vocabulary Test Fifth Edition.
Results: Seven participants (37%) had HL in at least 1 ear; 4 individuals had mild HL, and 3 had moderate HL or worse. Participants with mild or no HL had ≥75% correct at 5° separation on the discrimination task and sound localization root mean square errors (mean ± SD: 8.73° ± 2.63°) within the range of adults in the comparison group. Speech reception thresholds in young adults with DS were higher than all comparison groups. However, spatial release from masking did not differ between young adults with DS and comparison groups. Better (lower) speech reception thresholds were associated with better hearing and better auditory working memory ability. Receptive vocabulary did not predict speech recognition.
Conclusions: In the absence of HL, young adults with DS exhibited higher accuracy on spatial hearing tasks than on speech recognition tasks. Thus, auditory processes associated with the "where" pathways appear to be a relative strength compared with those associated with the "what" pathways in young adults with DS. Further, both HL and auditory working memory impairments contributed to difficulties in speech recognition in the presence of speech interferers. Future studies with larger samples are needed to replicate and extend our findings.
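As an illustrative aside, the two derived measures used above, spatial release from masking and sound localization root mean square error, can be computed as in the short Python sketch below; the thresholds and angles are invented demonstration values, not data from this study.

import numpy as np

# Spatial release from masking (SRM): the improvement in speech reception threshold (SRT)
# when the interferer is spatially separated from the target. Positive values indicate benefit.
srt_colocated_db = 2.5    # invented SRT with target and interferers colocated
srt_separated_db = -1.0   # invented SRT with interferers spatially separated
srm_db = srt_colocated_db - srt_separated_db
print(f"SRM = {srm_db:.1f} dB")

# Sound location identification accuracy: root mean square error between target and
# response azimuths across trials (angles in degrees; invented values).
target_deg = np.array([-60, -30, 0, 30, 60])
response_deg = np.array([-52, -35, 4, 27, 70])
rms_error_deg = np.sqrt(np.mean((response_deg - target_deg) ** 2))
print(f"Localization RMS error = {rms_error_deg:.1f} degrees")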

Pubmed PDF Web

The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users

Taitelbaum-Swead, Riki; Ben-David, Boaz M.

Publication date 15-07-2024


Objectives: Cochlear implants (CI) are remarkably effective, but have limitations regarding the transformation of the spectro-temporal fine structures of speech. This may impair processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found differences in spoken-emotion processing between CI users with postlingual deafness (postlingual CI) and normal hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. The intact early auditory experience of postlingual CI users may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to those of our previous study (Taitelbaum-Swead et al. 2022; postlingual CI).
Design: Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel, rating the target emotion (RTE) and ignoring the other (selective attention). In addition, all participants performed standard tests of speech perception. Performance on the Test for Rating of Emotions in Speech was compared with that of the previous study (postlingual CI).
Results: When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration.
Conclusions: Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions, in both CI user groups. This distortion appears to lead CI users to over-rely on the semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.

Pubmed PDF Web

Parent-Reported Ease of Listening in Preschool-Aged Children With Bilateral and Unilateral Hearing Loss

Easwar, Vijayalakshmi; Hou, Sanna; Zhang, Vicky W

Publication date 09-08-2024


Objectives: Evidence from school-aged children suggests that the ease with which children listen varies with the presence of hearing loss and the acoustic environment despite the use of devices like hearing aids. However, little is known about the ease of listening in preschool-aged children with hearing loss: an age at which rapid learning occurs and at which increased listening difficulty or effort may diminish the capacity available to learn new skills. To this end, the objectives of the present study were to (i) assess parent-reported aided ease of listening as a function of hearing loss configuration (hearing loss in one versus both ears) and device configuration among children with hearing loss in one ear (unilateral hearing loss), and (ii) investigate factors that influence children’s ease of listening.
Design: Parents of 83 children with normal hearing, 54 aided children with bilateral hearing loss (hearing loss in both ears), and 139 children with unilateral hearing loss participated in the study. Of the 139 children with unilateral loss, 72 were unaided, 54 were aided with a device on the ear with hearing loss (direct aiding) and 13 were aided with a device that routed signals to the contralateral normal hearing ear (indirect aiding). Mean age of children was 40.2 months (1 SD = 2.5; range: 36 to 51). Parents completed the two subscales of the Parents’ Evaluation of Aural/Oral Performance of Children+ (PEACH+) questionnaire, namely functional listening and ease of listening. Individual percent scores were computed for quiet and noisy situations. Linear mixed-effects models were used to assess the effect of hearing loss configuration and device configuration in children with unilateral hearing loss. Multiple regression was used to assess factors that influenced ease of listening. Factors included hearing thresholds, age at first device fit, consistency in device use, condition (quiet/noise), presence of developmental disabilities, and functional listening abilities.
Results: Children with direct aiding for their hearing loss, either unilateral or bilateral, had similarly lower functional listening skills and ease of listening than their normal hearing peers. Unaided children with unilateral hearing loss had lower functional listening skills and ease of listening than their normal hearing peers in noise but not in quiet. All aided children with unilateral hearing loss, irrespective of direct or indirect aiding, had lower functional listening skills and ease of listening relative to normal hearing children in both quiet and noise. Furthermore, relative to unaided children with unilateral hearing loss, those with indirect aiding had lower functional listening and ease of listening. Regression analyses revealed functional listening as a significant predictor of ease of listening in all children with hearing loss. In addition, worse degrees of hearing loss and the presence of noise reduced ease of listening in unaided children with unilateral hearing loss.
Conclusions: Bilateral hearing loss is associated with poorer-than-typical ease of listening in preschoolers even when aided. The impact of unilateral hearing loss on ease of listening is similar to that observed in children with bilateral hearing loss, despite good hearing in one ear and aiding. Given increased difficulties experienced by children with unilateral loss, with or without a device, additional strategies to facilitate communication abilities in noise should be a priority.
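A minimal sketch of a linear mixed-effects analysis of the kind described in the Design is given below (Python with statsmodels). The data frame, column names, group labels, and model formula are hypothetical, since the abstract does not report the exact model specification; a random intercept per child accounts for each child contributing scores in both quiet and noise.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per child per listening condition (hypothetical).
rng = np.random.default_rng(1)
n_children = 40
group = rng.choice(["NH", "BHL_aided", "UHL_unaided", "UHL_aided"], size=n_children)
child_intercept = rng.normal(0, 5, size=n_children)   # child-specific random effect

rows = []
for i in range(n_children):
    for condition, noise_penalty in [("quiet", 0.0), ("noise", -8.0)]:
        base = 90.0 if group[i] == "NH" else 78.0
        rows.append({
            "child_id": i,
            "group": group[i],
            "condition": condition,
            "ease_pct": base + noise_penalty + child_intercept[i] + rng.normal(0, 3),
        })
data = pd.DataFrame(rows)

# Fixed effects of group and condition, random intercept per child (repeated measures).
model = smf.mixedlm("ease_pct ~ group + condition", data, groups=data["child_id"])
print(model.fit().summary())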

Pubmed PDF Web

Goggle Versus Remote-Camera Video Head Impulse Test Device Comparison

Janky, Kristen L.; Patterson, Jessie N.; Vandervelde, Casey

Publication date 05-07-2024


Objectives: This study compared remote-camera versus goggle video head impulse testing (vHIT) outcomes to validate remote-camera vHIT, which is gaining popularity in difficult-to-test populations.
Design: Seventeen controls and 10 individuals with vestibular dysfunction participated. Each participant completed remote-camera and goggle vHIT. The main outcome parameters were canal gain, frequency of corrective saccades, and a normal versus abnormal rating.
Results: Horizontal and vertical canal vHIT gain was significantly lower in the vestibular group compared with the control group; remote-camera gains were significantly lower than goggle gains for the vestibular group only. The devices categorized control versus vestibular canals identically except for one vertical canal. In the vestibular group, there was not a significant difference in the percentage of compensatory saccades between devices.
Conclusion: These data provide validation that results obtained with a remote-camera device are similar to those obtained using a standard goggle device.
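The abstract does not state how either device computes canal gain, but one common vHIT gain definition is the ratio of the area under the (desaccaded, sign-corrected) eye-velocity trace to the area under the head-velocity trace over the impulse window. The Python sketch below illustrates that generic definition with synthetic traces; it is not a description of either device's actual algorithm.

import numpy as np
from scipy.integrate import trapezoid

def vhit_gain(head_velocity, eye_velocity, fs):
    # VOR gain as the ratio of areas under the eye- and head-velocity curves during the
    # impulse; eye velocity is assumed already desaccaded and sign-inverted. fs in Hz.
    dt = 1.0 / fs
    return trapezoid(eye_velocity, dx=dt) / trapezoid(head_velocity, dx=dt)

# Synthetic 150 ms impulse sampled at 250 Hz: a bell-shaped head movement peaking at
# 200 deg/s and an eye response at 85% of its amplitude (expected gain of about 0.85).
fs = 250
t = np.arange(0, 0.150, 1 / fs)
head = 200 * np.exp(-((t - 0.075) ** 2) / (2 * 0.02 ** 2))
eye = 0.85 * head
print(round(vhit_gain(head, eye, fs), 2))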

Pubmed PDF Web

Assessing Neural Synchrony in the Cochlear Nerve to Electrical Stimulation in Children With Auditory Neuropathy Spectrum Disorder

He, Shuman; Chao, Xiuhua; Yuan, Yi; Skidmore, Jeffrey; Uhler, Kristin M.

Publication date 22-07-2024


Objectives: This study reported phase locking values (PLVs) that quantified the trial-to-trial phase coherence of electrically evoked compound action potentials in children with auditory neuropathy spectrum disorders (ANSD) and children with Gap Junction Beta 2 (GJB2) mutations, a patient population without noticeable cochlear nerve damage.
Design: PLVs were measured at three electrode locations in 11 children with ANSD and 11 children with GJB2 mutations. Smaller PLVs indicated poorer neural synchrony. A linear mixed-effects model was used to compare PLVs measured at different electrode locations between participant groups.
Results: After controlling for the stimulation level effect, children with ANSD had smaller PLVs than children with GJB2 mutations at all three electrode locations.
Conclusions: Cochlear-implanted children with ANSD show poorer peripheral neural synchrony than children with GJB2 mutations.
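The abstract does not spell out the PLV computation, but a standard trial-to-trial phase coherence measure takes the instantaneous phase of each single-trial response at a fixed latency and reports the length of the mean unit phasor across trials, ranging from 0 (random phases, poor synchrony) to 1 (perfectly consistent phases). The Python sketch below illustrates that generic formulation with synthetic data and hypothetical parameters, not the study's recording or processing pipeline.

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials, sample_index):
    # trials: (n_trials, n_samples) array of single-trial recordings;
    # sample_index: latency at which phase coherence is evaluated.
    analytic = hilbert(trials, axis=1)               # analytic signal per trial
    phases = np.angle(analytic[:, sample_index])     # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phases)))      # length of the mean unit phasor

# Synthetic example: 100 trials of a 1 kHz response sampled at 20 kHz for 1 ms,
# with random phase jitter standing in for reduced neural synchrony.
rng = np.random.default_rng(0)
t = np.arange(0, 0.001, 1 / 20000)
jitter = rng.normal(0, 0.5, size=100)                # radians of trial-to-trial jitter
trials = np.array([np.sin(2 * np.pi * 1000 * t + j) for j in jitter])
print(round(phase_locking_value(trials, sample_index=10), 2))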

Pubmed PDF Web

Copyright © KNO-T, 2020 | R/Abma