
Deep learning techniques have shown promising results in the automatic classification of respiratory sounds. However, accurately distinguishing these sounds in real-world noisy conditions poses challenges for clinical deployment. Additionally, producing predictions on recordings that contain only background noise could undermine user trust in the system. In this study, we propose an audio enhancement (AE) pipeline as a pre-processing step before respiratory sound classification, aiming to improve performance in noisy environments. Multiple experiments were conducted with different audio enhancement model structures, demonstrating improved classification performance compared to the baseline method of noise-injection data augmentation. Specifically, integrating the AE pipeline yielded a 2.59% increase in the ICBHI classification score on the ICBHI respiratory sound dataset and a 2.51% improvement on our recently collected Formosa Archive of Breath Sounds (FABS) in multi-class noisy scenarios. Furthermore, a physician validation study assessed the clinical utility of our system. Quantitative analysis revealed gains in efficiency, diagnostic confidence, and trust during model-assisted diagnosis with our system compared to raw noisy recordings. Workflows integrating enhanced audio led to an 11.61% increase in diagnostic sensitivity and facilitated high-confidence diagnoses. Our findings demonstrate that incorporating an audio enhancement algorithm significantly improves robustness and clinical utility.
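The pipeline is conceptually simple: an enhancement model denoises the recording first, and the classifier then operates on the enhanced waveform. The sketch below illustrates this enhance-then-classify flow; the model classes are hypothetical placeholders (assumptions), not the actual architectures evaluated above (Wave-U-Net, PHASEN, MANNER, CMGAN) or the paper's respiratory sound classifier.

```python
# Minimal sketch of the enhance-then-classify pipeline, assuming placeholder
# models. Real systems would load pretrained enhancement and classification
# networks instead of these toy modules.
import torch
import torch.nn as nn

class PlaceholderEnhancer(nn.Module):
    """Stand-in for an audio enhancement (denoising) model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(1, 1, kernel_size=9, padding=4)

    def forward(self, noisy_wave):            # (batch, 1, samples)
        return self.net(noisy_wave)           # "enhanced" waveform

class PlaceholderClassifier(nn.Module):
    """Stand-in for a respiratory sound classifier (e.g. normal/crackle/wheeze)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, wave):
        h = self.features(wave).squeeze(-1)   # (batch, 16)
        return self.head(h)                   # class logits

enhancer, classifier = PlaceholderEnhancer(), PlaceholderClassifier()
noisy = torch.randn(2, 1, 16000)              # two 1-second clips at 16 kHz
with torch.no_grad():
    logits = classifier(enhancer(noisy))      # enhancement runs as pre-processing
print(logits.shape)                           # torch.Size([2, 4])
```

The audio samples below compare the noisy input, the outputs of several enhancement models, and the target recording for each sound class.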

Normal:

We recommend using headphones for this section.

[Audio samples 0–2: Noisy | Wave-U-Net | PHASEN | MANNER | CMGAN | Target — audio players not reproducible in text.]

Crackles:

We recommend using headphones for this section.

[Audio samples 0–2: Noisy | Wave-U-Net | PHASEN | MANNER | CMGAN | Target — audio players not reproducible in text.]

Wheezes:

We recommend using headphones for this section.

[Audio samples 0–2: Noisy | Wave-U-Net | PHASEN | MANNER | CMGAN | Target — audio players not reproducible in text.]