Details of Research Outputs

Title: Evaluation of joint auditory attention decoding and adaptive binaural beamforming approach for hearing devices with attention switching
Author (Name in English or Pinyin): Pu, W.1; Zan, P.2; Xiao, J.3; Zhang, T.3; Luo, Z.-Q.1
Date Issued: 2020-05-04
Conference Name: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Source Publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Conference Place: Barcelona, Spain
DOI: 10.1109/ICASSP40776.2020.9054592
Education Discipline: Science and Technology
Published Range: Overseas academic publications
Volume/Issue/Pages: Volume 2020-May, Pages 8728-8732
Citation statistics
Cited Times [WOS]: 0
Document Type: Conference paper
Identifier: https://irepository.cuhk.edu.cn/handle/3EPUXD0A/1911
Collection: School of Science and Engineering
Affiliation
1.Shenzhen Research Institute of Big Data, Chinese University of Hong Kong, Shenzhen, China
2.University of Maryland, College Park, MD, United States
3.Starkey Hearing Technologies, Eden Prairie, MN, United States
First Author Affiliation: Shenzhen Research Institute of Big Data
Recommended Citation
GB/T 7714
Pu, W., Zan, P., Xiao, J., et al. Evaluation of joint auditory attention decoding and adaptive binaural beamforming approach for hearing devices with attention switching[C], 2020.
Files in This Item:
There are no files associated with this item.