Jeih-weih Hung (洪志偉) - Academic Performance

Jeih-weih Hung (洪志偉)

Professor

Education: Ph.D. in Electrical Engineering, National Taiwan University

Research interests: (1) digital signal processing, (2) digital speech processing, (3) speech processing applications

Phone: +886-49-2910960 ext. 4802

Email: jwhung@ncnu.edu.tw

Office: Room 415, Science and Technology Building I

Personal Website

Academic Research

Academic Performance:

Journal Papers

[1] Y.-T. Chen, Z.-T. Wu, and J.-W. Hung, "Cross-Domain Conv-TasNet Speech Enhancement Model with Two-Level Bi-Projection Fusion of Discrete Wavelet Transform," Applied Sciences, vol. 13, no. 10, p. 5992, 2023.

[2] Y.-J. Lu et al., "Improving Speech Enhancement Performance by Leveraging Contextual Broad Phonetic Class Information," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 2738-2750, 2023.

[3] L.-C. Chang and J.-W. Hung, "A Preliminary Study of Robust Speech Feature Extraction Based on Maximizing the Probability of States in Deep Acoustic Models," Applied System Innovation, vol. 5, no. 4, p. 71, 2022.

[4] F.-A. Chao, J.-W. Hung, T. Sheu, and B. Chen, "Time-Reversal Enhancement Network With Cross-Domain Information for Noise-Robust Speech Recognition," IEEE MultiMedia, vol. 29, no. 1, pp. 114-124, 2022.

[5] Y.-T. Chen and J.-W. Hung, "使用低通時序列語音特徵訓練理想比率遮罩法之語音強化 (Employing Low-Pass Filtered Temporal Speech Features for the Training of Ideal Ratio Mask in Speech Enhancement)," International Journal of Computational Linguistics and Chinese Language Processing, vol. 26, no. 2, 2021.

[6] Y.-J. Lu, C.-Y. Chang, Y. Tsao, and J.-W. Hung, "Speech enhancement guided by contextual articulatory information," arXiv preprint arXiv:2011.07442, 2020.

[7] C. Yu, K.-H. Hung, I.-F. Lin, S.-W. Fu, Y. Tsao, and J.-W. Hung, "Waveform-based voice activity detection exploiting fully convolutional networks with multi-branched encoders," arXiv preprint arXiv:2006.11139, 2020.

[8] C. Yu, K.-H. Hung, S.-S. Wang, Y. Tsao, and J.-W. Hung, "Time-domain multi-modal bone/air conducted speech enhancement," IEEE Signal Processing Letters, 2020.

[9] S.-S. Wang, P. Lin, Y. Tsao, J.-W. Hung, and B. Su, "Suppression by selecting wavelets for feature compression in distributed speech recognition," IEEE/ACM Transactions on Audio, Speech, and Language Processing, Mar. 2018. (SCI)

[10] J.-W. Hung, J.-S. Lin, and P.-J. Wu, "Employing robust principal component analysis for noise-robust speech feature extraction in automatic speech recognition with the structure of a deep neural network," Applied System Innovation, Aug. 2018.

[11] S.-K. Lee and J.-W. Hung, "An evaluation study of using various SNR-level training data in the denoising autoencoder (DAE) technique for speech enhancement," International Journal of Electrical, Electronics and Data Communication, Apr. 2018.

[12] S.-S. Wang, A. Chern, Y. Tsao, J.-W. Hung, X. Lu, Y.-H. Lai, and B. Su, "Wavelet speech enhancement based on nonnegative matrix factorization," IEEE Signal Processing Letters, May 2016. (SCI)

[13] J.-W. Hung, H.-J. Hsieh, and B. Chen, "Robust speech recognition via enhancing the complex-valued acoustic spectrum in modulation domain," IEEE/ACM Transactions on Audio, Speech, and Language Processing, Feb. 2016. (SCI)

[14] Y.-D. Wang, J.-H. Jheng, H.-J. Hsieh, and J.-W. Hung, "An evaluation study of speaker and noise adaptation for nonnegative matrix factorization based speech enhancement," International Journal of Electrical, Electronics and Data Communication, Nov. 2015.

[15] H.-J. Hsieh, H.-T. Fan, and J.-W. Hung, "Leveraging jointly spatial, temporal and modulation enhancement in creating noise-robust features for speech recognition," International Journal of Electrical, Electronics and Data Communication, Nov. 2015.

International Conference Papers

[1] C.-E. Dai, J.-X. Zeng, W.-L. Zeng, E.-S. Li, and J.-W. Hung, "Improving the performance of CMGAN in speech enhancement with the phone fortified perceptual loss," in 2023 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 17-19 July 2023, pp. 459-460.

[2] C.-E. Dai, W.-L. Zeng, J.-X. Zeng, and J.-W. Hung, "Leveraging the Objective Intelligibility and Noise Estimation to Improve Conformer-Based MetricGAN," in 2023 9th International Conference on Applied System Innovation (ICASI), 21-25 April 2023, pp. 139-141.

[3] K.-H. Ho, J.-W. Hung, and B. Chen, "ConSep: A Noise- and Reverberation-Robust Speech Separation Framework by Magnitude Conditioning," in 2023 24th International Conference on Digital Signal Processing (DSP), 11-13 June 2023, pp. 1-5.

[4] K.-H. Ho, E.-L. Yu, J.-W. Hung, and B. Chen, "NAaLOSS: Rethinking the Objective of Speech Enhancement," in 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP), 17-20 Sept. 2023, pp. 1-6.

[5] P.-F. Li, P.-C. Wu, and J.-W. Hung, "Improving the wavelet transform based adaptive FullSubNet+ with Huber loss," in IET International Conference on Engineering Technologies and Applications (ICETA 2023), 21-23 Oct. 2023, pp. 164-165.

[6] C.-W. Liao, A. N. Aung, and J.-W. Hung, "ESC MA-SD Net: Effective Speaker Separation through Convolutional Multi-View Attention and SudoNet," in Proceedings of the 35th Conference on Computational Linguistics and Speech Processing (ROCLING 2023), 2023, pp. 157-161.

[7] Y.-S. Tsao, K.-H. Ho, J.-W. Hung, and B. Chen, "Adaptive-FSN: Integrating Full-Band Extraction and Adaptive Sub-Band Encoding for Monaural Speech Enhancement," in 2022 IEEE Spoken Language Technology Workshop (SLT), 9-12 Jan. 2023, pp. 458-464.

[8] P.-C. Wu, P.-F. Li, Z.-T. Wu, and J.-W. Hung, "The Study of Improving the Adaptive FullSubNet+ Speech Enhancement Framework with Selective Wavelet Packet Decomposition Sub-Band Features," in 2023 9th International Conference on Applied System Innovation (ICASI), 21-25 April 2023, pp. 130-132.

[9] Z.-T. Wu, P.-F. Li, P.-C. Wu, E.-S. Li, and J.-W. Hung, "Exploiting Discrete Wavelet Transform Features in Speech Enhancement Technique Adaptive FullSubNet+," in 2023 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 17-19 July 2023, pp. 461-462.

[10] Y.-T. Chen, Z.-T. Wu, and J.-W. Hung, "A Preliminary Study of the Application of Discrete Wavelet Transform Features in Conv-TasNet Speech Enhancement Model," in Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022), Taipei, Taiwan: The Association for Computational Linguistics and Chinese Language Processing (ACLCLP), Nov. 2022, pp. 92-99.

[11] C.-E. Dai, Q.-W. Hong, and J.-W. Hung, "Exploiting the compressed spectral loss for the learning of the DEMUCS speech enhancement network," in Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022), Taipei, Taiwan: The Association for Computational Linguistics and Chinese Language Processing (ACLCLP), Nov. 2022, pp. 100-106.

[12] K.-H. Ho, J.-W. Hung, and B. Chen, "Bi-Sep: A Multi-Resolution Cross-Domain Monaural Speech Separation Framework," in 2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI), 1-3 Dec. 2022, pp. 72-77.

[13] Q.-W. Hong, C.-E. Dai, H.-C. Hsu, Z.-T. Wu, and J.-W. Hung, "Leveraging the perceptual metric loss to improve the DEMUCS system in speech enhancement," in 2022 8th International Conference on Applied System Innovation (ICASI), 22-23 April 2022, pp. 76-79.

[14] Y.-Y. Hsiao, M.-H. Wu, K.-Y. Tsai, and J.-W. Hung, "The preliminary study of improving the DPTNet speech enhancement system by adjusting its encoder and loss function," in 2022 8th International Conference on Applied System Innovation (ICASI), 22-23 April 2022, pp. 64-67.

[15] C.-W. Liao, P.-C. Wu, and J.-W. Hung, "A Preliminary Study of Employing Lowpass-Filtered and Time-Reversed Feature Sequences as Data Augmentation for Speech Enhancement Deep Networks," in 2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), 22-25 Nov. 2022, pp. 1-4.

[16] Y.-J. Tang, P.-Y. Hsieh, M.-H. Tsai, Y.-T. Chen, and J.-W. Hung, "Improving the efficiency of Dual-path Transformer Network for speech enhancement by reducing the input feature dimensionality," in 2022 8th International Conference on Applied System Innovation (ICASI), 22-23 April 2022, pp. 80-83.

[17] Y.-S. Tsao, B. Chen, and J.-W. Hung, "Exploiting Discrete Cosine Transform Features in Speech Enhancement Technique FullSubNet+," in 2022 IET International Conference on Engineering Technologies and Applications (IET-ICETA), 14-16 Oct. 2022, pp. 1-2.

[18] Y.-S. Tsao, J.-W. Hung, K.-H. Ho, and B. Chen, "Investigating Low-Distortion Speech Enhancement with Discrete Cosine Transform Features for Robust Speech Recognition," in 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 7-10 Nov. 2022, pp. 131-136.

[19] Z.-T. Wu, Y.-T. Chen, and J.-W. Hung, "Improving the performance of DEMUCS in speech enhancement with the perceptual metric loss," in 2022 IEEE International Conference on Consumer Electronics - Taiwan, 6-8 July 2022, pp. 267-268.

[20] F.-A. Chao, J.-W. Hung, and B. Chen, "Cross-Domain Single-Channel Speech Enhancement Model with BI-Projection Fusion Module for Noise-Robust ASR," in 2021 IEEE International Conference on Multimedia and Expo (ICME), 5-9 July 2021, pp. 1-6.

[21] F.-A. Chao, S.-W. F. Jiang, B.-C. Yan, J.-W. Hung, and B. Chen, "TENET: A Time-Reversal Enhancement Network for Noise-Robust ASR," in 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 13-17 Dec. 2021, pp. 55-61.

[22] Y.-T. Chen, Z.-Q. Lin, and J.-W. Hung, "Employing low-pass filtered temporal speech features for the training of ideal ratio mask in speech enhancement," in Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021), Taoyuan, Taiwan: The Association for Computational Linguistics and Chinese Language Processing (ACLCLP), Oct. 2021, pp. 236-242.

[23] Y.-T. Chen, S.-T. Tsai, and J.-W. Hung, "The effect of reducing the acoustic-frequency resolution for spectrograms used in deep denoising auto-encoder," in 2021 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), 15-17 Sept. 2021, pp. 1-2.

[24] J.-W. Hung, J.-R. Lin, and L.-Y. Zhuang, "The Evaluation Study of the Deep Learning Model Transformer in Speech Translation," in 2021 7th International Conference on Applied System Innovation (ICASI), 24-25 Sept. 2021, pp. 30-33.

[25] J.-W. Hung, S.-T. Tsai, and Y.-T. Chen, "Exploiting the Non-Uniform Frequency-Resolution Spectrograms to Improve the Deep Denoising Auto-Encoder for Speech Enhancement," in 2021 7th International Conference on Applied System Innovation (ICASI), 24-25 Sept. 2021, pp. 26-29.

[26] F.-A. Chao, J.-W. Hung, and B. Chen, "Multi-view Attention-based Speech Enhancement Model for Noise-robust Automatic Speech Recognition," in Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020), 2020, pp. 120-135.

[27] F.-A. Chao, J.-W. Hung, and B. Chen, "基於多視角注意力機制語音增強模型於強健性自動語音辨識 (Multi-view Attention-based Speech Enhancement Model for Noise-robust Automatic Speech Recognition)," in Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020), The Association for Computational Linguistics and Chinese Language Processing (ACLCLP), 2020, pp. 120-135.

[28] Y.-J. Lu, C.-F. Liao, X. Lu, J.-W. Hung, and Y. Tsao, "Incorporating Broad Phonetic Information for Speech Enhancement," in Proc. Interspeech, 2020.

[29] C.-L. Lin, Z.-Q. Lin, S.-S. Wang, Y. Tsao, and J.-W. Hung, "Exponentiated magnitude spectrogram-based relative-to-maximum masking for speech enhancement in adverse environments," in IEEE International Conference on Consumer Electronics - Taiwan, 2020.

[30] Z.-Q. Lin, C.-L. Lin, and J.-W. Hung, "Lowpass-filtered relative-to-maximum masking for speech enhancement in noise-corrupted environments," in IEEE International Conference on Consumer Electronics - Taiwan, 2020.

[31] S.-K. Lee, S.-S. Wang, Y. Tsao, and J.-W. Hung, "Speech enhancement based on reducing the detail portion of speech spectrograms in modulation domain via discrete wavelet transform," in Proc. ISCSLP, 2018.

[32] J.-W. Hung, J.-S. Lin, and P.-J. Wu, "Employing robust principal component analysis for noise-robust speech feature extraction in automatic speech recognition with the structure of deep neural network," in Proc. ICASI, 2018.

[33] J.-W. Hung, J.-S. Lin, L.-M. Lee, and S.-Y. Wang, "A study of integrating noise-robustness feature extraction techniques with the reduced frame-rate acoustic models in mobile-device speech recognition," in Proc. AROB, 2018.

[34] C.-L. Wu, H.-P. Hsu, S.-S. Wang, J.-W. Hung, Y.-H. Lai, H.-M. Wang, and Y. Tsao, "Wavelet speech enhancement based on robust principal component analysis," in Proc. Interspeech, 2017.

[35] J.-W. Hung and J.-S. Lin, "Enhancing the acoustic spectrogram in modulation domain via sparse nonnegative matrix factorization for speech enhancement," in Proc. AROB, 2017.

[36] J. C. Yang, S.-S. Wang, Y. Tsao, and J.-W. Hung, "Speech enhancement via ensemble modeling NMF adaptation," in Proc. ICCE-TW, 2016.

[37] H.-J. Hsieh, J.-H. Jheng, J.-S. Lin, and J.-W. Hung, "Linear prediction filtering on cepstral time series for noise-robust speech recognition," in Proc. ICCE-TW, 2016.

[38] S.-S. Wang, J. C. Yang, Y. Tsao, and J.-W. Hung, "Leveraging nonnegative matrix factorization in processing the temporal modulation spectrum for speech enhancement," in Proc. ICCE-TW, 2016.

[39] J.-W. Hung and J.-S. Lin, "A study of the noise-robustness algorithms on various types of cepstral feature representation for real-world speech recognition," in Proc. AROB, 2016.