📝 Publications

(* equal contribution, # corresponding author)

2024

ISMIR 2024

The Sound Demixing Challenge 2023 – Cinematic Demixing Track. Stefan Uhlich, Giorgio Fabbro, Masato Hirano, Shusuke Takahashi, Gordon Wichern, Jonathan Le Roux, Dipam Chakraborty, Sharada Mohanty, Kai Li, Yi Luo, Jianwei Yu, Rongzhi Gu, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Mikhail Sukhovei, Yuki Mitsufuji. ISMIR 2024.

  • This paper summarizes the cinematic demixing (CDX) track of the Sound Demixing Challenge 2023 (SDX’23), detailing the competition setup and the datasets used.

arXiv 2024

SPMamba: State-space model is all you need in speech separation. Kai Li, Guo Chen. arXiv 2024.

  • SPMamba integrates the Mamba state-space model into the TF-GridNet architecture for speech separation.

  • Code is publicly available.

ICME 2024

High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: Establishing a Novel Baseline and Benchmark. Ben Chen, Xuechao Zou, Kai Li, Yu Zhang, Junliang Xing, Pin Tao. ICME 2024. Niagara Falls, Canada.

  • This paper presents LEPrompter, a two-stage prompt-based framework that enhances lake extraction from remote sensing imagery by using training prompts to improve segmentation accuracy and operating prompt-free during inference for efficient automated extraction.

  • Code is publicly available.

TPAMI 2024

An Audio-Visual Speech Separation Model Inspired by Cortico-Thalamo-Cortical Circuits. Kai Li, Fenghua Xie, Hang Chen, Kexin Yuan, Xiaolin Hu. TPAMI 2024.

ICLR 2024

RTFS-Net: Recurrent time-frequency modelling for efficient audio-visual speech separation. Samuel Pegg*, Kai Li*, Xiaolin Hu. ICLR 2024. Vienna, Austria.

  • RTFS-Net performs recurrent modelling in the time-frequency domain to achieve efficient audio-visual speech separation with a compact model.

  • Demo Page

TGRS 2024

DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal From Optical Satellite Images. Xuechao Zou*, Kai Li*, Junliang Xing, Yu Zhang, Shiying Wang, Lei Jin, Pin Tao. TGRS 2024.

  • The core contribution of this paper is the development of the DiffCR framework, a novel fast conditional diffusion model for high-quality cloud removal from optical satellite images, which significantly outperforms existing models in both synthesis quality and computational efficiency.

  • Demo Page

ICASSP 2024

Subnetwork-to-go: Elastic Neural Network with Dynamic Training and Customizable Inference. Kai Li, Yi Luo. ICASSP 2024. Seoul, Korea.

  • A method for training neural networks with dynamic depth and width configurations, enabling flexible extraction of subnetworks during inference without additional training.

ICASSP 2024

LEFormer: A Hybrid CNN-Transformer Architecture for Accurate Lake Extraction from Remote Sensing Imagery. Ben Chen, Xuechao Zou, Yu Zhang, Jiayu Li, Kai Li, Pin Tao. ICASSP 2024. Seoul, Korea.

  • The core contribution of this paper is the introduction of LEFormer, a hybrid CNN-Transformer architecture that effectively combines local and global features for high-precision lake extraction from remote sensing images, achieving state-of-the-art performance and efficiency on benchmark datasets.

2023

ICIST 2023

TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion. Samuel Pegg*, Kai Li*, Xiaolin Hu. ICIST 2023. Cairo, Egypt.

  • TDFNet is a multi-scale, multi-stage framework for audio-visual speech separation that builds on the strengths of TDANet and CTCNet. It is designed to address the inefficiencies and limitations of existing multimodal speech separation models, particularly in real-time tasks.

  • Code is publicly available.

ECAI 2023

PMAA: A Progressive Multi-scale Attention Autoencoder Model for High-Performance Cloud Removal from Multi-temporal Satellite Imagery. Xuechao Zou*, Kai Li*, Junliang Xing, Pin Tao#, Yachao Cui. ECAI 2023. Kraków, Poland.

  • A Progressive Multi-scale Attention Autoencoder (PMAA) model for high-performance cloud removal from multi-temporal satellite imagery, which achieves state-of-the-art performance on benchmark datasets.
  • Code is publicly available.

ICLR 2023

An efficient encoder-decoder architecture with top-down attention for speech separation. Kai Li, Runxuan Yang, Xiaolin Hu. ICLR 2023. Kigali, Rwanda.

  • Top-down neural projections are ubiquitous in the brain. We found that this kind of projection is very useful for solving the Cocktail Party Problem.

  • Zhihu | Audio Demo Page

SDX Workshop 2023

One-page Report for Tencent AI Lab’s CDX 2023 System. Kai Li, Yi Luo. SDX Workshop 2023. Paris, France.

  • We present the system of Tencent AI Lab for the Cinematic Sound Demixing Challenge 2023, which is based on a novel neural network architecture and a new training strategy.

Interspeech 2023

Audio-Visual Speech Separation in Noisy Environments with a Lightweight Iterative Model. Héctor Martel, Julius Richter, Kai Li, Xiaolin Hu and Timo Gerkmann. Interspeech 2023. Dublin, Ireland.

  • AVLIT is a lightweight audio-visual iterative model for speech separation in noisy environments. It employs Progressive Learning (PL) to decompose the mapping between inputs and outputs into multiple steps, improving computational efficiency.

  • Code is publicly available.

Interspeech 2023

A Neural State-Space Model Approach to Efficient Speech Separation. Chen Chen, Chao-Han Huck Yang, Kai Li, Yuchen Hu, Pin-Jui Ku and Eng Siong Chng. Interspeech 2023. Dublin, Ireland.

  • We propose a Neural State-Space Model (S4M) for speech separation, which can efficiently separate multiple speakers.

  • Code is publicly available.

ICASSP 2023

On the Design and Training Strategies for RNN-based Online Neural Speech Separation Systems. Kai Li, Yi Luo. ICASSP 2023. Melbourne, Australia.

  • The paper explores converting RNN-based offline neural speech separation systems to online systems with minimal performance degradation.

2022

Interspeech 2022

On the Use of Deep Mask Estimation Module for Neural Source Separation Systems. Kai Li, Xiaolin Hu, Yi Luo. Interspeech 2022. Incheon, Korea.

  • We propose a Deep Mask Estimation Module for source separation, which improves performance without adding computational complexity.

Neural Computation 2022

Inferring mechanisms of auditory attentional modulation with deep neural networks. Ting-Yu Kuo, Yuanda Liao, Kai Li, Bo Hong, Xiaolin Hu. Neural Computation, 2022.

  • With the help of DNNs, we suggest that the projection of top-down attention signals to lower stages within the auditory pathway of the human brain plays a more significant role than the higher stages in solving the “cocktail party problem”.

  • Code is publicly available.

2021

NeurIPS 2021

Speech Separation Using an Asynchronous Fully Recurrent Convolutional Neural Network. Xiaolin Hu*#, Kai Li*, Weiyi Zhang, Yi Luo, Jean-Marie Lemercier, Timo Gerkmann. NeurIPS 2021. Online.

  • This paper introduces an asynchronous updating scheme in a bio-inspired recurrent neural network architecture, significantly enhancing speech separation efficiency and accuracy compared to traditional synchronous methods.

  • Zhihu | Audio Demo Page | Speech Enhancement Demo

2020 and Prior

IET Image Processing 2020

A Survey of Single Image Super Resolution Reconstruction. Kai Li, Shenghao Yang, Runting Dong, Jianqiang Huang#, Xiaoying Wang. IET Image Processing 2020.

  • This paper provides a comprehensive survey of single image super-resolution reconstruction methods, including traditional methods and deep learning-based methods.

ISPA 2019

Single Image Super-resolution Reconstruction of Enhanced Loss Function with Multi-GPU Training. Jianqiang Huang*#, Kai Li*, Xiaoying Wang. ISPA 2019.

  • This paper proposes a multi-GPU training method for single image super-resolution reconstruction, which can significantly reduce training time and improve performance.