📝 Publications
( * equal contribution, # corresponding author)
2024
IIANet: an intra- and inter-modality attention network for audio-visual speech separation. Kai Li, Runxuan Yang, Fuchun Sun, Xiaolin Hu. ICML 2024. Vienna, Austria.
- Inspired by the cross-modal processing mechanisms of the brain, we design intra- and inter-attention modules that integrate auditory and visual information for efficient speech separation. The model simulates audio-visual fusion at different levels of the sensory cortices as well as in higher association areas such as the parietal cortex.
Towards Robust Pansharpening: A Large-Scale High-Resolution Multi-Scene Dataset and Novel Approach. Shiying Wang, Xuechao Zou, Kai Li, Junliang Xing, Tengfei Cao, Pin Tao. Remote Sensing 2024.
- We present PanBench, a high-resolution multi-scene pansharpening dataset covering all mainstream satellites and comprising 5,898 pairs of samples.
The Sound Demixing Challenge 2023 – Cinematic Demixing Track. Stefan Uhlich, Giorgio Fabbro, Masato Hirano, Shusuke Takahashi, Gordon Wichern, Jonathan Le Roux, Dipam Chakraborty, Sharada Mohanty, Kai Li, Yi Luo, Jianwei Yu, Rongzhi Gu, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Mikhail Sukhovei, Yuki Mitsufuji. ISMIR 2024.
- This paper summarizes the cinematic demixing (CDX) track of the Sound Demixing Challenge 2023 (SDX’23). We provide a comprehensive summary of the challenge setup, detailing the structure of the competition and the datasets used.
SPMamba: State-space model is all you need in speech separation. Kai Li, Guo Chen. arXiv 2024.
High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: Establishing a Novel Baseline and Benchmark. Ben Chen, Xuechao Zou, Kai Li, Yu Zhang, Junliang Xing, Pin Tao. ICME 2024. Niagara Falls, Canada.
An Audio-Visual Speech Separation Model Inspired by Cortico-Thalamo-Cortical Circuits. Kai Li, Fenghua Xie, Hang Chen, Kexin Yuan, Xiaolin Hu. TPAMI 2024.
- A brain-inspired model for audio-visual speech separation, achieving state-of-the-art performance on this task.
RTFS-Net: Recurrent time-frequency modelling for efficient audio-visual speech separation. Samuel Pegg*, Kai Li*, Xiaolin Hu. ICLR 2024. Vienna, Austria.
- An efficient audio-visual speech separation model that performs recurrent modelling in the time-frequency domain.
DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal From Optical Satellite Images. Xuechao Zou*, Kai Li*, Junliang Xing, Yu Zhang, Shiying Wang, Lei Jin, Pin Tao. TGRS 2024.
- The core contribution of this paper is the DiffCR framework, a novel fast conditional diffusion model for high-quality cloud removal from optical satellite images, which significantly outperforms existing models in both synthesis quality and computational efficiency.
Subnetwork-to-go: Elastic Neural Network with Dynamic Training and Customizable Inference. Kai Li, Yi Luo. ICASSP 2024. Seoul, Korea.
- A method for training neural networks with dynamic depth and width configurations, enabling flexible extraction of subnetworks during inference without additional training.
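As a rough illustration of the elastic-inference idea only (not the paper's actual implementation; all names and shapes here are hypothetical), a layer trained at full width can yield a narrower sub-layer at inference time simply by slicing its weights, with no extra training:

```python
import numpy as np

def dense(x, W, b):
    """Plain dense layer: y = x @ W + b."""
    return x @ W + b

# A full-width layer; random weights stand in for trained ones.
rng = np.random.default_rng(0)
W_full = rng.standard_normal((8, 8))
b_full = rng.standard_normal(8)

def subnetwork(width):
    """Extract a narrower sub-layer by slicing the full weights.
    No retraining is needed to obtain the smaller model."""
    return W_full[:width, :width], b_full[:width]

x = rng.standard_normal((1, 4))
W_sub, b_sub = subnetwork(4)
y = dense(x, W_sub, b_sub)  # inference with a 4-unit sub-layer
```

The sketch only shows the extraction mechanics; the paper's contribution is the dynamic training scheme that makes such slices perform well.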
LEFormer: A Hybrid CNN-Transformer Architecture for Accurate Lake Extraction from Remote Sensing Imagery. Ben Chen, Xuechao Zou, Yu Zhang, Jiayu Li, Kai Li, Pin Tao. ICASSP 2024. Seoul, Korea.
- The core contribution of this paper is the introduction of LEFormer, a hybrid CNN-Transformer architecture that effectively combines local and global features for high-precision lake extraction from remote sensing images, achieving state-of-the-art performance and efficiency on benchmark datasets.
2023
TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion. Samuel Pegg*, Kai Li*, Xiaolin Hu. ICIST 2023. Cairo, Egypt.
- TDFNet is a cutting-edge method for audio-visual speech separation. It introduces a multi-scale, multi-stage framework that leverages the strengths of TDANet and CTCNet, addressing the inefficiencies and limitations of existing multimodal speech separation models, particularly in real-time tasks.
PMAA: A Progressive Multi-scale Attention Autoencoder Model for High-Performance Cloud Removal from Multi-temporal Satellite Imagery. Xuechao Zou*, Kai Li*, Junliang Xing, Pin Tao#, Yachao Cui. ECAI 2023. Kraków, Poland.
An efficient encoder-decoder architecture with top-down attention for speech separation. Kai Li, Runxuan Yang, Xiaolin Hu. ICLR 2023. Kigali, Rwanda.
- Top-down neural projections are ubiquitous in the brain. We found that such projections are highly useful for solving the cocktail party problem.
- Audio Demo Page
One-page Report for Tencent AI Lab’s CDX 2023 System. Kai Li, Yi Luo. SDX Workshop 2023. Paris, France.
- We present the system of Tencent AI Lab for the Cinematic Sound Demixing Challenge 2023, which is based on a novel neural network architecture and a new training strategy.
Audio-Visual Speech Separation in Noisy Environments with a Lightweight Iterative Model. Héctor Martel, Julius Richter, Kai Li, Xiaolin Hu and Timo Gerkmann. Interspeech 2023. Dublin, Ireland.
A Neural State-Space Model Approach to Efficient Speech Separation. Chen Chen, Chao-Han Huck Yang, Kai Li, Yuchen Hu, Pin-Jui Ku and Eng Siong Chng. Interspeech 2023. Dublin, Ireland.
On the Design and Training Strategies for RNN-based Online Neural Speech Separation Systems. Kai Li, Yi Luo. ICASSP 2023. Melbourne, Australia.
- The paper explores converting RNN-based offline neural speech separation systems to online systems with minimal performance degradation.
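A heavily simplified sketch of why such a conversion is possible (hypothetical toy code, not the paper's system): a causal RNN streamed chunk by chunk, carrying its hidden state across chunks, reproduces the offline full-sequence outputs exactly:

```python
import numpy as np

def rnn_step(x_t, h, W_x, W_h):
    """One step of a simple tanh RNN cell."""
    return np.tanh(x_t @ W_x + h @ W_h)

def run(xs, h0, W_x, W_h):
    """Process a sequence, returning all outputs and the final state."""
    h, out = h0, []
    for x_t in xs:
        h = rnn_step(x_t, h, W_x, W_h)
        out.append(h)
    return np.stack(out), h

rng = np.random.default_rng(1)
W_x = rng.standard_normal((3, 5)) * 0.1
W_h = rng.standard_normal((5, 5)) * 0.1
seq = rng.standard_normal((10, 3))
h0 = np.zeros(5)

# Offline: the whole utterance at once.
offline, _ = run(seq, h0, W_x, W_h)

# Online: stream 2-frame chunks, carrying the hidden state forward.
chunks, h = [], h0
for i in range(0, 10, 2):
    out, h = run(seq[i:i + 2], h, W_x, W_h)
    chunks.append(out)
online = np.concatenate(chunks)
# For a causal RNN, streaming matches offline processing exactly.
```

The hard part the paper studies is everything this toy omits: non-causal components, look-ahead, and the training strategies needed to keep the performance gap small.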
2022
On the Use of Deep Mask Estimation Module for Neural Source Separation Systems. Kai Li, Xiaolin Hu, Yi Luo. Interspeech 2022. Incheon, Korea.
- We propose a deep mask estimation module for speech separation that improves performance without additional computational complexity.
Inferring mechanisms of auditory attentional modulation with deep neural networks. Ting-Yu Kuo, Yuanda Liao, Kai Li, Bo Hong, Xiaolin Hu. Neural Computation, 2022.
2021
Speech Separation Using an Asynchronous Fully Recurrent Convolutional Neural Network. Xiaolin Hu*#, Kai Li*, Weiyi Zhang, Yi Luo, Jean-Marie Lemercier, Timo Gerkmann. NeurIPS 2021. Online.
- This paper introduces an asynchronous updating scheme in a bio-inspired recurrent neural network architecture, significantly enhancing speech separation efficiency and accuracy compared to traditional synchronous methods.
2020 and Prior
A Survey of Single Image Super Resolution Reconstruction. Kai Li, Shenghao Yang, Runting Dong, Jianqiang Huang#, Xiaoying Wang. IET Image Processing 2020.
- This paper provides a comprehensive survey of single image super-resolution reconstruction methods, including traditional methods and deep learning-based methods.
Single Image Super-resolution Reconstruction of Enhanced Loss Function with Multi-GPU Training. Jianqiang Huang*#, Kai Li*, Xiaoying Wang.
- This paper proposes a multi-GPU training method for single image super-resolution reconstruction, which can significantly reduce training time and improve performance.