Hello! I am Kangning Liu, currently a Research Scientist at Adobe. I earned my Ph.D. at the NYU Center for Data Science, where I was advised by Prof. Carlos Fernandez-Granda and Prof. Krzysztof J. Geras. Before that, I earned my M.Sc. in Data Science from ETH Zurich and my B.E. in Electrical Engineering from Tsinghua University.
I am deeply passionate about harnessing the power of machine learning and computer vision to address tangible, real-world challenges. My research particularly centers on learning under imperfect supervision, such as uncertainty-aware fine-tuning of segmentation foundation models (SUM), noise-resilient deep segmentation (ADELE), weakly supervised segmentation (GLAM), and unsupervised/self-supervised learning (ItS2CLR). Beyond this, my expertise extends to video analysis (StrokeRehab) and video synthesis (UVIT & Controllable Face Video Synthesis).
I am currently seeking research interns for the Summer of 2025. Please feel free to drop me an email if you are interested.
For more details, feel free to contact me at kangning.liu[at]nyu.edu. You can also find me on Google Scholar and LinkedIn.
Adobe
San Jose, California, USA (April 2024 - present)
Research Scientist
Adobe
San Jose, California, USA (May 2023 - Nov 2023)
Research Intern | Advisors: Dr. Brian Price, Dr. Jason Kuen, Dr. Yifei Fan, Dr. Zijun Wei, Luis Figueroa, Markus Woodson
Google
Mountain View, California, USA (May 2022 - Sep 2022)
Research Intern | Advisors: Dr. Xuhui Jia, Dr. Yu-Chuan Su, Dr. Ruijin Cang
Center for Data Science, New York University
New York, USA (Sept 2019 - March 2024)
Research Assistant | Advisors: Prof. Carlos Fernandez-Granda, Prof. Krzysztof J. Geras
Computer Vision Laboratory, ETH Zurich
Zurich, Switzerland (Oct 2018 - Aug 2019)
Research Assistant | Advisors: Prof. Luc Van Gool, Prof. Radu Timofte, Prof. Shuhang Gu
See also Google Scholar.
Controllable One-Shot Face Video Synthesis With Semantic Aware Prior
Kangning Liu, Yu-Chuan Su, Wei (Alex) Hong, Ruijin Cang, Xuhui Jia.
We propose a method that leverages rich face prior information to generate face videos with improved semantic consistency and expression preservation. Our model improves over the baseline by 7% in average keypoint distance and by 15% in average emotion embedding distance, while also providing a convenient interface for highly controllable generation of pose and expression.
Uncertainty-aware Fine-tuning of Segmentation Foundation Models
[Paper Link] [GitHub Link] [Project Website]
Kangning Liu, Brian Price, Jason Kuen, Yifei Fan, Zijun Wei, Luis Figueroa, Krzysztof J. Geras, Carlos Fernandez-Granda.
[NeurIPS 2024]
The Segment Anything Model (SAM) is a large-scale foundation model that has revolutionized segmentation methodology. Despite its impressive generalization ability, SAM's segmentation accuracy on images with intricate structures is often unsatisfactory. We introduce the Segmentation with Uncertainty Model (SUM), which enhances the accuracy of segmentation foundation models by incorporating an uncertainty-aware training loss and prompt sampling based on the estimated uncertainty of pseudo-labels. Evaluated on a diverse test set of 22 public benchmarks, SUM achieves state-of-the-art results, consistently surpassing SAM by 3-6 points in mean IoU and 4-7 points in mean boundary IoU across point-prompt interactive segmentation rounds.
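As a rough illustration of the uncertainty-aware idea (a minimal sketch with assumed names and weighting, not the exact SUM objective), one can down-weight the per-pixel loss wherever the pseudo-label is estimated to be unreliable:

```python
# Hypothetical sketch: uncertainty-weighted per-pixel loss (illustrative, not the SUM loss).
import torch
import torch.nn.functional as F

def uncertainty_weighted_bce(logits, pseudo_labels, uncertainty):
    """logits, pseudo_labels, uncertainty: tensors of shape (B, 1, H, W).
    `uncertainty` is assumed to lie in [0, 1], with 1 meaning least reliable."""
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, pseudo_labels.float(), reduction="none")
    weights = 1.0 - uncertainty  # assumed scheme: trust confident pseudo-labels more
    return (weights * per_pixel).sum() / weights.sum().clamp(min=1e-6)
```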
Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning
Kangning Liu*, Weicheng Zhu* (*Equal contribution), Yiqiu Shen, Sheng Liu, Narges Razavian, Krzysztof J. Geras, Carlos Fernandez-Granda.
[CVPR 2023]
In this paper, we introduce Iterative Self-Paced Supervised Contrastive Learning (ItS2CLR), a novel method for learning high-quality instance-level representations in Multiple Instance Learning (MIL). Key features of our method: 1) self-paced learning to handle label noise and uncertainty; 2) supervised contrastive learning to learn discriminative instance-level embeddings; 3) iterative refinement of instance labels for robust and accurate classification.
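To convey the self-paced flavor (a hypothetical sketch; the selection rule, names, and curriculum schedule are assumptions rather than the exact ItS2CLR procedure), one can pseudo-label only the highest-scoring instances in positive bags and grow that fraction over iterations:

```python
# Hypothetical sketch of self-paced instance pseudo-labeling (not the exact ItS2CLR rule).
import numpy as np

def select_pseudo_labels(instance_scores, bag_labels, keep_ratio):
    """instance_scores: dict bag_id -> list of instance scores from the MIL model.
    bag_labels: dict bag_id -> 0/1 bag label. keep_ratio grows over iterations (easy-to-hard)."""
    pseudo = {}
    for bag_id, scores in instance_scores.items():
        scores = np.asarray(scores)
        labels = np.zeros(len(scores), dtype=int)
        if bag_labels[bag_id] == 1:
            k = max(1, int(round(keep_ratio * len(scores))))
            labels[np.argsort(scores)[-k:]] = 1   # top-k instances become pseudo-positives
        pseudo[bag_id] = labels                   # negative bags: all pseudo-negatives
    return pseudo
```

The pseudo-labels would then supervise a contrastive loss on the instance embeddings, and the labeling-training loop repeats.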
Adaptive early-learning correction for segmentation from noisy annotations
Sheng Liu*, Kangning Liu* (*Equal contribution, order decided by coin flip.), Weicheng Zhu, Yiqiu Shen, Carlos Fernandez-Granda.
[CVPR 2022, Oral (4.2% acceptance rate)]
Deep learning in the presence of noisy annotations has been studied extensively for classification, but much less for segmentation. In this project, we study the learning dynamics of deep segmentation networks trained on inaccurately annotated data and propose ADaptive Early-Learning corrEction (ADELE), a new method for semantic segmentation from noisy annotations.
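In spirit (a minimal hypothetical sketch; the actual ADELE correction criterion is more involved), label correction can exploit the early-learning phase by overwriting annotations wherever the network has become highly confident:

```python
# Hypothetical sketch of a confidence-based label-correction step (illustrative only).
import torch

def correct_labels(probs, noisy_labels, confidence_threshold=0.8):
    """probs: (B, C, H, W) softmax outputs; noisy_labels: (B, H, W) class indices."""
    confidence, prediction = probs.max(dim=1)
    corrected = noisy_labels.clone()
    mask = confidence > confidence_threshold   # assumed rule: trust confident predictions
    corrected[mask] = prediction[mask]
    return corrected
```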
StrokeRehab: A Benchmark Dataset for Sub-second Action Identification
Aakash Kaku*, Kangning Liu* (*Equal contribution), Avinash Parnandi*, Haresh Rengaraj Rajamohan, Anita Venkatesan, Audre Wirtanen, Natasha Pandit, Kannan Venkataramanan, Heidi Schambra, Carlos Fernandez-Granda.
[NeurIPS 2022]
We introduce a new benchmark dataset for the identification of subtle, short-duration actions. We also propose a novel seq2seq approach that outperforms existing methods on both the new dataset and standard benchmarks.
Are All Losses Created Equal: A Neural Collapse Perspective
Jinxin Zhou, Chong You, Xiao Li, Kangning Liu, Sheng Liu, Qing Qu, Zhihui Zhu.
[NeurIPS 2022]
We show that a broad family of loss functions leads to neural collapse solutions and is therefore equivalent on the training set; moreover, these losses exhibit largely identical performance on test data.
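For context, neural collapse refers to the terminal phase of training in which last-layer features concentrate at their class means (which in turn form a simplex equiangular tight frame aligned with the classifier weights); the defining within-class variability collapse can be written as

$$\Sigma_W := \frac{1}{N}\sum_{c}\sum_{i \in c} \left(h_{c,i} - \mu_c\right)\left(h_{c,i} - \mu_c\right)^\top \to 0,$$

where $h_{c,i}$ are the last-layer features of class $c$ and $\mu_c$ their class mean.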
Cramér-Rao bound-informed training of neural networks for quantitative MRI
Xiaoxia Zhang*, Quentin Duchemin*, Kangning Liu* (*Equal contribution), Sebastian Flassbeck, Cem Gultekin, Carlos Fernandez-Granda, Jakob Assländer.
[Magnetic Resonance in Medicine 2021]
To address the parameter estimation problem in heterogeneous parameter spaces, we propose a theoretically well-founded loss function based on the Cramér-Rao bound (CRB), which provides a lower bound on the variance of any unbiased estimator.
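Schematically (the exact weighting used in the paper may differ), the idea is to normalize each parameter's squared error by its CRB, so that parameters with very different scales and intrinsic estimability contribute comparably:

$$\mathcal{L}_{\mathrm{CRB}}(\hat{\theta}, \theta) = \sum_{i} \frac{\left(\hat{\theta}_i - \theta_i\right)^2}{\left[I(\theta)^{-1}\right]_{ii}},$$

where $I(\theta)$ is the Fisher information matrix and $[I(\theta)^{-1}]_{ii}$ is the CRB of the $i$-th parameter; an efficient unbiased estimator drives each term toward 1 in expectation.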
Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis
Kangning Liu, Yiqiu Shen, Nan Wu, Jakub Piotr Chłędowski, Carlos Fernandez-Granda, and Krzysztof J. Geras
[Medical Imaging with Deep Learning 2021]
In this study, we introduce a new neural network model for weakly supervised, high-resolution image segmentation. Focused on breast cancer diagnosis via mammography, our method first identifies regions of interest and then segments them in detail. Validated on a large, clinically relevant dataset, our approach significantly outperforms existing methods, improving lesion localization performance by up to 39.6% and 20.0% for benign and malignant cases, respectively.
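As a rough sketch of this coarse-to-fine idea (hypothetical function names and interfaces; not the exact GLAM architecture), one network proposes salient regions from a downscaled view and a second network segments those regions at full resolution:

```python
# Hypothetical sketch of a two-stage, coarse-to-fine segmentation pipeline (illustrative only).
import torch
import torch.nn.functional as F

def segment_high_res(image, roi_proposer, fine_segmenter, patch=512):
    """image: (1, C, H, W) full-resolution mammogram.
    roi_proposer: assumed to return top-left (y, x) corners of candidate regions (ints,
    in full-resolution coordinates) given a 4x-downscaled view.
    fine_segmenter: assumed to map a (1, C, patch, patch) crop to a (1, 1, patch, patch) mask."""
    H, W = image.shape[-2:]
    mask = torch.zeros(1, 1, H, W)
    small = F.interpolate(image, scale_factor=0.25, mode="bilinear", align_corners=False)
    for y, x in roi_proposer(small):
        y, x = min(y, H - patch), min(x, W - patch)   # keep crops inside the image
        crop = image[..., y:y + patch, x:x + patch]
        mask[..., y:y + patch, x:x + patch] = fine_segmenter(crop)
    return mask
```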
Unsupervised Multimodal Video-to-Video Translation via Self-Supervised Learning
Kangning Liu*, Shuhang Gu* (*Equal contribution), Andrés Romero, and Radu Timofte
[WACV 2021]
In this project, we introduce an unsupervised video-to-video translation model that decouples style and content. Leveraging a specialized encoder-decoder architecture and bidirectional RNNs, our model excels at propagating inter-frame details. This architecture enables style-consistent translations and offers a user-friendly interface for cross-modality conversion. We also implement a self-supervised training approach using a novel video interpolation loss that captures sequence-based temporal information.
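As a simplified illustration of interpolation-based self-supervision (a hypothetical sketch; `model`, its signature, and the L1 penalty are assumptions, not the exact UVIT loss), one can hold out an intermediate frame and penalize the reconstruction error from its temporal context:

```python
# Hypothetical sketch of a frame-interpolation self-supervision signal (illustrative only).
import torch
import torch.nn.functional as F

def interpolation_loss(model, frames, t):
    """frames: (B, T, C, H, W) video clip; t: index of the held-out frame.
    `model` is assumed to map the remaining frames to a prediction of frame t."""
    context = torch.cat([frames[:, :t], frames[:, t + 1:]], dim=1)
    predicted = model(context)
    return F.l1_loss(predicted, frames[:, t])
```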
An interpretable classifier for high-resolution breast cancer screening images
Yiqiu Shen, Nan Wu, Jason Phang, Jungkyu Park, Kangning Liu, Sudarshini Tyagi, Laura Heacock et al.
[Medical image analysis 2021]
Medical images differ from natural images in that they have significantly higher resolution and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images may not be suitable for medical image analysis. In this work, we propose a novel neural network model that addresses these unique properties of medical images.
Probability and Statistics for Data Science, NYU, Center for Data Science
Teaching Assistant (Sept - Dec 2021)
Advanced Machine Learning, ETH Zurich, Department of Computer Science
Teaching Assistant (Sept - Dec 2018)
Award
Top Reviewer for NeurIPS 2024
Conference Reviewer for
CVPR, ICCV, ECCV, NeurIPS, ICLR, AISTATS, AAAI, WACV and MIDL
Journal Reviewer for
TPAMI, TNNLS, CVIU
I am just starting out with Lightroom magic using my Sony A7M4 :)
California Style
NYC Cityscapes
Light and Night
SteelStacks
Vermont Fall