Hi! 👋 I'm Ning Zheng (宁政), currently an Applied Scientist at Amazon (P13N), building generative recommendation systems and intent-understanding algorithms for the Amazon homepage. I obtained my Ph.D. in Computer Science and Human-Computer Interaction (HCI) at the University of Notre Dame, advised by Prof. Toby Jia-Jun Li. My Ph.D. research focused on building multimodal human-AI collaboration systems that augment users' cognitive capabilities.

I previously worked at Adobe and Microsoft as a research scientist intern.

Research Prototypes

Selected projects from my Ph.D. research.


AROMA

Mixed-initiative AI assistance for non-visual cooking from videos.

Read the paper

Agent PbD

Agentic workflow generation from user demonstrations on web browsers.

Read the paper

PEANUT

Human-AI collaboration for multimodal video annotation.

Read the paper

SPICA

Interactive audio descriptions for blind and low-vision viewers.

Read the paper

MIMOSA

Human-AI co-creation of spatial audio effects for video.

Read the paper

Publications

Peer-reviewed papers from my research.

AROMA: Mixed-Initiative AI Assistance for Non-Visual Cooking by Grounding Multimodal Information Between Reality and Videos

Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li

UIST 2025

AgentPbD: Interactive Agentic Workflow Generation from User Demonstration on Web Browsers

Jiawen Li, Zheng Ning, Yuan Tian, Toby Jia-Jun Li

VL/HCC 2025

Developer Behaviors in Validating and Repairing LLM-Generated Code Using IDE and Eye Tracking

Ningzhi Tang*, Meng Chen*, Zheng Ning, Aakash Bansal, Yu Huang, Collin McMillan, and Toby Jia-Jun Li

VL/HCC 2024

PodReels: Human-AI Co-Creation of Video Podcast Teasers

Sitong Wang, Zheng Ning, Anh Truong, Mira Dontcheva, Dingzeyu Li, Lydia B. Chilton

DIS 2024

MIMOSA: Human-AI Co-Creation of Computational Spatial Audio Effects on Videos

Zheng Ning*, Zheng Zhang*, Jerrick Ban, Kaiwen Jiang, Ruohong Gan, Yapeng Tian, and Toby Jia-Jun Li

C&C 2024

SPICA: Interactive Video Content Exploration through Augmented Audio Descriptions for Blind or Low-Vision Viewers

Zheng Ning, Brianna Wimer, Kaiwen Jiang, Keyi Chen, Jerrick Ban, Yapeng Tian, Yuhang Zhao, and Toby Jia-Jun Li

CHI 2024

Insights into Natural Language Database Query Errors: From Attention Misalignment to User Handling Strategies

Zheng Ning*, Yuan Tian*, Zheng Zhang, Tianyi Zhang, and Toby Jia-Jun Li

TiiS (IUI 2023 Special Issue)

PEANUT: A Human-AI Collaborative Tool for Annotating Audio-Visual Data

Zheng Zhang*, Zheng Ning*, Chenliang Xu, Yapeng Tian, and Toby Jia-Jun Li

UIST 2023

Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations

Yuan Tian, Zheng Zhang, Zheng Ning, Toby Jia-Jun Li, Jonathan K. Kummerfeld, Tianyi Zhang

EMNLP 2023

An Empirical Study of Model Errors & User Error Discovery and Repair Strategies in Natural Language Database Queries

Zheng Ning*, Zheng Zhang*, Tianyi Sun, Yuan Tian, Tianyi Zhang, and Toby Jia-Jun Li

IUI 2023