About Me

Hi 👋 I am a fourth-year Ph.D. student in the Department of Computer Science and Engineering (CSE) at the University of Notre Dame. My advisor is Prof. Toby Jia-Jun Li. Before joining Notre Dame, I received dual Bachelor's degrees in Electrical and Electronic Engineering (EEE) from the University of Electronic Science and Technology of China (UESTC) and the University of Glasgow (graduated with First-Class Honours). I have also worked at Adobe and Microsoft as a research scientist intern.

My research focuses on human-AI interaction. I build interactive systems with multimodal AI models that help users engage with content across different modalities (e.g., visual, audio, and text). I have developed tools for video contexts, including multimodal data annotation (PEANUT@UIST'23), creating spatial audio effects for videos (MIMOSA@C&C'24), and enabling blind or low-vision users to consume video content by transforming visual information into layered, interactive audio descriptions (SPICA@CHI'24). More recently, my work looks at multimodal content representation and transformation, specifically how we can align human multimodal perception (such as touch, smell, and sight) with the multimodal understanding capabilities of AI agents to streamline user workflows.

News
  • 06/2024

    📄 Our paper: "Developer Behaviors in Validating and Repairing LLM-Generated Code Using IDE and Eye Tracking" got accepted by VL/HCC 2024! Check the preprint here.

  • 05/2024

    📄 One paper got accepted by DIS 2024! We presented PodReels, a human-AI co-creative tool that works with Adobe Premiere Pro to help podcasters create video teasers for social media by streamlining clip selection and editing.

  • 04/2024

    📄 One paper got accepted by C&C 2024! We presented MIMOSA, an interactive tool that works with Adobe Premiere Pro to let video creators generate and manipulate spatial audio effects for videos.

  • 02/2024

    📣 Excited to share that I will be working at Microsoft Research as a research intern this summer, collaborating with the EPIC group!

  • 01/2024

    📄 Our paper got accepted by CHI 2024! We presented SPICA, a system that facilitates an interactive video consumption experience for blind or low-vision viewers. See you in Hawaii 🏝️

  • 01/2024

    📄 Our paper got accepted by TiiS. We extended our previous IUI'23 research with new experiments on large language models (GPT-4) and a root-cause analysis of model attention versus human attention. Check the preprint here.

  • 12/2023

    ☘️ The official website of ND HCI is out! Come and meet the brilliant minds at Notre Dame! Go Irish!

  • 10/2023

    📄 Our paper on Interactive Text-to-SQL Generation got accepted to EMNLP 2023!

  • 06/2023

    📄 PEANUT got accepted by UIST 2023! We presented a human-AI collaboration tool for annotating sounding objects in videos. See you in San Francisco 🌁!

  • 01/2023

    📄 One paper accepted by the PLATEAU Workshop. We conducted an empirical study on developer behaviors in validating and repairing AI-generated code. Check our paper here.

  • 01/2023

    📄 Our paper on error taxonomy and error-handling mechanisms in Natural Language to SQL (NL2SQL) got accepted by ACM IUI 2023. See you in Sydney 🇦🇺!

  • 12/2022

    📣 I will be working at Adobe Research as a research scientist intern in Summer 2023. Super excited to work with Dingzeyu Li.

  • 10/2022

    💬 Oral presentation (virtual) at the AV4D Workshop @ ECCV 2022.

  • 10/2022

    💬 Presented a poster titled "Towards Effective Multi-Modal Human-AI Collaboration on Audiovisual Data" @ the Lucy Institute for Data & Society 2022 Fall Symposium.

  • 03/2022

    🏆 I'm honored to receive a hardware grant (one RTX A6000 48GB) from the NVIDIA Academic Hardware Grant Program to support my research on creative tools and VR/AR.

  • 09/2021

    🥪 Our lab has an official name! SaNDwich: Science of AI at Notre Dame With Interaction between Computer and Human.

  • 09/2021

    ☘️ Started my Ph.D. journey at the University of Notre Dame.

Publications
VL/HCC'24
Developer Behaviors in Validating and Repairing LLM-Generated Code Using IDE and Eye Tracking [Paper]

Ningzhi Tang*, Meng Chen*, Zheng Ning, Aakash Bansal, Yu Huang, Collin McMillan, and Toby Li
2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2024)

DIS'24
PodReels: Human-AI Co-Creation of Video Podcast Teasers [Paper] [Video] [Preview]

Sitong Wang, Zheng Ning, Anh Truong, Mira Dontcheva, Dingzeyu Li, Lydia B. Chilton
Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS'24)

C&C'24
MIMOSA: Human-AI Co-Creation of Computational Spatial Audio Effects on Videos [Paper] [Project]

Zheng Ning*, Zheng Zhang*, Jerrick Ban, Kaiwen Jiang, Ruohong Gan, Yapeng Tian, and Toby Li
Proceedings of the 15th Conference on Creativity and Cognition (C&C'24)

CHI'24
SPICA: Interactive Video Content Exploration through Augmented Audio Descriptions for Blind or Low-Vision Viewers [Paper] [Project]

Zheng Ning, Brianna Wimer, Kaiwen Jiang, Keyi Chen, Jerrick Ban, Yapeng Tian, Yuhang Zhao, and Toby Li
In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI'24)

TiiS'24
Insights into Natural Language Database Query Errors: From Attention Misalignment to User Handling Strategies [Paper]

Zheng Ning*, Yuan Tian*, Zheng Zhang, Tianyi Zhang, Toby Li
ACM Transactions on Interactive Intelligent Systems (TiiS), IUI 2023 special issue

UIST'23
PEANUT: A Human-AI Collaborative Tool for Annotating Audio-Visual Data [Paper] [Video]

Zheng Zhang*, Zheng Ning*, Chenliang Xu, Yapeng Tian, and Toby Li
In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST'23)

EMNLP'23
Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations [Paper]

Yuan Tian, Zheng Zhang, Zheng Ning, Toby Jia-Jun Li, Jonathan K. Kummerfeld, Tianyi Zhang
The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP'23)

IUI'23
An Empirical Study of Model Errors & User Error Discovery and Repair Strategies in Natural Language Database Queries [Paper]

Zheng Ning*, Zheng Zhang*, Tianyi Sun, Yuan Tian, Tianyi Zhang, and Toby Li
The 28th International Conference on Intelligent User Interfaces (IUI'23)