Xiaodan Du

I am a Ph.D. student at Toyota Technological Institute at Chicago (TTIC), working on computer vision and machine learning. I am fortunate to be advised by Professor Greg Shakhnarovich. My main research interests are generative models, 2D/3D generation, and the interpretation of deep generative models.

Before my studies at TTIC, I received an MS degree in Computer Science, advised by Professor Svetlana Lazebnik, and a BS degree in Civil Engineering, both from the amazing University of Illinois Urbana-Champaign (UIUC).

I am also an avid collector of 1800s-1900s European historical and commemorative medals. Here is a subset of my finest pieces.


CV     Email     Google Scholar     Twitter/X.com     Github

News


06/2024 I presented our work Intrinsic-LoRA at CVPR 2024 workshops in Seattle. 📷Photos
04/2024 I attended the Multi-University Workshop organized by Toyota Research Institute in Los Altos; co-organized the student panel and served as a panelist. 📷Photos
03/2024 I will intern at Toyota Research Institute in Los Altos this summer, working with Dr. Vitor Guizilini.
10/2023 I attended ICCV 2023 in beautiful Paris. 📷Photos
09/2023 I was awarded the Outstanding TA Award. 📷Photos
02/2023 I started my internship at Adobe Research. 📷Photos
Research
SignMusketeers: An Efficient Multi-Stream Approach for Sign Language Translation at Scale
Shester Gueuwou, Xiaodan Du, Greg Shakhnarovich, Karen Livescu
arXiv
arXiv, 2024

We introduce SignMusketeers for sign language translation at scale. Using only 3% of the compute, 41x less pretraining data, and 160x fewer pretraining epochs, it achieves competitive performance (-0.4 BLEU) compared to the recent ASL-to-English translation SOTA.

Intrinsic LoRA: A Generalist Approach for Discovering Knowledge in Generative Models (previously titled: Generative Models: What do they know? Do they know things? Let's find out!)
Xiaodan Du, Nicholas Kolkin, Greg Shakhnarovich, Anand Bhattad
project page / arXiv / code
IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) workshops, 2024

We introduce Intrinsic LoRA (I-LoRA), a universal, plug-and-play approach that transforms any generative model into a scene intrinsic predictor, capable of extracting intrinsic scene maps directly from the original generator network.

Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation
Haochen Wang*, Xiaodan Du*, Jiahao Li*, Raymond A. Yeh, Greg Shakhnarovich
project page / arXiv / code
IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023

Generating 3D objects from 2D diffusion models by chaining scores with NeRF gradients.

Text-Free Learning of a Natural Language Interface for Pretrained Face Generators
Xiaodan Du, Raymond A. Yeh, Nicholas Kolkin, Eli Shechtman, Greg Shakhnarovich
arXiv / code
arXiv, 2022

We propose Fast text2StyleGAN, a natural language interface that adapts pre-trained GANs for text-guided human face synthesis.

Internship


06/2024 - 08/2024 Research intern at Toyota Research Institute under the supervision of Dr. Vitor Guizilini.
02/2023 - 05/2023 Research intern at Adobe Research under the supervision of Dr. Nick Kolkin and Dr. Eli Shechtman.
Services


Conference/Workshop Reviewer:

ECCV 2024 (3) | CVPR 2024 (3) | SIGGRAPH 2024 (1)

Workshop Organizer:


09/2024 ECCV 2024 Workshop on Knowledge in Generative Models: co-organizer
04/2024 Toyota Research Institute Multi-University Workshop: student panel co-organizer, panelist

This website's source code is borrowed from here.