Assistant Professor
Associate Director, ML@GT
School of Interactive Computing
CODA room S1181B
Email: zkira at gatech dot edu

I will be teaching a seminar/project-style course on Vision-Language Foundation Models in Fall 2024!

Latest News [Older]


07/2024 Two ECCV papers on long-horizon embodied rearrangement and large-scale MAE-based NeRF pre-training!
06/2024 Congratulations to Junjiao Tian for defending his thesis! Amazing body of work!
03/2024 Congratulations to James Smith for winning the Outstanding GRA Award and Ram Ramrakhya for winning the CoC Rising Star Doctoral Student Research Award! Well done!
02/2024 Three CVPR papers on unsupervised open-world segmentation with diffusion models, an open-world Go To Anything benchmark, and semantic placement.
01/2024 Co-Organizing Workshop: RoboNerF: 1st Workshop on Neural Fields in Robotics
01/2024 Two ICRA papers on self-supervised 3D pose estimation and multi-robot correspondences.
01/2024 ICLR paper on Habitat 3.0: fast simulation for studying learned human-robot interaction!
11/2023 Survey Paper on robotics and foundation models, a multi-institution collaboration!
11/2023 Congratulations to James Smith, Nathan Glaser, Yen-Cheng Liu, Zubair Irshad, and Chia-Wen Kuo on defending their theses!
11/2023 Invited talk at the CoRL 2023 LangRob Workshop (slides, video).
10/2023 Two WACV papers on missing-modality robustness and domain generalization with latent augmentation in Transformers.
10/2023 Three NeurIPS papers on robust fine-tuning, mixture-of-experts for unified models on dataset mixtures, and energy-based normalizing flows for generative models.
09/2023 CoRL paper on Open-Vocabulary Mobile Manipulation (w/ Meta)
08/2023 EURASIP Best Paper Award for our early TS-LSTM work on space/time methods for video. Congrats Kevin and Steve!
08/2023 Invited talk at the CVIT Summer School on AI
07/2023 ICCV paper on neural fields for single/few-view novel view synthesis for outdoor scenes!
06/2023 Invited talk at CVPR 2023 CLVISION workshop, AC Meeting, and industry booth.
06/2023 Second place paper at the ICRA CoPerception workshop. Congrats Nathan Glaser!
05/2023 NeurIPS Challenge: HomeRobot: Open-Vocabulary Mobile Manipulation (OVMM)
04/2023 ICML paper on Social Embodied Rearrangement (work with Meta)!
04/2023 Honored to have won the College of Computing Outstanding Junior Faculty Research Award! Made possible by great work by my students and support from colleagues and mentors!
04/2023 ICCV tutorial (with Tyler Hayes, Aishwarya Agrawal, Massimiliano Mancini, and Riccardo Volpi) on Continual Learning accepted! Details coming soon!
04/2023 Congrats to James Smith on being accepted to the CVPR 2023 Doctoral Colloquium.
02/2023 Four CVPR papers on robust fine-tuning of large pre-trained models, single- and multi-modal continual learning, and retrieval-based captioning.
01/2023 ICLR paper on inverse reinforcement learning (Notable top 25% paper).
01/2023 NSF CAREER Award - Honored to receive this award to further visual learning in an open and continual world!
01/2023 Area Chair for ICLR and NeurIPS 2023
01/2023 ICRA paper integrating distributed perception and planning with trajectory exchange.
11/2022 Gift Funding from Google. Thank you for the support!
10/2022 Talk at the ECCV 2022 Workshop on Cross-Modal HRI
09/2022 One NeurIPS paper on parameter-efficient multi-task training of vision transformers!
09/2022 Funding from TRI to support Zubair; thank you for the support!
08/2022 Area Chair for CVPR 2023 and Associate Editor for ICRA 2023.
07/2022 Two ECCV papers on open-set semi-supervised detection and implicit models for shape/pose estimation
05/2022 Co-Organizer of 2nd workshop on Learning from Limited and Imperfect Data (L2ID)
05/2022 NeurIPS Challenge accepted: The Habitat Rearrangement Challenge 2022.
03/2022 Nature Machine Intelligence paper on biological underpinnings of lifelong learning
03/2022 Action Editor for the exciting new Transactions on ML Research (TMLR)
03/2022 Two CVPR papers on unbiased teacher v2 and cross-modal models for captioning.
02/2022 Congrats to Andrew Szot for his Outstanding Online Teaching Asst. of the Year Award
02/2022 Two ICRA papers on imbalanced semantic segmentation and 6D pose estimation for grasping.
01/2022 Funding from IRIM/IPaT for multi-modal continual learning, with Diyi Yang.
01/2022 Talks at UIUC, Vanderbilt, UW, and UTSA.
09/2021 Two NeurIPS Spotlights (<3% accept) on principled calibration and Habitat 2.0 (w/ FB).

2021 We had 1 ICLR, 2 ICRA, 1 ICCV, 2 NeurIPS spotlight, and 1 IJCNN papers. I served as Area Chair for ICLR and NeurIPS, we had significant press for Habitat 2.0, received funding for DARPA LwLL and DARPA L2M Phase II projects, gave invited talks at Google and Microsoft AI, and I co-organized the CVPR L2ID Workshop.

2020 We had 1 AAAI, 2 ICRA, 3 CVPR, 2 ECCV papers. We partnered with Facebook on a co-teaching program, I served as Area Chair for NeurIPS, and I gave invited talks at the VL3, Agriculture Vision, and ULAD-2020 Workshops.

2019 We had 3 ICLR, 1 ICRA, 1 CVPR, 1 Oral ICCV, 2 journal, 1 WACV, and ICLR/IROS workshop papers. We received funding from DARPA LwLL and Samsung.

2018 We had ICLR, CVPR, WACV, and NeurIPS Continual Learning Workshop papers, and one journal paper. We received new funding from DARPA L2M and ONR.

08/2018 Assistant Professor!

About Me

I am an Assistant Professor at the School of Interactive Computing in the College of Computing, and serve as an Associate Director of ML@GT, the machine learning center at Georgia Tech. Previously I was a Branch Chief at the Georgia Tech Research Institute (GTRI) and a Research Scientist at SRI International Sarnoff in Princeton. I received my Ph.D. in 2010, advised by Professor Ron Arkin.

I lead the RobotIcs Perception and Learning (RIPL) lab. My research focuses on the intersection of learning methods for sensor processing and robotics, developing novel machine learning algorithms and formulations to solve some of the more difficult perception problems in these areas. I am especially interested in moving beyond supervised learning (un-/semi-/self-supervised and continual/lifelong learning) as well as in distributed perception (multi-modal fusion, learning to incorporate information across a group of robots, etc.).