I am an Associate Professor in the School of Interactive Computing at Georgia Tech. Until recently, I was a Senior Director leading the Embodied AI (robotics, virtual agents, and egocentric computer vision) efforts in the Fundamental AI Research (FAIR) team at Meta.
I am fascinated by the natural phenomenon of intelligence, and I work on understanding and advancing the limits of artificial intelligence (AI).
More specifically, my research lies at the intersection of machine learning and computer vision, with forays into robotics and natural language processing.
Here are some representative projects:
- Summarizing beliefs of AI agents via diverse plausible predictions/hypotheses: Diverse Beam Search, Multiple Choice Learning, and the Tutorial on Diversity at CVPR '13 and CVPR '16.
- Vision-and-language (or multimodal AI): Image Captioning, Visual Question Answering, Visual Dialog, Audio-Video Dialog, and Human-AI GuessWhich games.
- Embodied AI and robotics: Habitat: A Platform for Embodied AI; Decentralized Distributed PPO; Embodied Question Answering; Sim2Real Predictivity; ASC: Adaptive Skill Coordination for Robotic Mobile Manipulation; and LSC: Language-guided Skill Coordination for Open-Vocabulary Mobile Pick-and-Place.
- Explainable, Unbiased, and Trustworthy AI: Grad-CAM (Visual Explanations), Human-vs-machine attention, and Counterfactual Visual Explanations.
- Platforms for reproducible AI research: EvalAI, a platform for evaluating AI algorithms.
Bio. CV. Google Scholar.