I research AI for Safety and Safety of AI
I am an Assistant Professor at CSE, College of Computing at Georgia Institute of Technology. My research expertise lies in developing AI, applied machine learning, and data mining methods. I build graph, content (NLP, multimodal), and adversarial learning methods, while utilizing terabytes of data from multiple online platforms spanning multiple modalities and languages. I innovate scalable and efficient methods for online safety by detecting and mitigating malicious actors (e.g., ban evaders, sockpuppets, coordinated campaigns, fraudsters) and dangerous content (e.g., misinformation, hate speech, fake reviews). At the same time, I develop methods to improve the security and safety of AI methods.
I am passionate about research topics that comprehensively study some of the biggest threats to web safety and integrity from complementary angles.
I am passionate about putting my research into practice -- my models have been used at Flipkart (India's largest e-commerce platform, acquired by Walmart), have influenced Twitter's Birdwatch platform (a community-driven misinformation detection platform), and are now being deployed on Wikipedia.
Prior to Georgia Tech, I was a visiting researcher at Google AI, a postdoctoral researcher at Stanford University, and a PhD student at the University of Maryland. I am honored to be recognized as an NSF CAREER awardee, Kavli Fellow (by the National Academy of Sciences), Forbes 30 Under 30, Rising Star in Data Science by Frontiers in Big Data, CRA Computing Innovation Mentor, Facebook Faculty Research Awardee, Adobe Faculty Research Awardee, Class of 1969 Teaching Fellow, ACM SIGKDD Doctoral Dissertation Award 2018 runner-up, WWW 2017 Best Paper Award runner-up, Larry S. Davis Doctoral Dissertation Award 2017, and Dr. B.C. Roy Gold Medal. My work has been covered in a documentary (Familiar Shapes), in a radio interview (WABE), and by popular press, including Wired, CNN, The Wall Street Journal, TechCrunch, New York Magazine, and more.
Online malicious actors and dangerous content threaten public health, democracy, science, and society. To combat these threats, I build technological solutions, including accurate and robust models for early identification, prediction, and attribution, as well as social mitigation solutions, such as empowering people to counter online harms. I have conducted the largest study of malicious sockpuppetry across nine platforms, studied ban evasion and recidivism on online platforms, and produced some of the earliest works on online misinformation. I am one of the first to investigate the reliability of web safety models used in practice, including Facebook's TIES and Twitter's Birdwatch. My work is also among the first to study whole-of-society solutions to mitigate online misinformation.
My research interests lie in comprehensively studying some of the biggest threats to Web Safety and Integrity from complementary angles:
In detail, my research interests span the following topics:
(1) AI for Safety: I develop methods to efficiently characterize the behavior of, and detect, both harmful content and malicious actors. Accurate characterization and early detection can greatly improve the safety, integrity, and well-being of online users, communities, and platforms. I have worked on the following types of bad behavior:
(2) Secure, Robust, and Responsible AI: Machine learning and deep learning models are being used for high-stakes tasks. However, their trustworthiness, reliability, and robustness -- against both manipulation by smart adversaries and unintentional changes in data -- are not well understood. I have explored how adversaries can manipulate recommender systems for their gains. I have conducted the first investigations to quantify the trustworthiness of Facebook's TIES deep learning-based fraud detection models [ACM SIGKDD 2021], recommender systems [ACM CIKM 2022], graph-based models [ACM CIKM 2021b], and the community-driven counter-misinformation platform used at Twitter's Birdwatch [ASONAM 2021b].
(3) Graphs and Networks: Modeling and predicting over large-scale networks is crucial to mining actionable insights from large interconnected data, including social networks, e-commerce networks, knowledge graphs, spatio-temporal networks, and interaction networks. My relevant works include:
(4) Recommender Systems and Behavior Modeling: Recommender systems power much of the content and products that we see online. I develop user-based and graph-based efficient recommender systems that are accurate, scalable, and trustworthy [ACM CIKM 2021, ACM SIGKDD 2019]. I also investigate how malicious actors can manipulate deep learning-powered recommender systems for their ulterior motives. I create new techniques to quantify this robustness and innovate new adversarially robust deep recommender system architectures, to usher in an era of trustworthy recommendations. Relevant works include:
For my complete list of publications, please refer to my Google Scholar profile.
Highlights (selected from the full list below)
List of all publications
Conference, Journal, and Other Publications
Included in the curriculum at: University of Waterloo
Press: Russian spam accounts are still a big problem for Reddit (Engadget), What Reddit Tells Us About Political Coalitions and Conflicts (The Atlantic), Most Reddit battles are started by 1 percent of communities (Engadget), Tiny percent of Reddit communities spark majority of conflicts (CNET), One Percent of Subreddits Are Responsible for Most of the Raids on Reddit (VICE), and more by Inverse, TheNextWeb, theregister.co.uk
Included in the curriculum at: Stanford University
Best Paper Award Honorable Mention
Documentary: Familiar Shapes by Heather D. Freeman
Press: Sock puppet accounts unmasked by the way they write and post (New Scientist), Tool unmasks online puppeteers (New Scientist, print version), Spotting sockpuppets with science (TechCrunch), Sock Puppet Accounts on the Internet Getting You Down? Here’s How to Spot Them (WOWscience)
Top 10 most cited papers of ICDM in the last 5 years. [Link]
Included in the curriculum at: UIUC, University of Waterloo, McGill University, Texas A&M University, University of Hawaii, University of Freiburg, Leibniz University Hannover, University of Alberta, University of Wellington, New Zealand, and Bari BigData winter school 2017.
Press: Should you worry about people who are too polite? (CNN), When Diplomacy Leads to Betrayal (The Wall Street Journal), Here’s a Good Reason to Be Wary of Overly Polite People (New York Magazine) and more here.
For all publications, please see my Google Scholar.
The CLAWS Lab - Computational Data Science Lab for the Web and Social Media - at Georgia Tech develops data science and applied machine learning solutions to the most pressing challenges facing users, communities, and platforms on the web and social media. We focus on the pertinent online threats of malicious actors and dangerous content. We investigate the social and technological factors behind these issues and innovate multi-pronged solutions to overcome these challenges.
Sponsors: We are grateful for grants and gifts from NSF (CNS-2154118, IIS-2027689, ITE-2137724, ITE-2230692, CNS-2239879), DARPA, CDC, IDEaS, The Home Depot, Adobe, Google, Facebook, and Microsoft.
Masters and Undergraduate Students:
Malicious, fake, fraud behavior and content:
Paper Reviewing and Program Committee:
Proposal Reviewing for NSF and other Agencies:
Senior Program Committee/Area Chair: