I am a visiting researcher at Meta FAIR, advised by Dr. Michal Drozdzal and Prof. Adriana Romero. I am also a PhD candidate at Mila and Université de Montréal, advised by Prof. Aishwarya Agrawal.
My research interests lie at the intersection of computer vision and natural language processing. I believe that, like humans (and other animals), AI systems should have a holistic understanding of the world around them. This means working with multiple sensory modalities, among which vision and language stand out as particularly interesting. My work focuses on multimodal vision-language generative models, i.e., models capable of generating images and/or text conditioned on multimodal inputs.
Latest news

December 12, 2024 · Foundational models
EvalGIM: A Library for Evaluating Generative Image Models
Melissa Hall, Oscar Mañas, Reyhane Askari, Mark Ibrahim, Candace Ross, Pietro Astolfi, Tariq Berrada Ifriqi, Marton Havasi, Yohann Benchetrit, Karen Ullrich, Carolina Braga, Abhishek Charnalia, Maeve Ryan, Mike Rabbat, Michal Drozdzal, Jakob Verbeek, Adriana Romero Soriano

June 05, 2024 · Foundational models
An Introduction to Vision-Language Modeling
Florian Bordes, Richard Pang, Anurag Ajay, Alexander C. Li, Adrien Bardes, Suzanne Petryk, Oscar Mañas, Zhiqiu Lin, Anas Mahmoud, Bargav Jayaraman, Mark Ibrahim, Melissa Hall, Yunyang Xiong, Jonathan Lebensold, Candace Ross, Srihari Jayakumar, Chuan Guo, Diane Bouchacourt, Haider Al-Tahan, Karthik Padthe, Vasu Sharma, Hu Xu, Ellen Tan, Megan Richards, Samuel Lavoie, Pietro Astolfi, Reyhane Askari, Jun Chen, Kushal Tirumala, Rim Assouel, Mazda Moayeri, Arjang Talattof, Kamalika Chaudhuri, Zechun Liu, Xilun Chen, Quentin Garrido, Karen Ullrich, Aishwarya Agrawal, Kate Saenko, Asli Celikyilmaz, Vikas Chandra