The latest AI news from Meta
December 02, 2024
The team behind BiMediX2, an Arabic-English medical large multimodal model, aims to expand healthcare access in Africa and the Middle East.
December 12, 2024
Meta FAIR is releasing new research artifacts that highlight our recent innovations in developing agents, robustness and safety, and architectures that facilitate machine learning.
December 06, 2024
Inarix is transforming the agricultural industry by turning smartphones into pocket laboratories using Meta FAIR’s DINOv2.
November 19, 2024
Meta FAIR is releasing a large-scale dataset of experimental results on various materials, providing valuable insights for the development of new catalysts.
December 05, 2024
We’re releasing emg2qwerty and emg2pose—two large datasets and benchmarks for sEMG-based typing and pose estimation, as part of the NeurIPS 2024 Datasets and Benchmarks track.
November 22, 2024
SPDL is a framework-agnostic data loading solution that uses multi-threading to achieve high throughput in a regular Python interpreter (one built without the free-threading option enabled).
November 13, 2024
IBM and Meta are combining the power of IBM’s watsonx AI and data platform with Llama to help businesses reach their AI goals.
November 08, 2024
Teams embarked on a 30-hour journey to create AI solutions tackling real-world challenges, leveraging Meta’s Llama 3 models and WhatsApp APIs.
July 23, 2024
Bringing open intelligence to all, our latest models expand context length, add support across eight languages, and include Meta Llama 3.1 405B, the first frontier-level open source AI model.
October 31, 2024
Today, Meta FAIR is publicly releasing several new research artifacts that advance robotics and support our goal of reaching advanced machine intelligence (AMI).
August 28, 2024
An early adopter of Llama 3.1, Infosys is leveraging its capabilities to unlock new efficiencies as part of the company’s commitment to staying at the forefront of open source.
October 24, 2024
As our first quantized models in this Llama category, these instruction-tuned models retain the quality and safety of the original 1B and 3B models, while achieving a 2-4x speedup.