The latest AI news from Meta
Latest News
May 07, 2024
Retrieval-Augmented Fine-Tuning (RAFT) combines the benefits of Retrieval-Augmented Generation and Supervised Fine-Tuning for better domain adaptation.
May 08, 2024
Mayo Clinic’s pioneering RadOnc-GPT is a large language model (LLM) leveraging Meta Llama 2 that has the potential to significantly improve the speed, accuracy, and quality of radiation therapy decision-making.
May 20, 2024
This Mother’s Day weekend, we teamed up with Cerebral Valley to host the first-ever Meta Llama 3 hackathon along with 10 other sponsors.
May 22, 2024
To drive the virtual world of Peridot, Niantic integrated Meta Llama 2, transforming its “Dots” into responsive AR pets that now exhibit smart behaviors.
June 06, 2024
Students can ask their AI-enabled study buddy questions on WhatsApp and Messenger and receive conversational replies that help them with their schoolwork.
June 18, 2024
Meta FAIR is releasing several new research artifacts. Our hope is that the research community can use them to innovate, explore, and discover new ways to apply AI at scale.
June 20, 2024
SAIF CHECK built a model evaluation system using Meta Llama 3 to help companies ensure their AI models are compliant with local laws.
July 23, 2024
Bringing open intelligence to all, our latest models expand context length, add support across eight languages, and include Meta Llama 3.1 405B, the first frontier-level open source AI model.
Today, we’re sharing the measures and safeguards we’ve taken to responsibly scale the Llama 3.1 collection of models, including the 405B.
August 05, 2024
We’re excited to begin accepting applications for the Llama 3.1 Impact Grants, the next iteration of a larger portfolio of work to support organizations as they pursue their ideas for how Llama can be used to address social challenges in their communities.
August 07, 2024
In this post, we’ll discuss the following question: “When should we fine-tune, and when should we consider other techniques?”
In this post, we explore some rules of thumb for curating a good training dataset.