Meet the winners of our first-ever LlamaCon Hackathon

May 13, 2025

Following LlamaCon, our inaugural AI event that convened developers from around the world, we hosted our first-ever LlamaCon Hackathon in San Francisco. The event brought together 238 talented developers and innovators from a pool of more than 600 registrants for a day of building. The challenge was to create a demonstrable project using the Llama API, Llama 4 Scout, or Llama 4 Maverick—or any combination of these cutting-edge tools—within just 24 hours.


The stakes were high, with $35K USD in cash prizes up for grabs, including awards for 1st, 2nd, and 3rd place, as well as a special side prize for the best use of the Llama API. Our panel of judges from Meta and our sponsoring partners carefully evaluated each of the 44 submitted projects.

We’re grateful to our partners—Groq, Crew AI, Tavus, Lambda, Nebius, and SambaNova—who provided invaluable support throughout the hackathon. Each sponsor offered usage credits, workshops with expert speakers, mentorship, onsite Q&A booths, judges, and remote support on Discord.

Meet the winners

We conducted two rounds of judging to whittle the 44 submitted projects down to the top six before awarding 1st, 2nd, and 3rd place, as well as best use of the Llama API.

  • OrgLens – 1st Place
    • OrgLens created an AI-enabled expert matching system that connects you with the right professionals within your organization. By analyzing data from sources such as Jira tasks, GitHub code and issues, internal documents, and resumes, OrgLens builds a comprehensive knowledge graph and a detailed profile for each contributor. You can then search for experts with AI-enabled search and even interact with a person’s digital twin to ask questions before reaching out. To demonstrate its capabilities, the team built a demo web application with React, Tailwind, and Django, leveraging the GitHub API and the Llama API to process and store data. OrgLens streamlines expert matching, making it easier to find the right person for the right job. (GitHub)
  • Compliance Wizards – 2nd Place
    • Compliance Wizards created an AI-enabled transaction analyzer that detects fraud and alerts users based on a custom risk assessment algorithm. An email notification is sent to the user, prompting them to report or acknowledge the flagged transactions, and users can then speak to an AI voice assistant to do so. Using the Llama API’s multimodal capabilities, fraud assessors can upload client information and search for relevant news about their clients to help determine whether those clients were involved in any noteworthy criminal activity. (GitHub)
  • Llama CCTV Operator – 3rd Place
    • A team led by Agajan Torayev built a Llama CCTV AI control room operator that automatically identifies custom surveillance video events without requiring any model fine-tuning. Operators define video events in plain language. Using Llama 4’s multimodal image understanding, the system samples every fifth frame to detect movement, assess the pre-defined events, and report them to the operator. (GitHub)
  • Geo-ML – Best Llama API Usage
    • Geologist William Davis used Llama 4 Maverick and GemPy to generate 3D geological models of possible dig locations, terrain maps, and mineral deposits. Geo-ML works by processing 400-page geology reports, consolidating the information into a structured geology domain-specific language, and then using that representation to generate 3D models of subsurface geology. (GitHub)

“It’s the first time I actually ever used an LLM API to extract the really long text and images from long geological research papers, so I used the really long context window of Llama Maverick and the text and image multimodal capabilities to extract the text and convert it to a domain-specific language, giving a condensed version of everything that is stored in the documents,” Davis said. “I spend most of my time reading through geology documents in my day-to-day work. Having an LLM that can do this work for me in the background will be really excellent.”

One finalist, Team Concierge, stood out by bringing their own GPUs to the competition.

“We believe the best aspect of Llama 4 Maverick is its sparse mixture of experts nature and open source availability, allowing for fine-tuning,” the team said. “Meta recently released an excellent fine-tuning tool, the Synthetic Data Generation tool, on GitHub. Using the Llama API, we compiled data from multiple sources to create QA datasets and fine-tuned a Llama 4 Maverick model. We plan to submit it to open benchmarks, as we currently lack a Llama 4 coder, and with the 1M context window, it promises to be exceptional.”
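The pipeline Team Concierge describes—compiling data from multiple sources into QA datasets for supervised fine-tuning—can be sketched in a few lines. The chat-style message schema and JSONL layout below are common conventions for fine-tuning data, not the team's exact format (an assumption for illustration):

```python
import json

def build_qa_records(pairs):
    """Convert (question, answer) pairs into chat-style fine-tuning
    records. This message schema is a widely used convention; the
    exact format Team Concierge used is not specified in the post."""
    records = []
    for question, answer in pairs:
        records.append({
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        })
    return records

def to_jsonl(records):
    """Serialize records as JSON Lines, the usual on-disk format
    for supervised fine-tuning datasets."""
    return "\n".join(json.dumps(r) for r in records)

# Hypothetical QA pairs gathered from multiple sources.
pairs = [
    ("What architecture does Llama 4 Maverick use?",
     "A sparse mixture-of-experts architecture."),
]
print(to_jsonl(build_qa_records(pairs)))
```

Each line of the resulting file is one self-contained training example, which makes the dataset easy to shard, stream, and merge from multiple sources.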

You can watch the finalist presentations on YouTube.

Developers can apply to the next Llama Hackathon, which will be held in New York City May 31 – June 1, 2025.

Join us in the pursuit of what’s possible with AI.
