
How Llama and Oracle are helping Instituto PROA kickstart careers for students in Brazil

August 27, 2025

Instituto PROA, a nonprofit organization in Brazil, has transformed its job preparation process for young candidates by leveraging Llama and Oracle Cloud Infrastructure (OCI). By automating research into job openings and employers, PROA’s AI assistant has driven program growth by an astonishing 60x, resulting in 35,000 students enrolled per year.

PROA’s assistant engages with students in Portuguese, scouring the web for valuable insights about specific jobs and delivering detailed reports to aid in interview preparation. The seamless integration of Llama with OCI ensures a smooth deployment and alignment with PROA’s technical stack.

Before incorporating AI into its workflows, PROA staff spent an average of 30 minutes per report, manually searching for information across multiple websites and compiling it into a document. The new solution reduces this to just five minutes per report, significantly scaling impact. The automation not only saves time, it also ensures the information provided is consistent and high quality, eliminating hours of manual labor and freeing the team to focus on one-on-one student support.

“Llama is a core component of our AI-powered solution to enhance the efficiency of our job preparation process for young candidates,” says Alini Dal Magro, Executive Director at Instituto PROA. “We chose Llama 3.1 primarily because of its seamless integration with our existing infrastructure on Oracle Cloud Infrastructure, where all of our solutions are hosted. We needed a model that could efficiently leverage OCI’s capabilities, including its scalability and performance for AI workloads.”

How it works

When a user inputs a company name, PROA’s system sends the query to a search engine results page API to retrieve relevant results from the web. The results are fed into PROA’s knowledge base, which serves as the retrieval component of the RAG architecture. Llama 3.1 processes the retrieved data, combining it with its generative capabilities to create a structured, comprehensive report. The output is automatically formatted into a PDF that is shared with the candidate to help them prepare for an interview.
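The flow above can be sketched in a few lines of Python. This is a minimal illustration of the retrieve-then-generate pattern, not PROA's actual code: the function names are ours, and the search and model calls are stubbed out (in production they would hit a search-results API and a Llama 3.1 endpoint on OCI).

```python
def search_company(company: str) -> list[str]:
    """Stand-in for the search engine results page API call.

    Returns text snippets about the company; a real implementation
    would query the API and parse the result snippets.
    """
    return [
        f"{company} is hiring software engineers.",
        f"{company} was founded in Sao Paulo.",
    ]


def build_prompt(company: str, snippets: list[str]) -> str:
    """Combine the retrieved context with instructions for the model
    (in PROA's case, the interaction happens in Portuguese)."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        f"Using only the context below, write a structured interview-prep "
        f"report about {company}.\n\nContext:\n{context}"
    )


def generate_report(prompt: str) -> str:
    """Stand-in for a Llama 3.1 inference call; the real system would
    send the prompt to the hosted model and get the report text back."""
    return "REPORT:\n" + prompt  # placeholder response


company = "Acme Ltda"
report = generate_report(build_prompt(company, search_company(company)))
# The real pipeline would then render `report` into a PDF for the candidate.
```

The key design point is that the model never answers from memory alone: every report is grounded in the freshly retrieved snippets, which is what keeps the output current and consistent across candidates.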

Beyond the RAG-based generative AI application, PROA is exploring additional possibilities, such as enhancing the PDFs with more personalized insights and integrating tool-calling features to further automate parts of the candidate preparation process.
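Tool calling of the kind PROA is exploring could look roughly like this: the application declares functions the model is allowed to invoke, and routes the model's structured requests to local code. This is an illustrative sketch only — the `generate_pdf` tool, its schema, and the dispatch logic are our assumptions, not PROA's implementation.

```python
# A hypothetical tool the model may request; the schema shape follows the
# common JSON-style function-definition convention, purely for illustration.
TOOLS = [{
    "name": "generate_pdf",
    "description": "Render a finished report as a PDF for the candidate.",
    "parameters": {
        "type": "object",
        "properties": {
            "company": {"type": "string"},
            "report_text": {"type": "string"},
        },
        "required": ["company", "report_text"],
    },
}]


def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    if tool_call["name"] == "generate_pdf":
        args = tool_call["arguments"]
        # A real implementation would render the PDF; here we just
        # return the name of the file that would be produced.
        return f"{args['company']}_report.pdf"
    raise ValueError(f"unknown tool: {tool_call['name']}")


# Example of a call the model might emit after drafting a report.
call = {"name": "generate_pdf",
        "arguments": {"company": "Acme Ltda", "report_text": "..."}}
print(dispatch(call))  # prints "Acme Ltda_report.pdf"
```

Wiring steps like PDF generation behind tool calls would let the model drive more of the preparation flow end to end, with the application keeping control of what each tool actually does.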

Looking ahead

With the release of Llama 3.2, the PROA team sees opportunities to leverage its multimodal capabilities and lightweight models. The 11B and 90B vision models, which process both text and images, could enhance dossiers by incorporating visual data, such as analyzing company infographics or job-related visuals, to provide richer insights for candidates.

“The primary goal remains bringing more efficiency and impact to our platform, while helping low-income young people succeed in job searches with high-quality, accessible resources,” Dal Magro says.


