How SAIF CHECK is using Meta Llama 3 to validate and build trust in AI models
June 20, 2024

As artificial intelligence becomes more integrated into business operations and everyday life, it’s important that companies are aware of potential risks and compliant with local laws in the markets where the technology is used. Failure to do so can result in severe consequences, including legal actions and hefty fines.

While staying abreast of risks isn’t easy, it’s necessary. SAIF CHECK, a company based in Riyadh, Saudi Arabia, built a model evaluation system using Meta Llama 3 to help address this challenge. Working with clients in the Middle East and North Africa, SAIF CHECK offers an assessment, auditing, and certification service for companies that want to check their AI models against various legal, regulatory, privacy, and data security risks.

A large part of the company’s work is scanning regulatory environments around the world and then creating, obtaining, and sourcing documentation that describes those environments in concrete terms. SAIF CHECK integrates these findings into a growing knowledge base spanning a variety of regulatory domains. Its Llama 3-based system allows for quick updates to that knowledge base, enabling the machine agent to understand both the environment of a customer’s AI model and the relevant regulatory landscape. The system supports easy, conversational queries and delivers relevant answers to a person’s regulatory questions using a Retrieval-Augmented Generation (RAG) framework grounded in a large corpus of AI regulations.
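The broad shape of such a RAG flow can be sketched as follows. This is a minimal, hypothetical illustration only: SAIF CHECK has not published its retriever, document store, or prompt format, and the toy word-overlap ranking here stands in for a real embedding-based retriever.

```python
# Hypothetical sketch of a RAG-style query flow: retrieve relevant
# regulatory passages, then assemble them into a grounded prompt for
# the language model. The corpus, ranking, and prompt are illustrative.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str, corpus: dict[str, str], doc_ids: list[str]) -> str:
    """Assemble retrieved passages and the question into a grounded prompt."""
    context = "\n".join(corpus[d] for d in doc_ids)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = {
    "sa-pdpl": "Saudi Arabia personal data protection law requires consent",
    "eu-ai-act": "EU AI Act classifies high risk AI systems",
}
query = "What does the EU AI Act say about high risk systems?"
top = retrieve(query, corpus)
prompt = build_prompt(query, corpus, top)
```

In a production system the prompt would then be sent to the Llama model, whose answer is grounded in the retrieved regulatory text rather than in the model's parametric memory alone.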

“SAIF CHECK’s goal is to make model evaluation a conversational workflow that a technical or non-technical user could complete,” says Dr. Shaista Hussain, founder and CEO of SAIF CHECK. “We’ve integrated Llama 3 into a system designed to retain a customer’s unique business context [country of operation, regulatory agency] while retrieving and synthesizing information from diverse sources.”

Retaining context with Ghost Attention

Llama first drew the SAIF CHECK team’s attention when they read the 2023 paper published by the Llama 2 team. Hussain says they were especially intrigued by the Llama team’s approach to solving a common problem with conversational AI systems—they tend to lose track of context over the course of a conversation. For example, if you tell an AI model to respond only in haiku, it may forget that initial instruction after a couple of conversational turns unless you repeat it with every new request. Having to repeat the instruction takes up valuable tokens and limits the overall length of the conversation.

To address this issue, the Llama team developed a fine-tuning technique called Ghost Attention (GAtt), applied during reinforcement learning from human feedback (RLHF). GAtt synthetically attaches the initial instruction to the training dialogues so the model learns to keep it in mind, resulting in a model that is much better at retaining initial instructions over the course of a multi-turn conversation.
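The data-side idea behind GAtt can be illustrated with a small sketch. As described in the Llama 2 paper, the instruction is concatenated to every user turn when generating a synthetic dialogue, then dropped from all turns except the first before fine-tuning, so the model learns to honor the instruction across the whole conversation. The function name and dialogue below are hypothetical.

```python
# Illustrative sketch of the Ghost Attention (GAtt) data trick from the
# Llama 2 paper: after synthetic dialogue generation, the system
# instruction is kept only on the first user turn of each training
# sample, teaching the model to carry it across later turns.

def gatt_training_dialogue(instruction: str, turns: list[tuple[str, str]]):
    """Return (user, assistant) pairs with the instruction kept only on turn 1."""
    processed = []
    for i, (user_msg, assistant_msg) in enumerate(turns):
        if i == 0:
            # Instruction appears explicitly only in the opening turn.
            user_msg = f"{instruction} {user_msg}"
        processed.append((user_msg, assistant_msg))
    return processed

dialogue = gatt_training_dialogue(
    "Always answer in haiku.",
    [("Describe the sea.", "Waves fold on gray sand..."),
     ("Now the desert.", "Dunes drift under sun...")],
)
```

The full technique also zeroes out the training loss on intermediate turns; this sketch shows only the instruction-placement step.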

“Because our AI Model evaluation surveys are processed over multiple runs, we take advantage of Llama’s GAtt mechanism, which helps control dialogue flow over multiple turns,” says Hussain. “By doing so, our platform can offer users more precise and informative responses to raise the quality of the output from our services.”

To customize Llama for its use case, SAIF CHECK has configured multiple layers through an additive fine-tuning process. The generation layer, built on Llama 3 Instruct, receives a person’s prompt and context. Its outputs feed into a regulatory classifier trained on documents from various regulatory bodies and country-specific regulatory documents in SAIF CHECK’s knowledge base, enabling the system to categorize the prompt and context under a distinct country and regulatory body.
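The two-stage flow described above can be sketched in miniature. This is a hypothetical illustration: the placeholder `generate` function stands in for Llama 3 Instruct, and the keyword rules and regulator labels are invented for the example, not SAIF CHECK's actual classifier.

```python
# Hypothetical sketch of the layered pipeline: a generation step
# (standing in for Llama 3 Instruct) followed by a regulatory
# classifier that tags the prompt and context with a country and
# regulatory body. Rules and labels are illustrative only.

def generate(prompt: str, context: str) -> str:
    """Placeholder for the Llama 3 Instruct generation layer."""
    return f"[draft answer for: {prompt} | {context}]"

def classify_regulator(text: str) -> tuple[str, str]:
    """Toy classifier mapping keywords to a (country, regulatory body) pair."""
    rules = {
        "saudi": ("Saudi Arabia", "SDAIA"),
        "eu": ("European Union", "European Commission"),
    }
    lowered = text.lower()
    for keyword, label in rules.items():
        if keyword in lowered:
            return label
    return ("unknown", "unknown")

draft = generate("Is my chatbot compliant?", "Deployed in Saudi Arabia")
country, body = classify_regulator(draft)
```

In the real system the classifier would be a trained model over the regulatory knowledge base rather than keyword rules; the sketch only shows how the generation output flows into the classification stage.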

Confidence through ethical alignment

After learning more about the responsible AI principles used to train Llama, the team decided to switch to using Llama models for text generation. Meta has made significant efforts to both blue-team and red-team its Llama models, which gives the SAIF CHECK team confidence that the Llama models align with their own priorities.

“By using Llama, we’re using an ethically trained and sourced model in the core of our process, so our process aligns with our values,” says Hussain.

She acknowledges that challenges remain in figuring out how to correctly target documents, query them, and generate responses that are appropriate to each person’s context and specific requirements.

“Every machine learning model is different, and each company deploys their model with a unique process,” she adds.

Hussain is confident that the team’s approach, which involves “chunking” documents into smaller pieces of content, will be successful.

“We believe Llama is an excellent model to use to validate our hypothesis around chunking strategies and monitor the effectiveness of our services’ responses,” she says.
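One common chunking strategy, fixed-size windows with overlap, can be sketched as follows. SAIF CHECK has not published its actual chunking parameters or method, so the function and values below are assumptions for illustration.

```python
# Minimal sketch of a common chunking strategy: split a document into
# fixed-size word windows that overlap, so retrieval does not miss
# passages that straddle a chunk boundary. Sizes are illustrative.

def chunk_words(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into word windows of `size`, each overlapping the
    previous window by `overlap` words."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_words(doc, size=50, overlap=10)
```

The overlap is the design choice being tested: larger overlaps reduce the chance that a relevant passage is split across chunks, at the cost of more redundant text in the index.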

The future of AI and human collaboration

Llama’s responsible grounding and transparency are crucial to the team’s values and to their view of how AI can make a global impact. SAIF CHECK believes the true role of AI will be to complement and enhance humans’ use of computers.

For that to work, humans need to trust the AI models they’re using. That trust is an essential building block for SAIF CHECK—both for its own AI models and the models it validates for customers.

“Since Llama is open source, we can literally see its development, trust its documentation, and have confidence that we’re not alone in understanding and implementing this model into real-world services,” says Hussain.

