Adopting rapidly evolving digital solutions is a challenge for financial institutions. Building with Llama gave ANZ, one of Australia’s Big Four banks, an opportunity to improve efficiency across its engineering workflows.
ANZ was looking to streamline engineering workflows for its Ensayo AI platform, which is designed to accelerate software delivery and facilitate knowledge sharing across its technology division. The bank needed technology that could improve collaboration between its business and technology teams and automate repetitive software development tasks.
“Llama stood out for its flexible deployment options, allowing ANZ to choose between on-premises and cloud platforms to align with policy requirements,” says Rico Zhang at ANZ. “This flexibility is paramount in financial services, where data security and compliance are non-negotiable.”
The use of an open source model like Llama, along with development and debugging tools, enabled the bank to remain compliant while maintaining operational efficiency.
ANZ’s use of Llama has evolved significantly since it employed the earliest version for basic prompt-response tasks, which allowed the team to understand the model’s strengths and limitations within the context of the Ensayo AI software delivery acceleration platform.
Over time, Llama has become deeply integrated into the Ensayo AI framework, evolving from a general-purpose assistant into a specialized, knowledge-driven system.
With the release of Llama 3, the team integrated advanced features like retrieval-augmented generation (RAG) and agent capabilities. This evolution enabled ANZ to establish a more structured knowledge base, allowing Ensayo AI to dynamically retrieve and present the most relevant information.
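To illustrate the retrieval step, the sketch below shows a minimal RAG loop over an in-memory knowledge base: documents are embedded once, the most relevant entries are retrieved for each query, and the result is folded into a grounded prompt for the model. The sentence-transformers embedder, the sample knowledge-base entries, and the helper names are illustrative assumptions, not details of ANZ’s Ensayo AI implementation.

```python
# Minimal RAG sketch over an in-memory knowledge base.
# The embedder, sample entries, and helpers are illustrative only.
from sentence_transformers import SentenceTransformer, util

# Example knowledge-base entries (e.g. API docs, incident notes).
KNOWLEDGE_BASE = [
    "POST /payments initiates a domestic transfer and returns a payment ID.",
    "GET /accounts/{id}/balance returns the current available balance.",
    "Incident 4211: timeouts on /payments when the batch queue exceeds 10k items.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
kb_embeddings = embedder.encode(KNOWLEDGE_BASE, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the knowledge-base entries most similar to the query."""
    query_emb = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, kb_embeddings, top_k=top_k)[0]
    return [KNOWLEDGE_BASE[hit["corpus_id"]] for hit in hits]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt for the language model."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

print(build_prompt("What does the payments API return?"))
```

The assembled prompt would then be sent to the Llama model, keeping its answers anchored to the retrieved documentation rather than to general pretraining knowledge.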
In the banking sector, ensuring data privacy and security compliance is paramount. Even though Ensayo AI doesn’t handle customer or production data, it must still meet strict security standards.
ANZ formed a dedicated AI Working Group, an AI Use-Case Steering Committee, and an Ethics and Privacy Review Function to collectively define and enforce security protocols, compliance guidelines, and ethical standards to govern the use of generative AI.
The team implemented secure sandbox environments and utilized data anonymization techniques to protect sensitive information.
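As a rough illustration of the anonymization step, the snippet below masks a few common token types before text enters a sandbox environment. The regular expressions and placeholder labels are hypothetical examples, not ANZ’s actual rules.

```python
import re

# Illustrative anonymization pass; patterns and labels are examples only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ACCOUNT": re.compile(r"\b\d{6}-\d{8}\b"),       # e.g. BSB-account style
    "CARD": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def anonymize(text: str) -> str:
    """Replace sensitive tokens with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Contact jane.doe@example.com about account 123456-98765432."))
```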
ANZ worked cross-functionally with internal and external stakeholders to align the implementation with relevant policies.
To fine-tune Llama for Ensayo AI-specific needs, they curated and prepared three key datasets: ANZ System’s API Specification (Swagger), ANZ Test Script Dataset, and ANZ Business Service and Production Incident History.
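One way such a corpus can be turned into training examples is sketched below: each path and method in an OpenAPI (Swagger) specification becomes an instruction/response record. The file name, record schema, and field choices are assumptions for illustration, not ANZ’s actual data pipeline.

```python
import json

# Sketch: convert an OpenAPI (Swagger) spec into instruction/response pairs.
# The spec path and record format are hypothetical.
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def swagger_to_examples(spec_path: str) -> list[dict]:
    with open(spec_path) as f:
        spec = json.load(f)
    examples = []
    for path, methods in spec.get("paths", {}).items():
        for method, detail in methods.items():
            if method.lower() not in HTTP_METHODS:
                continue
            prompt = f"Describe the {method.upper()} {path} endpoint and its parameters."
            response = json.dumps(
                {
                    "summary": detail.get("summary", ""),
                    "parameters": detail.get("parameters", []),
                    "responses": list(detail.get("responses", {}).keys()),
                },
                indent=2,
            )
            examples.append({"instruction": prompt, "output": response})
    return examples

# examples = swagger_to_examples("anz_api_spec.json")  # hypothetical file
```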
The fine-tuning objectives were to enable Llama to understand and infer potential interactions between APIs, to anticipate how objects and data might be passed between them even with minimal context, and to generate accurate outputs such as API test cases, test data, sequence diagrams, and comprehensive API knowledge.
The fine-tuning process involved data annotation, supervised fine-tuning, and prompt engineering, as well as iterative testing and refinement.
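A minimal sketch of the supervised fine-tuning step is shown below, using LoRA adapters via the Hugging Face peft library on Llama 3 8B. The dataset file, hyperparameters, and training setup are illustrative assumptions; ANZ’s actual configuration has not been published.

```python
# Illustrative LoRA supervised fine-tuning sketch; not ANZ's actual pipeline.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Meta-Llama-3-8B"  # gated model; requires access approval
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small trainable LoRA adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Annotated examples, e.g. {"text": "<instruction> ... <response> ..."} per line
# in a hypothetical JSONL file.
dataset = load_dataset("json", data_files="ensayo_sft_examples.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-ensayo-lora",
        num_train_epochs=1,
        per_device_train_batch_size=2,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Adapter-based tuning of this kind keeps the base weights frozen, which fits the iterative test-and-refine loop described above: each round of annotation and evaluation can produce a new lightweight adapter rather than a full retrained model.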
As a result, the fine-tuned Llama model demonstrated enhanced API understanding and significantly improved generative accuracy. It effectively handles complex workflows with limited context, allowing Ensayo AI to deliver precise and actionable outputs.
“The open source approach has had a positive impact on our organization by fostering innovation, transparency, and collaboration,” Zhang says. “By adopting an open source model like Llama, we’ve gained improved versioning control and insight into our AI solution.”
While Llama 3 8B meets its immediate fine-tuning needs, ANZ plans to adopt the larger 405B model in the coming months, a move that will require collaboration with security, risk, and policy teams.
Looking ahead, the bank aims to leverage multiple Llama models, including fine-tuning smaller models for specific subject matter expert tasks—a strategic approach allowing the ANZ team to optimize resources while delivering specialized capabilities.
“The belief that ‘bigger is better’ doesn’t always hold true in banking,” Zhang says. “As we move toward an ‘AI agent-driven’ future, smaller, fine-tuned models specializing in specific areas will play an important role. Llama’s adaptability makes it ideal for developing focused models with subject matter expertise, efficiently addressing critical needs.”