Large Language Model

How Altana employs Llama to elevate global value chain management

January 30, 2025
3 minute read

Altana uses AI to help businesses and governments manage their global supply chains, providing insight into extended supplier and distribution networks that span from raw material origins to the sale of finished products. Founded on the world’s largest body of supply chain data, Altana powers workflows across what were previously opaque global networks.

To supercharge its AI capabilities, the Brooklyn, NY-based company leverages Llama open source models on the Databricks Data Intelligence Platform. This accelerates product development and boosts overall efficiency. The company anticipates that a customized Llama model will be able to replace several internal models and better understand Altana’s use cases.

“With Llama on Databricks, the company can now deploy Gen AI systems into production 20 times faster,” says Saurabh Khanwalkar, VP of AI at Altana.

Currently, Altana uses Llama for sophisticated tariff code classification, mapping shipment data to more than 10,000 customs tariff categories based on product types and graph features from its global supply chain map. Additionally, the company is developing a fine-tuned chatbot, trained on its historical customer interactions, to help customers select the right tariff codes. These innovations reflect Altana’s approach of fine-tuning open source models to work faster and smarter, enabling users to make sense of transactions with ease.
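As a rough illustration (not Altana’s actual pipeline), a classification setup along these lines might frame a shipment description plus graph-derived features as a single prompt, then parse an HS-style code out of the model’s reply. The field names and helper functions below are hypothetical:

```python
import re


def build_tariff_prompt(description: str, graph_features: dict) -> str:
    """Combine a shipment description with supply chain graph features
    into a single classification prompt for a fine-tuned LLM."""
    feature_lines = "\n".join(f"- {k}: {v}" for k, v in graph_features.items())
    return (
        "Classify the following shipment into a Harmonized System (HS) "
        "tariff code. Respond with the code only.\n\n"
        f"Shipment description: {description}\n"
        f"Supply chain context:\n{feature_lines}"
    )


def parse_hs_code(model_output: str) -> str:
    """Extract the first HS-code-like token (4 digits, optionally
    followed by dotted 2-digit subheadings) from a model reply."""
    match = re.search(r"\d{4}(?:\.\d{2}){0,2}", model_output)
    return match.group(0) if match else ""


prompt = build_tariff_prompt(
    "Men's cotton knit T-shirts, 100% cotton",
    {"upstream_supplier_sector": "textiles", "origin_country": "VN"},
)
print(parse_hs_code("The best match is 6109.10.00"))  # → 6109.10.00
```

In practice the prompt would be sent to a served model endpoint; parsing the reply defensively, as above, guards against the model adding explanatory text around the code.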

When exploring large language model options, Altana chose Llama for its accuracy, lower cost, and seamless integration into Databricks. This setup allows Altana to deploy customized Llama models directly within a customer’s Databricks cloud environment (AWS or Azure), eliminating the need to rely on external APIs. The team says the benefits of embracing open source models are clear: greater flexibility, cost savings, and easy access to rich developer documentation and online support.

Unlocking Llama’s potential

Altana’s fine-tuning process used Databricks Mosaic AI training, with deployment managed through Databricks’ serving endpoints. The team used concise system and user prompts, fine-tuning on roughly a million input-output examples. Additionally, they conducted continued pre-training on domain-specific data, refining instructions across varied input-output formats and diverse use cases.
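To make this concrete, instruction fine-tuning data of this kind is commonly prepared as JSONL records that pair a prompt with the desired response. The records, codes, and file name below are illustrative assumptions, not Altana’s actual training data:

```python
import json

# Illustrative prompt/response pairs for tariff-classification fine-tuning.
examples = [
    {
        "prompt": "Classify this shipment into an HS tariff code: "
                  "stainless steel kitchen sinks.",
        "response": "7324.10",
    },
    {
        "prompt": "Which tariff code applies to lithium-ion battery packs "
                  "for electric vehicles?",
        "response": "8507.60 covers lithium-ion accumulators, which "
                    "includes EV battery packs.",
    },
]

# Write one JSON object per line, the usual format for fine-tuning jobs.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Mixing terse outputs (a bare code) with explanatory ones, as in the two records above, is one way to cover the "various input-output formats" the team describes.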

Early on, the team found it challenging to fine-tune Llama 3.1 8B for the types of tasks customers would be completing: responses sounded mechanical rather than using the natural language humans expect. The team determined that this was because the model had initially been fine-tuned only to produce a classification output, so it struggled to respond well to open-ended questions or chat. They addressed those limitations by incorporating multiple types of inputs and outputs, resulting in a fine-tuned model that is more accurate and less costly than alternatives.
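One way to picture this fix, as a hypothetical sketch rather than Altana’s actual data pipeline, is to render the same labeled shipment in both a terse classification format and a natural chat format, so the fine-tuned model learns both output styles:

```python
# A single labeled record, expanded into two training formats.
record = {"description": "ceramic dinner plates", "hs_code": "6912.00"}


def as_classification_example(rec: dict) -> dict:
    """Terse format: the response is the bare tariff code."""
    return {
        "prompt": f"HS code for: {rec['description']}",
        "response": rec["hs_code"],
    }


def as_chat_example(rec: dict) -> dict:
    """Conversational format: the same label wrapped in natural language."""
    return {
        "prompt": f"What tariff code should I use for {rec['description']}?",
        "response": (
            f"For {rec['description']}, {rec['hs_code']} is the most likely "
            "fit. Please verify against your customs authority's current "
            "schedule."
        ),
    }


print(as_classification_example(record)["response"])  # → 6912.00
```

Training on both renderings of the same labels preserves classification accuracy while teaching the model to answer open-ended questions naturally.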

Altana’s fine-tuned Llama models are now live in production with customers across a variety of industries, including retail, apparel, and automotive. As customers use them, the models continue to improve through the refinement and expansion of the knowledge embedded in the system. Altana says it is this open source innovation, coupled with its deep supply chain expertise, that enables it to push the boundaries of what’s possible with AI, which in turn furthers its mission of empowering businesses to conquer global supply chain challenges.
