LyRise is a pioneering platform dedicated to connecting companies with job seekers from the Middle East and North Africa who have expertise in artificial intelligence. What sets LyRise apart is that the company uses AI to power its talent-as-a-service (TaaS) platform.
In its pursuit of innovation in the TaaS field, LyRise identified two critical challenges: summarizing job descriptions and resumes, and building robust retrieval-augmented generation (RAG) pipelines to match the two. To tackle these obstacles, the company turned to Meta Llama 2, which it found delivered exceptional accuracy in condensing resumes and analyzing job descriptions. Leveraging Llama 2, LyRise has built robust matching pipelines that present relevant candidates to clients, reducing time-to-hire by as much as 50% while ensuring quality talent acquisition. LyRise has also accelerated its path to market, benefiting from the open source nature of Llama to run experiments and quickly advance its product from minimum viable product to beta.
The journey from experimentation to deployment
LyRise started by testing different open source models as potential replacements for proprietary language models. After evaluating various options, Llama 2 emerged as the most promising candidate, particularly for its integration capabilities. Initially, LyRise used Llama 2 in an exploratory way. However, its role quickly evolved.
“We played around with different models and found that Llama 2 was the best for our case,” says LyRise Technology & AI Lead Mohamed Rashad. “We started using Llama for experimentation, but it is currently utilized reliably as part of our alpha and beta development.”
The company’s implementation journey progressed from running Llama 2 locally through Ollama, which lets open source large language models run on local hardware, to Amazon Bedrock. Next, the team integrated its platform through Replicate.com, powering its RAG pipelines with advanced GenAI capabilities without having to manage the underlying infrastructure.
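As a rough illustration of that early local-experimentation stage, the sketch below prompts a locally served Llama 2 model through the Ollama Python client to summarize a resume. The prompt wording and resume text are illustrative assumptions, not LyRise's actual prompts.

```python
# Minimal sketch: summarizing a resume with a locally served Llama 2 via Ollama.
# Assumes the Ollama daemon is running and `ollama pull llama2` has been done;
# the prompt and resume text below are illustrative only.
import ollama

resume_text = "Senior machine learning engineer, six years of NLP experience, ..."

response = ollama.chat(
    model="llama2",
    messages=[
        {"role": "system", "content": "Summarize the following resume in three bullet points."},
        {"role": "user", "content": resume_text},
    ],
)
print(response["message"]["content"])
```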
Replicate provides an accessible platform to deploy and utilize Llama 2 through an API, enabling seamless integration into existing workflows. In their RAG pipeline, data retrieval involves indexing a database of CVs and job descriptions, formulating queries, and retrieving relevant candidates based on a complex knowledge graph. Llama 2 generates embeddings for both CVs and job descriptions, allowing for contextual understanding and improved similarity scoring.
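A simplified version of the embedding step might look like the sketch below, which calls an embedding model hosted on Replicate through its Python client. The model slug and input schema are placeholders, since the article does not name the exact endpoint LyRise calls; a real integration would substitute the chosen model and its expected input fields.

```python
# Minimal sketch: generating embeddings for CVs and job descriptions through
# Replicate's Python client. The model slug and input schema are placeholders;
# requires the REPLICATE_API_TOKEN environment variable to be set.
import replicate

EMBEDDING_MODEL = "owner/embedding-model:version"  # placeholder, replace with a real model

def embed(texts: list[str]) -> list[list[float]]:
    """Return one embedding vector per input text."""
    vectors = []
    for text in texts:
        # Input keys depend on the chosen model; "text" is a common convention.
        output = replicate.run(EMBEDDING_MODEL, input={"text": text})
        vectors.append(list(output))
    return vectors

cv_vectors = embed(["Data scientist with five years of NLP experience ..."])
jd_vectors = embed(["Hiring a machine learning engineer to build matching pipelines ..."])
```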
This process includes initial retrieval, embedding generation, similarity scoring, and ranking candidates by their relevance to the job description. By combining retrieval and generation capabilities, the integration produces more precise and relevant matches, leading to better hiring outcomes. This gradual adoption allowed the company to assess and refine Llama 2’s performance as LyRise built out talent-matching solutions tailored to its use case. The integration allowed it to harness Llama 2’s natural language understanding capabilities to accurately condense and analyze resumes and job postings.
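The similarity scoring and ranking step can be made concrete with plain cosine similarity, as in the generic sketch below; the toy vectors and candidate IDs are illustrative, not LyRise's production scoring logic.

```python
# Minimal sketch: rank candidate CV embeddings against a job-description
# embedding by cosine similarity. Generic illustration with toy vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(jd_vector, cv_vectors, candidate_ids, top_k=5):
    scores = [
        cosine_similarity(np.asarray(jd_vector), np.asarray(v)) for v in cv_vectors
    ]
    ranked = sorted(zip(candidate_ids, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# Toy example:
jd = [0.2, 0.8, 0.1]
cvs = [[0.1, 0.9, 0.0], [0.7, 0.1, 0.6]]
print(rank_candidates(jd, cvs, candidate_ids=["cand_a", "cand_b"], top_k=2))
```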
LangChain orchestrates the RAG process by handling the indexing and retrieval of CVs and job descriptions, querying the database, and passing the relevant data to Llama for embedding generation and similarity scoring. It also integrates conversation design capabilities, guardrails against abuse, and model monitoring. This ensures efficient data flow and task execution, from initial retrieval to the final ranking and selection of candidate CVs. With LangChain, the system can dynamically manage and integrate different data sources and models, enabling Llama 2 to provide accurate, contextually aware matches between CVs and job descriptions and ultimately improving the efficiency and effectiveness of the hiring process.
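A heavily simplified view of how LangChain could tie the indexing, retrieval, and embedding steps together is sketched below, using an in-memory FAISS vector store and a stand-in embeddings class. The class name, example texts, and embedding logic are assumptions for illustration; in practice the embeddings would come from the Llama-backed endpoint described above.

```python
# Minimal sketch: index CV texts in a FAISS vector store via LangChain and
# retrieve the closest matches for a job description. The embeddings class is
# a stand-in; a real setup would call the Llama-backed embedding endpoint.
from langchain_core.embeddings import Embeddings
from langchain_community.vectorstores import FAISS

class PlaceholderLlamaEmbeddings(Embeddings):
    """Stand-in for a Llama-based embedding backend (e.g. served via Replicate)."""

    def embed_documents(self, texts):
        return [self._embed(t) for t in texts]

    def embed_query(self, text):
        return self._embed(text)

    def _embed(self, text):
        # Toy deterministic vector; replace with a real embedding call.
        return [float(len(text) % 7), float(len(text) % 11), float(len(text) % 13)]

cv_texts = [
    "Machine learning engineer, five years of NLP and RAG experience.",
    "Frontend developer focused on React and TypeScript.",
]
store = FAISS.from_texts(
    cv_texts,
    embedding=PlaceholderLlamaEmbeddings(),
    metadatas=[{"id": "cand_a"}, {"id": "cand_b"}],
)

job_description = "Looking for an NLP engineer to build retrieval-augmented pipelines."
for doc, score in store.similarity_search_with_score(job_description, k=2):
    print(doc.metadata["id"], score)
```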
Looking ahead, LyRise plans to fine-tune its use of Llama. The company is currently testing Llama 3 to further enhance the performance of its RAG pipeline. Llama 3’s new features and capabilities bring significant enhancements that the LyRise team is excited about, particularly for its product matching CVs with job descriptions. One of the standout features of Llama 3 is its improved contextual understanding and generation, which allows for more precise and nuanced embeddings and can significantly improve the accuracy of matching candidates’ CVs with job requirements. Additionally, Llama 3 offers better scalability and efficiency, enabling LyRise to handle larger datasets and more complex queries without compromising performance.
Another exciting feature is the enhanced ability to understand and generate multilingual content. This capability is particularly beneficial for the product, as it enables LyRise to cater to a more diverse pool of candidates and job listings, improving inclusivity and reach. Furthermore, Llama 3’s improved fine-tuning capabilities mean LyRise can better customize the model to their specific needs, ensuring it aligns more closely with the unique matching criteria and business requirements.
Integrating Llama 2 into LyRise’s workflow has yielded tangible benefits, particularly by enabling LyRise to build a talent-matching solution quickly and effectively. By leveraging Llama 2’s exceptional performance in text summarization and understanding job descriptions, LyRise has accelerated its product development.
Moreover, the company’s adoption of an open source model has translated into cost savings compared to relying solely on proprietary language models. This cost-effective approach not only enhances its scalability but also allows LyRise to maintain data privacy and security for its clients. By keeping sensitive information within its controlled infrastructure, the company can assure clients of the utmost confidentiality throughout the talent acquisition process.
“The main impact of using open source is the reduced cost to move forward and build our MVP (minimum viable product) and alpha,” says Rashad. “The existence of open source reduced this cost to virtually zero.”
Democratizing access through open source
LyRise’s success with Llama 2 is a testament to the potential of open source software and the collective efforts of the open source community. Like many other companies deploying open source AI models, LyRise has significantly reduced barriers to entry and accelerated its launch timeline. Rather than being constrained by the costs and limitations of proprietary models, it was able to experiment, learn, and generate business value rapidly.
This democratization of access to cutting-edge AI technologies is particularly powerful for startups and smaller organizations. Open source initiatives help level the playing field, enabling companies of all sizes to start using state-of-the-art models and contribute to the collective advancement of the field.
LyRise’s founder emphasizes the importance of knowledge sharing and collaborative development, which are core principles of the open source movement. Although its work is currently in private alpha and beta stages, the company is committed to contributing to the open source community once its solutions are publicly available.
As Rashad says, “It allows us to learn fast, build fast, and generate business value in no time, while also pushing the industry forward.” He adds, “For our engineers, we learn a lot from exploring open-source code.”