Large Language Model
RadOnc-GPT: Leveraging Meta Llama for a pioneering radiation oncology model
May 8, 2024

In medicine, few fields require more precision or more data than radiation oncology. Patients’ lives depend on getting the right treatment in this specialized domain.

Mayo Clinic’s pioneering RadOnc-GPT is a large language model (LLM) leveraging Meta Llama 2 that has the potential to significantly improve the speed, accuracy, and quality of radiation therapy decision-making, benefiting both medical practitioners and the patients they serve. It was fine-tuned on a large dataset of radiation oncology patient records from the Mayo Clinic in Arizona. No patient data was shared outside the secure network, since the model was fine-tuned locally using Llama 2 running on a local GPU server. All studies are approved by an institutional review board.
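The post does not describe the team’s training setup in detail. As a rough illustration only, a locally run, parameter-efficient fine-tuning workflow on a Llama 2 checkpoint might look like the sketch below using the Hugging Face transformers and peft libraries; the checkpoint name and LoRA hyperparameters are assumptions, not Mayo Clinic’s actual configuration.

```python
# Minimal sketch of local, parameter-efficient fine-tuning of a Llama 2 checkpoint.
# The checkpoint name and LoRA hyperparameters are illustrative assumptions; the
# post does not describe the team's actual training configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# LoRA keeps the base weights frozen and trains small adapter matrices, which
# makes fine-tuning feasible on a single local GPU server and keeps all patient
# data inside the secure network.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```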

“Properly fine-tuned, open source LLMs have immense potential to revolutionize radiation oncology and other highly specialized healthcare domains,” says Dr. Wei Liu, Professor of Radiation Oncology and Research Director of the Division of Medical Physics at the Mayo Clinic in Arizona.

The immediate clinical use case for RadOnc-GPT is patient follow-up. Liu’s team plans to develop a chatbot to answer routine questions patients raise after radiotherapy, reducing the workload of nurses and clinicians so they can focus on higher-priority work.

Future developments may include expansion into additional clinical tasks, such as constructing models for predicting patient outcomes in radiation oncology. Furthermore, Liu says the team is considering leveraging the recently released and more advanced Llama 3 model to enhance RadOnc-GPT’s performance.

Driving efficiencies in handling vast amounts of unstructured data

AI-driven tools can automate routine tasks, analyze complex data sets quickly, and identify patterns that might escape human notice, thus freeing up valuable time for healthcare providers. This acceleration and efficiency empower clinicians to concentrate on the highest priority work, such as direct patient care and decision-making in complex cases.

Liu says the Mayo Clinic team had collaborated for several years with a University of Georgia group on healthcare natural language processing, actively following state-of-the-art developments in the industry.

They chose Llama 2 as the foundation model for RadOnc-GPT, which employs instruction tuning on three key tasks: generating radiotherapy treatment regimens, determining optimal radiation modalities, and providing diagnostic descriptions and International Statistical Classification of Diseases (ICD-10) codes based on patient diagnostic details.
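The post does not publish RadOnc-GPT’s prompt templates, but the records below are hypothetical examples of what instruction-tuning data covering these three tasks could look like; the field names and clinical content are placeholders, not real patient data or the team’s actual format.

```python
# Hypothetical instruction-tuning records for the three task types described above.
# Field names and clinical content are illustrative placeholders, not real patient
# data or the actual RadOnc-GPT prompt format.
examples = [
    {
        "instruction": "Generate a radiotherapy treatment regimen for this patient.",
        "input": "De-identified diagnostic summary goes here.",
        "output": "Proposed treatment regimen goes here.",
    },
    {
        "instruction": "Determine the optimal radiation modality for this patient.",
        "input": "De-identified diagnostic summary goes here.",
        "output": "Recommended modality goes here.",
    },
    {
        "instruction": "Provide a diagnostic description and ICD-10 code for this patient.",
        "input": "De-identified diagnostic details go here.",
        "output": "Diagnostic description and ICD-10 code go here.",
    },
]
```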

With Llama 2, RadOnc-GPT improves specificity and clinical relevance compared to general LLMs.

“This process is traditionally time-consuming, reliant on manual analysis of vast amounts of unstructured clinical data, and susceptible to variations in human interpretation,” Liu says. “Efficient tools for language-involved processing can significantly enhance each phase of radiation therapy and potentially improve treatment outcomes.”

The team performed extensive manual processing to overcome the challenges of curating and preparing the radiation oncology dataset, which involved extracting, separating, and labeling relevant information from patient records.
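The curation pipeline itself is not described in the post. As a rough sketch of the kind of processing involved, separating a de-identified clinical note into labeled sections might look like the following; the section headers and helper function are assumptions for illustration only.

```python
import re

# Rough sketch of separating a de-identified clinical note into labeled sections.
# The section headers below are assumed for illustration; real radiation oncology
# records vary widely and required extensive manual review, as noted above.
SECTION_HEADERS = ["DIAGNOSIS", "TREATMENT PLAN", "RADIATION MODALITY"]

def split_into_sections(note_text: str) -> dict:
    """Return a mapping of section label -> extracted text for known headers."""
    pattern = r"(" + "|".join(SECTION_HEADERS) + r"):"
    parts = re.split(pattern, note_text)
    # re.split with a capturing group yields [preamble, header1, body1, header2, body2, ...]
    return {header: body.strip() for header, body in zip(parts[1::2], parts[2::2])}

note = "DIAGNOSIS: example text. TREATMENT PLAN: example text. RADIATION MODALITY: example text."
print(split_into_sections(note))
```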

The impact of open source

Open-sourcing advanced AI models allows Mayo Clinic to use cutting-edge models directly in its research and accelerates the development process, Liu says. The impact on the clinical side is also amplified, improving patient care. And the initiative’s aim goes even beyond enhancing the precision of therapeutic interventions.

“It's also about fostering an ecosystem where data security is paramount,” Liu says. “Ensuring that patient confidentiality is never compromised is especially vital in oncology, where patient data is highly sensitive.”

For smaller companies and institutions, open-source AI systems are pivotal in democratizing innovation, allowing for the collective advancement of medical science. Open-source approaches could make LLMs accessible to smaller organizations with limited resources to develop their own tailored models, Liu says.

