Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People (short summary)

Apollo is a family of lightweight, state-of-the-art multilingual medical LLMs.


TL;DR: Apollo is a family of multilingual medical LLMs built to improve healthcare access in regions with limited resources and for non-English speakers. These small but capable models outperform open-source alternatives of similar size and are released openly for further development.

There is a strong push to integrate medical knowledge with AI for better patient care. This paper focuses on making these tools multilingual to serve diverse populations and those in regions with limited medical resources.

--> For video tutorials on top LLM papers, check the Kalyan KS YouTube channel.

--> For top LLM papers of the week, check the newsletter.

Why Multilingual Medical AI?

  • Better Training for Doctors: Non-English-speaking doctors often learn medicine in both their native language and English, which calls for multilingual AI tools to support their training.

  • Improved Local Care: Multilingual medical AI tools can improve communication and acceptance of healthcare in communities where English isn't the primary language.

  • Knowledge Exchange: Local medical knowledge and practices can benefit the broader medical community, fostering faster advancements through exchange.

Building Apollo LLMs

  • The Corpus: The authors build ApolloCorpora, a medical corpus covering the six most widely spoken languages (English, Chinese, Hindi, Spanish, French, and Arabic). Together, these languages reach regions that often have limited healthcare resources.

  • The Models: The multilingual medical language models trained on ApolloCorpora are called 'Apollo'. They aim to bring the benefits of cutting-edge medical AI to 6 billion people.

  • Lite but Powerful: Apollo models come in a range of small sizes (0.5B, 1.8B, 2B, 6B, and 7B parameters), so they can be deployed flexibly, including to improve larger AI models without exposing private medical data.
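As a quick illustration, here is a minimal sketch of querying a small Apollo checkpoint with the Hugging Face transformers library. The repo id is an assumption based on the paper's open release; check the official Apollo repository for the exact model names.

```python
# Minimal sketch: querying a small Apollo checkpoint via transformers.
# NOTE: the repo id below is an assumption; confirm the exact name in
# the official Apollo release before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/Apollo-2B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What is the first-line treatment for type 2 diabetes?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```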

XMedBench Benchmark

  • The Benchmark: XMedBench was created to assess multilingual medical AI model performance. It draws on local medical exams as well as translations of established tests.

  • Results: While closed-source models like GPT-4 still have an edge, Apollo models outperform similarly-sized open-source alternatives, showing significant progress.
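To make the evaluation concrete, here is an illustrative sketch of multiple-choice accuracy scoring, the kind of metric a benchmark like XMedBench reports. The question format and scoring rule here are simplifying assumptions, not XMedBench's exact protocol.

```python
def is_correct(prediction: str, gold_option: str) -> bool:
    """Mark a prediction correct if it starts with the gold option letter."""
    return prediction.strip().upper().startswith(gold_option.strip().upper())

def accuracy(predict_fn, items):
    """Fraction of multiple-choice questions answered correctly."""
    hits = sum(is_correct(predict_fn(q["question"]), q["gold"]) for q in items)
    return hits / len(items)

# Toy example item; real XMedBench questions span six languages.
items = [{
    "question": ("Which vitamin deficiency causes scurvy?\n"
                 "A. Vitamin A  B. Vitamin C  C. Vitamin D"),
    "gold": "B",
}]
print(accuracy(lambda q: "B", items))  # trivial predictor -> 1.0
```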

Key Contributions

  1. ApolloCorpora: A high-quality multilingual medical dataset specifically focused on languages from diverse regions of the world.

  2. Apollo Models: State-of-the-art multilingual medical LLMs in relatively small sizes.

  3. Proxy Tuning: An approach that uses Apollo to improve larger models without exposing sensitive medical data (a decoding-time sketch follows this list).

  4. XMedBench: A multilingual benchmark for evaluating medical LLM performance.
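The proxy-tuning idea can be sketched as a decoding-time logit combination, following the general recipe the paper builds on: steer a large base model with the logit difference between a tuned expert (e.g. an Apollo model) and its untuned counterpart. The tensors below are placeholders; in practice all three models must share a vocabulary and be run on the same prefix at each decoding step.

```python
import torch

def proxy_tuned_logits(large_logits, expert_logits, base_logits, alpha=1.0):
    """Shift the large model's next-token logits by the expert-vs-base delta."""
    return large_logits + alpha * (expert_logits - base_logits)

# Placeholder next-token logits over a shared vocabulary; in a real run,
# these come from three forward passes on the same decoding prefix.
vocab_size = 32000
large = torch.randn(vocab_size)   # large, untuned base model
expert = torch.randn(vocab_size)  # small medical expert (e.g. Apollo)
base = torch.randn(vocab_size)    # the expert's untuned base

next_token = torch.argmax(proxy_tuned_logits(large, expert, base))
```

Because the large model is only queried for logits and never fine-tuned, private medical training data never has to leave the small expert's environment.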

--> For detailed information regarding Apollo LLMs, refer to the Apollo paper.