Lingshu: A Generalist Foundation Model for Unified
Multimodal Medical Understanding and Reasoning


DAMO Academy, Alibaba Group
Equal Contribution, Project Lead
Overview Image

Welcome to the Lingshu project.

  • We introduce Lingshu, our SOTA multimodal large language models for the medical domain.
  • We release a unified evaluation framework, MedEvalKit, that consolidates major benchmarks for multimodal and textual medical tasks.

Highlights:

  • Lingshu supports more than 12 medical imaging modalities, including X-Ray, CT Scan, MRI, Microscopy, Ultrasound, Histopathology, Dermoscopy, Fundus, OCT, Digital Photography, Endoscopy, and PET.
  • Lingshu models achieve SOTA performance on most medical multimodal/textual QA and report generation tasks at both the 7B and 32B scales, and Lingshu-32B outperforms GPT-4.1 and Claude Sonnet 4 on most multimodal QA and report generation tasks.
  • MedEvalKit enables standardized, fair, and easy-to-use model assessment in the medical domain, covering multimodal VQA, textual QA, and report generation (see the illustrative scoring sketch after this list).
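To make the last point concrete, here is a minimal sketch of the kind of standardized scoring a multiple-choice medical QA benchmark requires. It is illustrative only and does not reproduce MedEvalKit's actual API; the `MCQExample`, `extract_choice`, and `accuracy` names and the data format are assumptions.

```python
# Illustrative only: a standardized multiple-choice medical QA scorer in the
# spirit of MedEvalKit. Names and data format are assumptions, not its real API.
from dataclasses import dataclass

@dataclass
class MCQExample:
    question: str
    options: dict[str, str]  # e.g. {"A": "Pneumonia", "B": "Pleural effusion"}
    answer: str              # gold option letter, e.g. "B"

def extract_choice(model_output: str, options: dict[str, str]) -> str | None:
    """Map a free-form model response to an option letter."""
    text = model_output.strip().upper()
    for letter in options:                       # response starts with the letter
        if text.startswith(letter):
            return letter
    for letter, option_text in options.items():  # or repeats the option text
        if option_text.lower() in model_output.lower():
            return letter
    return None

def accuracy(examples: list[MCQExample], predictions: list[str]) -> float:
    """Exact-match accuracy over extracted option letters."""
    correct = sum(
        extract_choice(pred, ex.options) == ex.answer
        for ex, pred in zip(examples, predictions)
    )
    return correct / len(examples)
```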

Quick Links:

  • Models: Model weights are available in two sizes, Lingshu-7B and Lingshu-32B (a minimal loading sketch follows these links).
  • Evaluation Toolkit: We release our evaluation framework: MedEvalKit
  • Technical Report: Our technical report for the whole project is available at Lingshu Report.
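For a quick start with the released weights, the sketch below shows medical VQA inference with Hugging Face Transformers, assuming the checkpoints expose a Qwen2.5-VL-style chat interface; the repository id, image path, and question are placeholders, and the official model cards are the authoritative usage reference.

```python
# Minimal medical VQA inference sketch (assumes a Qwen2.5-VL-style checkpoint;
# the repo id, image path, and question below are placeholders).
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "lingshu-medical-mllm/Lingshu-7B"  # assumed Hugging Face repo id
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One medical image plus a free-text question.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "path/to/chest_xray.png"},
        {"type": "text", "text": "Is there evidence of pleural effusion?"},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```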

Performance of Lingshu

Visual Question Answering for Various Medical Modalities


Medical Report Generation


Public Health


Medical Knowledge Understanding


Medical Visual Question Answering Results


Medical Textual Question Answering Results


Medical Report Generation Results

BibTeX

If you find our project useful, please consider starring our repo and citing our work as follows.
`*` denotes equal contribution; `^` denotes corresponding authors.

@misc{lasateam2025lingshu,
      title={Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning}, 
      author={LASA Team and Weiwen Xu* and Hou Pong Chan* and Long Li* and Mahani Aljunied and Ruifeng Yuan and Jianyu Wang and Chenghao Xiao and Guizhen Chen and Chaoqun Liu and Zhaodonghui Li and Yu Sun and Junao Shen and Chaojun Wang and Jie Tan and Deli Zhao and Tingyang Xu and Hao Zhang^ and Yu Rong^},
      year={2025},
      eprint={2506.07044},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.07044}, 
}