Importance In-context learning, a prompt-based learning mechanism that enables multimodal foundation models to adapt to new tasks, can eliminate the need for retraining or large annotated datasets. We use diabetic retinopathy detection as an exemplar to probe in-context learning for ophthalmology.
Objective To evaluate whether in-context learning using a multimodal foundation model (Google Gemini 1.5 Pro) can match the performance of a domain-specific model (RETFound) fine-tuned for diabetic retinopathy detection from color fundus photographs.
Design/Setting/Participants This cross-sectional study compared two approaches for adapting foundation models to diabetic retinopathy detection using a public dataset of 516 color fundus photographs. Images were dichotomized by the presence or absence of any signs of diabetic retinopathy. RETFound was fine-tuned for this binary classification task, while Gemini 1.5 Pro was evaluated under zero-shot and few-shot prompting, the latter drawing a varying number of example images by random or k-nearest-neighbors-based sampling. For all experiments, data were partitioned into training, validation, and test sets in a stratified manner, and the process was repeated for 10-fold cross-validation.
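The k-nearest-neighbors-based sampling step described above can be sketched as follows: for each test image, the k labeled training images closest in some embedding space are selected as few-shot examples for the prompt. This is an illustrative sketch only; the embedding model, distance metric, and example count are assumptions, not the study's exact settings.

```python
import numpy as np

def knn_fewshot_examples(test_emb, train_embs, train_labels, k=4):
    """Select the k training images nearest to the test image in
    embedding space; their (image, label) pairs would then be placed
    in the prompt as few-shot examples.

    Note: Euclidean distance and k=4 are illustrative assumptions.
    """
    train_embs = np.asarray(train_embs, dtype=float)
    test_emb = np.asarray(test_emb, dtype=float)
    # Euclidean distance from the test embedding to every training embedding
    dists = np.linalg.norm(train_embs - test_emb, axis=1)
    # Indices of the k closest training images, nearest first
    idx = np.argsort(dists)[:k].tolist()
    return idx, [train_labels[i] for i in idx]
```

In a random-sampling condition, the same interface would simply draw k training indices uniformly instead of by distance.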
Main Outcome(s) and Measure(s) Performance was assessed via accuracy, F1 score, and expected calibration error of predictive probabilities. Statistical significance was evaluated using Wilcoxon tests.
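The three outcome measures can be sketched for the binary case as follows. The expected calibration error here uses equal-width confidence bins, with the bin count (10) an illustrative assumption rather than the study's stated setting.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the reference labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def expected_calibration_error(y_true, probs, n_bins=10):
    """ECE: weighted mean gap between confidence and accuracy per bin.

    probs are predicted probabilities of the positive class;
    n_bins=10 equal-width bins is an assumed binning scheme.
    """
    probs = np.asarray(probs, dtype=float)
    y_true = np.asarray(y_true)
    pred = (probs >= 0.5).astype(int)
    conf = np.where(pred == 1, probs, 1 - probs)   # confidence in the predicted class
    correct = (pred == y_true).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # bin weight x |mean accuracy - mean confidence| in that bin
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)
```

A lower ECE indicates that predicted probabilities track observed accuracy more closely, which is the axis on which the two models differed.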
Results The best in-context learning performance with Gemini 1.5 Pro yielded an average accuracy of 0.841 (95% CI: 0.803–0.879), F1 score of 0.876 (95% CI: 0.844–0.909), and calibration error of 0.129 (95% CI: 0.107–0.152). RETFound achieved an average accuracy of 0.849 (95% CI: 0.813–0.885), F1 score of 0.883 (95% CI: 0.852–0.915), and calibration error of 0.081 (95% CI: 0.066–0.097). While accuracy and F1 scores were comparable (p>0.3), RETFound’s calibration was superior (p=0.004).
Conclusions and Relevance Gemini 1.5 Pro with in-context learning demonstrated performance comparable to RETFound for binary diabetic retinopathy detection, illustrating how future medical artificial intelligence systems may build upon such frontier models rather than being bespoke solutions.
Question Can in-context learning using a general-purpose foundation model (Gemini 1.5 Pro) achieve performance comparable to a domain-specific model (RETFound) for binary diabetic retinopathy detection from color fundus photographs?
Findings In this cross-sectional study, Gemini 1.5 Pro demonstrated accuracy and F1 scores comparable to the fine-tuned RETFound model. While classification performance was similar, RETFound showed better calibration.
Meaning In-context learning with general-purpose foundation models like Gemini 1.5 Pro offers a promising, accessible approach for diabetic retinopathy detection, potentially enabling broader clinical adoption of advanced AI tools without the need for retraining or large labeled datasets.
Competing Interest Statement The authors have declared no competing interest.
Funding Statement For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. We also acknowledge support through UKRI EPSRC (Artificial intelligence innovation to accelerate health research, EP/Y017803/1 [MSA]), National Institute for Health Research (NIHR) - Moorfields Eye Charity (MEC) Doctoral Fellowship (NIHR303691 [AYO]), UCL UKRI Centre for Doctoral Training in AI-enabled healthcare systems Studentship (EP/S021612/1 [ER]), EURETINA (Retinal Medicine Clinical Research Grant [DAM and PAK]), UK Research & Innovation Future Leaders Fellowship (MR/T019050/1 [PAK]) and The Rubin Foundation Charitable Trust (PAK).
Author Declarations I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
Yes
The details of the IRB/oversight body that provided approval or exemption for the research described are given below:
We used a well-known publicly available dataset, the Indian Diabetic Retinopathy Image Dataset (IDRiD). The dataset is available at https://ieee-dataport.org/open-access/indian-diabetic-retinopathy-image-dataset-idrid
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
Yes
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Yes
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
Yes