Explainable multimodal foundational models for retinal disease stratification: a robustness study across 15+ heterogeneous datasets

Date
2026-02-25
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
The automated stratification of retinal diseases remains a significant challenge due to data heterogeneity and the closed-box nature of deep learning models. Although foundational models have demonstrated remarkable success in general computer vision, their clinical reliability and interpretability in multimodal ophthalmology remain insufficiently explored. In this work, we introduce an Explainable Multimodal Foundational AI framework trained on a large-scale integrated corpus of 760,243 retinal images collected from over 15 heterogeneous repositories, encompassing both fundus photography and optical coherence tomography (OCT). We systematically evaluate self-supervised learning (SSL) paradigms (DINO and iBOT) across convolutional (ResNet) and Transformer-based (Vision Transformer, ViT) architectures. Our results show that ResNet-DINO achieves state-of-the-art performance, reaching 93.53% accuracy and a 0.935 F1-score in 6-class multimodal retinal disease classification, while exhibiting superior robustness under data-limited conditions, attributed to its inductive bias. Notably, we observe emergent clinical localization capabilities in Vision Transformer models (ViT-DINOv2 and ViT-iBOT). Using frozen pre-trained weights, and without exposure to expert-labeled data or ground-truth labels, these models autonomously highlight clinically relevant biomarkers, including subretinal fluid and drusen, demonstrating intrinsic pathological awareness. By bridging the semantic gap between unsupervised representation learning and targeted clinical diagnosis, this study establishes a benchmark for robust, explainable, and label-efficient AI in ophthalmology. Our findings indicate that large-scale foundational pre-training not only enhances diagnostic accuracy but also induces meaningful visual priors aligned with established clinical biomarkers, supporting the deployment of trustworthy AI systems in real-world clinical decision support.
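The emergent localization described in the abstract is typically inspected by visualizing a frozen ViT's self-attention; one common recipe for this is attention rollout (Abnar & Zuidema, 2020), which composes per-layer attention matrices while accounting for residual connections. The sketch below illustrates the computation on synthetic attention matrices; it is not the authors' pipeline, and the toy token counts and layer depth are illustrative assumptions only:

```python
import numpy as np

def attention_rollout(attentions):
    """Combine per-layer attention maps by recursive matrix
    multiplication, mixing in the identity to model the
    residual (skip) connection at each Transformer layer."""
    n = attentions[0].shape[-1]
    rollout = np.eye(n)
    for att in attentions:
        att = 0.5 * att + 0.5 * np.eye(n)            # residual connection
        att = att / att.sum(axis=-1, keepdims=True)  # keep rows stochastic
        rollout = att @ rollout
    return rollout

# Toy example: 3 layers, 5 tokens (1 CLS token + 4 image patches).
rng = np.random.default_rng(0)
layers = []
for _ in range(3):
    a = rng.random((5, 5))
    a = a / a.sum(axis=-1, keepdims=True)  # row-stochastic attention
    layers.append(a)

r = attention_rollout(layers)
# The CLS-token row over patch tokens is the saliency map that,
# reshaped to the patch grid, highlights image regions.
cls_map = r[0, 1:]
```

In a real setting the per-layer matrices would come from a frozen DINO/iBOT ViT (e.g. via forward hooks), with `cls_map` reshaped to the patch grid and upsampled over the fundus or OCT image.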
Keywords
Explainable AI (XAI)
Foundation models
Large-scale ophthalmic benchmark
Multimodal fusion
Retinal pathology
Self-supervised learning
Citation
Osa-Sanchez, A., El-Baz, A., Oleagordia-Ruiz, I., & Garcia-Zapirain, B. (2026). Explainable multimodal foundational models for retinal disease stratification: a robustness study across 15+ heterogeneous datasets. IEEE Access, 14, 31567-31579. https://doi.org/10.1109/ACCESS.2026.3668034