Authors: Osa Sánchez, Ainhoa; El-Baz, Ayman; Oleagordia Ruiz, Ibon; García-Zapirain, Begoña
Date available: 2026-04-13
Date issued: 2026-02-25
Citation: Osa-Sanchez, A., El-Baz, A., Oleagordia-Ruiz, I., & Garcia-Zapirain, B. (2026). Explainable multimodal foundational models for retinal disease stratification: a robustness study across 15+ heterogeneous datasets. IEEE Access, 14, 31567-31579. https://doi.org/10.1109/ACCESS.2026.3668034
DOI: 10.1109/ACCESS.2026.3668034
Handle: https://hdl.handle.net/20.500.14454/5627

Abstract: The automated stratification of retinal diseases remains a significant challenge due to data heterogeneity and the closed-box nature of deep learning models. Although foundational models have demonstrated remarkable success in general computer vision, their clinical reliability and interpretability in multimodal ophthalmology remain insufficiently explored. In this work, we introduce an Explainable Multimodal Foundational AI framework trained on a large-scale integrated corpus of 760,243 retinal images collected from over 15 heterogeneous repositories, encompassing both fundus photography and optical coherence tomography (OCT). We systematically evaluate the self-supervised learning (SSL) paradigms DINO and iBOT across convolutional (ResNet) and Transformer-based (Vision Transformer, ViT) architectures. Our results show that ResNet-DINO achieves state-of-the-art performance, reaching 93.53% accuracy and a 0.935 F1-score in 6-class multimodal retinal disease classification, while exhibiting superior robustness under data-limited conditions, attributed to its inductive bias. Notably, we observe emergent clinical localization capabilities in the Vision Transformer models (ViT-DINOv2 and ViT-iBOT). Using frozen pre-trained weights, and without exposure to expert-labeled data or ground-truth labels, these models autonomously highlight clinically relevant biomarkers, including subretinal fluid and drusen, demonstrating intrinsic pathological awareness.
By bridging the semantic gap between unsupervised representation learning and targeted clinical diagnosis, this study establishes a benchmark for robust, explainable, and label-efficient AI in ophthalmology. Our findings indicate that large-scale foundational pre-training not only enhances diagnostic accuracy but also induces meaningful visual priors aligned with established clinical biomarkers, supporting the deployment of trustworthy AI systems in real-world clinical decision support.

Language: eng
Rights: © 2026 The Authors
Keywords: Explainable AI (XAI); Foundation models; Large-scale ophthalmic benchmark; Multimodal fusion; Retinal pathology; Self-supervised learning
Title: Explainable multimodal foundational models for retinal disease stratification: a robustness study across 15+ heterogeneous datasets
Type: journal article
ISSN: 2169-3536