Browsing by Author "Baccouche, Asma"
Showing 1 - 3 of 3
Item
Connected-UNets: a deep learning architecture for breast mass segmentation (Nature Research, 2021-12)
Baccouche, Asma; García-Zapirain, Begoña; Castillo Olea, Cristian; Elmaghraby, Adel Said

Breast cancer analysis requires radiologists to inspect mammograms to detect suspicious breast lesions and identify mass tumors. Artificial intelligence techniques offer automatic breast mass segmentation systems to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variants are among the state-of-the-art models for medical image segmentation and have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) into the two standard UNets to emphasize the contextual information within the encoder-decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with additional synthetic data generated by the cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images.
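The segmentation quality in this work is reported with Dice and Intersection over Union (IoU) scores. As a minimal sketch of how those two metrics are computed on binary masks (the toy masks below are illustrative, not from the paper's datasets):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union between two binary masks: |A∩B| / |A∪B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

# Toy 4x4 masks: the prediction recovers 2 of the 3 ground-truth pixels.
gt   = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])

print(round(dice_score(pred, gt), 3))  # 2*2 / (2+3) = 0.8
print(round(iou_score(pred, gt), 3))   # 2 / 3 ≈ 0.667
```

Note that Dice is always at least as large as IoU on the same pair of masks, which is why the reported Dice scores (e.g. 89.52% on CBIS-DDSM) exceed the corresponding IoU scores (80.02%).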
Qualitative and quantitative results show that the proposed architecture achieves better automatic mass segmentation, with Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27% on CBIS-DDSM, INbreast, and the private dataset, respectively.

Item
Early detection and classification of abnormality in prior mammograms using image-to-image translation and YOLO techniques (Elsevier Ireland Ltd, 2022-06)
Baccouche, Asma; García-Zapirain, Begoña; Zheng, Yufeng; Elmaghraby, Adel Said

Background and Objective: Computer-aided-detection (CAD) systems have been developed to assist radiologists in finding suspicious lesions in mammograms. Deep learning technology has recently succeeded in increasing the chance of recognizing abnormalities at an early stage, in order to avoid unnecessary biopsies and decrease the mortality rate. In this study, we investigated the effectiveness of an end-to-end fusion model based on the You-Only-Look-Once (YOLO) architecture to simultaneously detect and classify suspicious breast lesions on digital mammograms. Four categories of cases were included, Mass, Calcification, Architectural Distortion, and Normal, drawn from a private digital mammographic database of 413 cases. For all cases, Prior mammograms (typically scanned one year earlier) were all reported as Normal, while Current mammograms were diagnosed as cancerous (confirmed by biopsies) or healthy. Methods: We propose to apply the YOLO-based fusion model to the Current mammograms for breast lesion detection and classification, and then to apply the same model retrospectively to synthetic mammograms for early cancer prediction, where the synthetic mammograms were generated from the Prior mammograms using the image-to-image translation models CycleGAN and Pix2Pix.
Results: Evaluation results showed that our methodology could detect and classify breast lesions on Current mammograms, with highest rates of 93% ± 0.118 for Mass lesions, 88% ± 0.09 for Calcification lesions, and 95% ± 0.06 for Architectural Distortion lesions. On Prior mammograms, the highest rates were 36% ± 0.01 for Mass lesions, 14% ± 0.01 for Calcification lesions, and 50% ± 0.02 for Architectural Distortion lesions. Normal mammograms were classified with accuracy rates of 92% ± 0.09 and 90% ± 0.06 on Current and Prior exams, respectively. Conclusions: Our proposed framework was first developed to help detect and identify suspicious breast lesions in X-ray mammograms at their Current screening. The work also aims to bridge the temporal changes between pairs of Prior and follow-up screenings by predicting the location and type of abnormalities early, in the Prior mammogram screening. The paper presents a CAD method to assist doctors and experts in identifying the risk of breast cancer. Overall, the proposed CAD method incorporates advances in image processing, deep learning, and image-to-image translation for a biomedical application.

Item
An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks (Springer Nature, 2022-07-18)
Baccouche, Asma; García-Zapirain, Begoña; Elmaghraby, Adel Said

A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification that are integrated sequentially into one framework to assist the radiologists with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (i.e., ResNet50V2, ResNet101V2, and ResNet152V2).
The work addresses the task of classifying the detected and segmented breast masses as malignant or benign, diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6, and classifying the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and on an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) pathology classification, with an accuracy of 95.13%, 99.20%, and 95.88%; (2) BI-RADS category classification, with an accuracy of 85.38%, 99%, and 96.08%, respectively, on CBIS-DDSM, INbreast, and the private dataset; and (3) shape classification, with an accuracy of 90.02% on the CBIS-DDSM dataset. Our results demonstrate that the proposed integrated framework can benefit from all automated stages to outperform the latest deep learning methodologies.
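The stacking pattern described in this last abstract can be illustrated in a few lines: each base network emits a malignancy probability, the probabilities are stacked column-wise into a level-1 feature matrix, and a meta-learner is fit on that matrix. This numpy sketch uses made-up probabilities and a hand-rolled logistic-regression meta-learner in place of the paper's XGBoost classifier, purely to show the data flow:

```python
import numpy as np

# Hypothetical malignant-probability outputs of three base models (standing in
# for ResNet50V2 / ResNet101V2 / ResNet152V2) on six breast masses.
p_resnet50  = np.array([0.91, 0.12, 0.85, 0.40, 0.08, 0.77])
p_resnet101 = np.array([0.88, 0.20, 0.70, 0.55, 0.15, 0.81])
p_resnet152 = np.array([0.93, 0.09, 0.78, 0.48, 0.11, 0.69])
y_true      = np.array([1,    0,    1,    0,    0,    1])

# Level-1 feature matrix: one column of probabilities per base model.
X_meta = np.column_stack([p_resnet50, p_resnet101, p_resnet152])

# Meta-learner: logistic regression fit by plain gradient descent. The paper
# uses an XGBoost classifier here; this substitute only shows the stacking idea.
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_meta @ w + b)))
    w -= 0.5 * (X_meta.T @ (p - y_true)) / len(y_true)
    b -= 0.5 * (p - y_true).mean()

y_pred = (1.0 / (1.0 + np.exp(-(X_meta @ w + b))) >= 0.5).astype(int)
print(y_pred)
```

The meta-learner can weight each base model differently per pattern of agreement, which is what lets a stacked ensemble improve on the simple average of the three networks' outputs.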