Advances in Applied Science Research Open Access

  • ISSN: 0976-8610

Perspective - (2022) Volume 13, Issue 9

Using Histopathological Images, an Ensemble of Swin Transformers for Multi-Class Breast Cancer Classification
Clare Lawton*
 
Department of Oncology, Coventry University, UK
 
*Correspondence: Clare Lawton, Department of Oncology, Coventry University, UK, Email:

Received: 31-Aug-2022, Manuscript No. AASRFC-22-14675; Editor assigned: 02-Sep-2022, Pre QC No. AASRFC-22-14675(PQ); Reviewed: 16-Sep-2022, QC No. AASRFC-22-14675; Revised: 21-Sep-2022, Manuscript No. AASRFC-22-14675(R); Published: 28-Sep-2022, DOI: 10.36648/0976-8610.13.9.90

Introduction

Breast cancer (BC), one of the deadliest types of cancer, claims many lives worldwide. The most common imaging methods for BC screening are mammography and ultrasonography. These imaging techniques, however, cannot reliably discriminate between benign and malignant tumour subtypes. In this situation, histopathology images may better distinguish between benign and malignant forms of cancer. Vision transformers have lately attracted interest in medical imaging due to their performance on a range of computer vision tasks. The Swin Transformer (SwinT), a variant of the vision transformer, uses non-overlapping shifted windows to carry out a variety of vision tasks.
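The paper does not include code, but the shifted-window idea can be illustrated with a minimal numpy sketch: a feature map is split into non-overlapping windows (attention is computed within each window), and in the next block the map is cyclically shifted so that new windows straddle the old window boundaries. The function names and the toy 8×8 map below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def window_partition(x, window_size):
    """Split a feature map of shape (H, W, C) into non-overlapping
    windows of shape (num_windows, window_size, window_size, C)."""
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size, window_size, C)

def shift_windows(x, shift):
    """Cyclically shift the map so the next block's windows straddle the
    previous block's window boundaries (Swin's 'shifted window' trick)."""
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))

# Toy 8x8 single-channel feature map, window size 4, shift 2.
feat = np.arange(64, dtype=np.float32).reshape(8, 8, 1)
windows = window_partition(feat, 4)                    # 4 windows of 4x4
shifted = window_partition(shift_windows(feat, 2), 4)  # windows cross old borders
```

Alternating plain and shifted windows is what lets window-local attention still propagate information across the whole image.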

Description

We investigated the performance of an ensemble of SwinTs (tiny, small, base, and large) on the two-class classification of benign vs. malignant tumours and the eight-class classification of four benign and four malignant subtypes, using the openly available BreaKHis dataset of 7,909 histopathology images acquired at magnification factors of 40×, 100×, 200×, and 400×. For the eight-class task, the ensemble of SwinTs achieved an average test accuracy of 96.0%. An ensemble of SwinTs could therefore identify BC subtypes from histopathological images, relieving pathologists of part of their burden.
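The article does not specify how the four models' outputs are combined; a common choice, sketched below as an assumption, is soft voting: average the per-model class-probability distributions and take the most probable class. The random "softmax outputs" stand in for the four Swin variants' predictions on three hypothetical images over the eight subtypes.

```python
import numpy as np

# Hypothetical softmax outputs from four Swin variants (tiny, small,
# base, large) for a batch of 3 histopathology images over 8 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3, 8))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Soft-voting ensemble: average the four probability distributions,
# then take the most probable class per image.
ensemble_probs = probs.mean(axis=0)            # shape (3, 8)
predictions = ensemble_probs.argmax(axis=-1)   # one class label per image
```

Averaging probabilities (rather than hard majority voting) lets a model that is confidently right outvote models that are marginally wrong.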

Breast cancer (BC) is the second most lethal malignancy worldwide after lung cancer and a leading cause of morbidity and mortality among women. By 2030, its prevalence in the US could increase by more than 50%. Non-invasive BC diagnostic procedures include physical examination and imaging techniques such as mammography, ultrasonography, and magnetic resonance imaging. However, a physical examination may miss the disease at an early stage, and imaging techniques have limited sensitivity for detailed evaluation of malignant regions and identification of cancer subtypes. Although a breast biopsy is a minimally invasive procedure, histological imaging of the sampled tissue can precisely localise the tumour and identify the cancer subtype.

Manual examination by a pathologist, however, can be taxing and prone to error. Automated techniques for categorising BC subtypes are therefore required. Deep learning has transformed a variety of industries over the last decade, including healthcare, with applications such as accurate disease detection, prognosis, and robotic-assisted surgery. The SwinTs are effective across all magnification factors. Performance is best at the lowest magnification, however, as images captured at 40× may contain more discriminative information. The patch splitting and merging operations may explain the Swin transformers' higher performance: combined with the shifted-window mechanism, they allow the model to learn the most important abstract features of an image at both local and global scales.
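The patch-merging step mentioned above can also be sketched in a few lines. In a Swin transformer, each 2×2 neighbourhood of patches is concatenated along the channel axis, halving the spatial resolution and quadrupling the channels (a real SwinT then projects 4C down to 2C with a linear layer, which is omitted in this simplified sketch).

```python
import numpy as np

def patch_merge(x):
    """Swin-style patch merging: concatenate each 2x2 group of patches
    along the channel axis, halving H and W and giving 4C channels."""
    tl = x[0::2, 0::2]   # top-left patch of each 2x2 group
    bl = x[1::2, 0::2]   # bottom-left
    tr = x[0::2, 1::2]   # top-right
    br = x[1::2, 1::2]   # bottom-right
    return np.concatenate([tl, bl, tr, br], axis=-1)

# An 8x8 grid of patches with 16 channels merges to a 4x4 grid of 64.
feat = np.ones((8, 8, 16), dtype=np.float32)
merged = patch_merge(feat)
```

This progressive coarsening is what gives the model a hierarchy of feature scales, from fine local texture to global tissue structure.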

Conclusion

This paper proposes an ensemble of four Swin transformer models for classifying benign vs. malignant tumours and their subtypes from histopathological images in the BreaKHis dataset. Both the individual models and the ensemble performed better on the eight-class task at a magnification factor of 40×. In every instance, Swin-L was the best individual model. Without any pre-processing or augmentation steps, the ensemble of Swin transformers surpassed earlier efforts for the two-class (benign vs. malignant) and eight-class (benign and malignant subtypes) classification of BC, with overall test accuracies of 99.6% and 96.0% (at 40×), respectively. The complete framework is freely available on GitHub. BreaST-Net could consequently be used for computer-aided diagnosis of benign and malignant BC subtypes, relieving pathologists.

Citation: Lawton C (2022) Using Histopathological Images, an Ensemble of Swin Transformers for Multi-Class Breast Cancer Classification. Adv Appl Sci Res. 13:90.

Copyright: © 2022 Lawton C. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.