Results: 11

(searched for: doi:10.1016/j.jksuci.2020.12.010)
Sandhya Clement
Published: 29 November 2022
Abstract:
Advances in the field of image classification using convolutional neural networks (CNNs) have greatly improved the accuracy of medical image diagnosis by radiologists. Numerous research groups have applied CNN methods to diagnose respiratory illnesses from chest x-rays, and have extended this work to prove the feasibility of rapidly diagnosing COVID-19 to high degrees of accuracy. One issue in previous research has been the use of datasets containing only a few hundred chest x-ray images showing COVID-19, causing CNNs to overfit the image data. This leads to lower accuracy when the model attempts to classify new images, as it would be expected to do in clinical use. In this work, we present a model trained on the COVID-QU-Ex dataset, containing 33,920 chest x-ray images overall, with an equal share of COVID-19, Non-COVID pneumonia, and Normal images. The model itself is an ensemble of pre-trained CNNs (ResNet50, VGG19, VGG16) and GLCM textural features. It achieved a 98.34% binary classification accuracy (COVID-19/no COVID-19) on a balanced test dataset of 6581 chest x-rays, and 94.68% accuracy for distinguishing between COVID-19, Non-COVID pneumonia and Normal chest x-rays. We also discuss the effects of dataset size, demonstrating that a 98.82% 3-class accuracy can be achieved with the model when the training dataset contains only a few thousand images, but that the generalisability of the model suffers with such small datasets.
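A minimal sketch of the kind of pipeline this abstract describes, combining GLCM textural features with embeddings from one pre-trained backbone; the single-backbone setup, 224x224 grayscale input, and logistic-regression head are assumptions for illustration, not the authors' released code.

import numpy as np
from skimage.feature import graycomatrix, graycoprops      # scikit-image >= 0.19
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

def glcm_features(img_u8):
    # Texture statistics from a grey-level co-occurrence matrix (one offset/angle for brevity).
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")   # one of the three CNNs

def combined_features(img_u8):
    # img_u8: 224x224 uint8 chest x-ray; replicate to 3 channels for the ImageNet backbone.
    rgb = np.repeat(img_u8[..., None], 3, axis=-1).astype("float32")
    deep = backbone.predict(preprocess_input(rgb[None]), verbose=0)[0]
    return np.concatenate([deep, glcm_features(img_u8)])    # 2048 CNN features + 4 texture features

# X = np.stack([combined_features(im) for im in images])
# clf = LogisticRegression(max_iter=1000).fit(X, y)         # stand-in for the paper's ensemble head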
Published: 22 October 2022
by MDPI
Journal: Diagnostics
Abstract:
The COVID-19 pandemic has had a significant impact on many lives and the economies of many countries since late December 2019. Early detection with high accuracy is essential to help break the chain of transmission. Several radiological methodologies, such as CT scan and chest X-ray, have been employed in diagnosing and monitoring COVID-19 disease. Still, these methodologies are time-consuming and require trial and error. Machine learning techniques are currently being applied by several studies to deal with COVID-19. This study exploits the latent embeddings of variational autoencoders combined with ensemble techniques to propose three effective EVAE-Net models to detect COVID-19 disease. Two encoders are trained on chest X-ray images to generate two feature maps. The feature maps are concatenated and passed to either a combined or individual reparameterization phase to generate latent embeddings by sampling from a distribution. The latent embeddings are concatenated and passed to a classification head for classification. The COVID-19 Radiography Dataset from Kaggle is the source of chest X-ray images. The performances of the three models are evaluated. The proposed model shows satisfactory performance, with the best model achieving 99.19% and 98.66% accuracy on four classes and three classes, respectively.
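As a rough illustration of the reparameterization step the abstract refers to (the standard VAE technique, not the paper's published code), each encoder's mean and log-variance outputs are turned into a sampled latent embedding that stays differentiable:

import tensorflow as tf

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); sampling stays differentiable w.r.t. mu and log_var.
    eps = tf.random.normal(shape=tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps

# With two encoders, the sampled embeddings can be concatenated before the classification head:
# z = tf.concat([reparameterize(mu1, logvar1), reparameterize(mu2, logvar2)], axis=-1)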
A. Jothi Prabha, N. Venkateswaran, Prabaharan Sengodan
Published: 24 May 2022
The publisher has not yet granted permission to display this abstract.
Rafid Mostafiz, Iffat Jabin, Muhammad Minoar Hossain
International Journal of Ambient Computing and Intelligence, Volume 13, pp 1-18; https://doi.org/10.4018/ijaci.293163

Abstract:
Brain tumors are among the most hazardous diseases worldwide in recent times. The development of intelligent systems has extended their application to automated medical diagnosis. However, the result of image-based medical diagnosis strongly depends on the selection of relevant features. This research focuses on the automatic detection of brain tumors based on the concatenation of curvelet transform and convolutional neural network (CNN) features extracted from preprocessed MRI sequences of the brain. Relevant features are selected from the feature vector using the mutual-information-based minimum redundancy maximum relevance (mRMR) method. Detection is performed using a bagging ensemble classifier. The experiment is performed on two standard datasets, BraTS 2018 and BraTS 2019. After five-fold cross-validation, we obtained an outperforming accuracy of 98.96%.
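A simplified sketch of that selection-plus-bagging stage (hypothetical: it ranks features by mutual information only, whereas full mRMR also penalizes redundancy among already-selected features; X and y stand for the concatenated curvelet/CNN feature matrix and the labels):

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def select_top_k(X, y, k=200):
    # Relevance-only ranking by mutual information with the class label.
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1][:k]

# idx = select_top_k(X, y)
# bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
# scores = cross_val_score(bag, X[:, idx], y, cv=5)   # five-fold cross-validation, as in the abstract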
Torikul Islam Palash, Redwanul Islam, Monisha Basak, Amit Dutta Roy
Abstract:
COVID-19 has become one of the most virulent, acute, and life-threatening diseases in recent times. No clinically approved drug is yet available for its treatment. Therefore, early and swift detection is essential for reducing overall mortality. The chest x-ray image is one possible alternative method for detecting COVID-19. Researchers are exploring image processing techniques along with deep learning-based models such as AlexNet, VGGNet, SqueezeNet, GoogleNet, etc., to detect COVID-19. This study aims to formulate, implement and investigate deep learning-based models and their probable hyperparameter tuning for obtaining the best results when identifying COVID-19 using chest x-ray images. To meet this objective, images from different publicly available databases were collected. In this paper, ResNet18, ResNet50V2, DenseNet121, DenseNet201, modified DenseNet201 and VGG16 were used to detect COVID-19. From the experimental results, modified DenseNet201 showed the best performance, with 99.5% mean accuracy, 99.5% mean F1 score and 100% mean sensitivity in binary (COVID-19 and normal) classification, and 98.33% mean accuracy, 98.34% mean F1 score, and 98.34% mean sensitivity (98% sensitivity for COVID-19) in 3-class (COVID-19, pneumonia, normal) classification. This may contribute to the design and implementation of a system that can detect COVID-19 automatically in the near future and enhance the quality of healthcare services.
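For orientation, a minimal transfer-learning sketch in the spirit of the models listed above (an unmodified DenseNet201 backbone with a new 3-class head; the frozen backbone, dropout rate, and optimizer settings are assumptions, not the paper's "modified DenseNet201"):

import tensorflow as tf
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                  # freeze ImageNet weights for the first phase
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),                       # illustrative regularization choice
    tf.keras.layers.Dense(3, activation="softmax"),     # COVID-19 / pneumonia / normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)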
Published: 2 December 2021
by MDPI
Journal: Applied Sciences
Applied Sciences, Volume 11; https://doi.org/10.3390/app112311423

Abstract:
The COVID-19 pandemic has claimed the lives of millions of people and put a significant strain on healthcare facilities. To combat this disease, it is necessary to monitor affected patients in a timely and cost-effective manner. In this work, CXR images were used to identify COVID-19 patients. We compiled a CXR dataset with an equal number (2313 each) of COVID-positive, pneumonia, and normal CXR images and utilized various transfer learning models as base classifiers, including VGG16, GoogleNet, and Xception. The proposed methodology combines fuzzy ensemble techniques, such as Majority Voting, Sugeno Integral, and Choquet Fuzzy, and adaptively combines the decision scores of the transfer learning models to identify coronavirus infection from CXR images. The proposed fuzzy ensemble methods outperformed each individual transfer learning technique and several state-of-the-art ensemble techniques in terms of accuracy and prediction quality. Specifically, VGG16 + Choquet Fuzzy, GoogleNet + Choquet Fuzzy, and Xception + Choquet Fuzzy achieved accuracies of 97.04%, 98.48%, and 99.57%, respectively. The results of this work are intended to help medical practitioners achieve an earlier detection of coronavirus compared to other detection strategies, which can further save millions of lives and advantageously influence society.
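To make the Choquet-based fusion concrete, here is a small sketch of a discrete Choquet integral over three classifiers' scores for one class; the fuzzy measure values below are invented for illustration (the paper chooses or fits its own), and the model indexing is hypothetical.

import numpy as np

# Hypothetical fuzzy measure over subsets of {0: VGG16, 1: GoogleNet, 2: Xception}.
MU = {frozenset(): 0.0,
      frozenset({0}): 0.30, frozenset({1}): 0.35, frozenset({2}): 0.45,
      frozenset({0, 1}): 0.60, frozenset({0, 2}): 0.75, frozenset({1, 2}): 0.80,
      frozenset({0, 1, 2}): 1.0}

def choquet(scores):
    # scores: per-model confidence for one class, e.g. np.array([0.91, 0.84, 0.97]).
    order = np.argsort(scores)                           # ascending
    prev, fused = 0.0, 0.0
    for i, idx in enumerate(order):
        coalition = frozenset(int(j) for j in order[i:])  # models scoring at least as high
        fused += (scores[idx] - prev) * MU[coalition]
        prev = scores[idx]
    return fused

# fused_per_class = [choquet(col) for col in score_matrix.T]   # score_matrix: (3 models, n classes)
# prediction = int(np.argmax(fused_per_class))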
Mustafa Ghaderzadeh, Mehrad Aria
Published: 22 August 2021
BioMed Research International, Volume 2021, pp 1-16; https://doi.org/10.1155/2021/9942873

Abstract:
Purpose. Due to the excessive use of raw materials in diagnostic tools and equipment during the COVID-19 pandemic, there is a dire need for cheaper and more effective methods in the healthcare system. With the development of artificial intelligence (AI) methods in medical sciences as low-cost and safer diagnostic methods, researchers have turned their attention to the use of imaging tools with AI that have fewer complications for patients and reduce the consumption of healthcare resources. Despite its limitations, X-ray is suggested as the first-line diagnostic modality for detecting and screening COVID-19 cases. Method. This systematic review assessed the current state of AI applications and the performance of algorithms in X-ray image analysis. The search strategy yielded 322 results from four databases and Google Scholar, 60 of which met the inclusion criteria. The performance statistics included the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity. Result. The average sensitivity and specificity of CXR equipped with AI algorithms for COVID-19 diagnosis were 96 (83-100) and 92 (80-100), respectively. For common X-ray methods in COVID-19 detection, these values were 0.56 (95% CI 0.51-0.60) and 0.60 (95% CI 0.54-0.65), respectively. AI has substantially improved the diagnostic performance of X-rays in COVID-19. Conclusion. X-rays equipped with AI can serve as a tool to screen the cases requiring CT scans. The use of this tool does not waste time or impose extra costs, has minimal complications, and can thus decrease or remove unnecessary CT slices and other healthcare resource use.
Elena Battini Sönmez
Journal of King Saud University - Computer and Information Sciences, Volume 34, pp 6199-6207; https://doi.org/10.1016/j.jksuci.2021.07.005

Abstract:
The Coronavirus disease is quickly spreading all over the world and the emergency situation is still out of control. The latest achievements of deep learning algorithms suggest the use of deep Convolutional Neural Networks to implement a computer-aided diagnostic system for the automatic classification of COVID-19 CT images. In this paper, we propose to employ a feature-wise attention layer in order to enhance the discriminative features obtained by convolutional networks. Moreover, the original performance of the network has been improved using the mixup data augmentation technique. This work compares the proposed attention-based model against the stacked attention networks, and traditional versus mixup data augmentation approaches. We found that the feature-wise attention extension, while outperforming the stacked attention variants, achieves remarkable improvements over the baseline convolutional neural networks. That is, the ResNet50 architecture extended with a feature-wise attention layer obtained a 95.57% accuracy score, which, to the best of our knowledge, sets the state of the art on the challenging COVID-CT dataset.
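The mixup augmentation mentioned here is a standard technique and easy to sketch (this is the generic formulation, not necessarily the paper's exact hyperparameters): two training images and their one-hot labels are combined convexly with a Beta-distributed weight.

import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    # x1, x2: image arrays; y1, y2: one-hot label vectors; alpha controls Beta(alpha, alpha).
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2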
International Journal of Environmental Research and Public Health, Volume 18; https://doi.org/10.3390/ijerph18062842

Abstract:
Since December 2019, the world has been devastated by the Coronavirus Disease 2019 (COVID-19) pandemic. Emergency Departments have been experiencing situations of urgency where clinical experts, without long experience and mature means in the fight against COVID-19, have to rapidly decide the most proper patient treatment. In this context, we introduce an artificially intelligent tool for effective and efficient Computed Tomography (CT)-based risk assessment to improve treatment and patient care. In this paper, we introduce a data-driven approach built on top of volume-of-interest aware deep neural networks for automatic COVID-19 patient risk assessment (discharged, hospitalized, intensive care unit) based on lung infection quantization through segmentation and, subsequently, CT classification. We tackle the high and varying dimensionality of the CT input by detecting and analyzing only a sub-volume of the CT, the Volume-of-Interest (VoI). Differently from recent strategies that consider infected CT slices without requiring any spatial coherency between them, or use the whole lung volume by applying abrupt and lossy volume down-sampling, we assess only the “most infected volume” composed of slices at its original spatial resolution. To achieve the above, we create, present and publish a new labeled and annotated CT dataset with 626 CT samples from COVID-19 patients. The comparison against such strategies proves the effectiveness of our VoI-based approach. We achieve remarkable performance on patient risk assessment evaluated on balanced data by reaching 88.88%, 89.77%, 94.73% and 88.88% accuracy, sensitivity, specificity and F1-score, respectively.
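As a rough sketch of the Volume-of-Interest idea described above (assumed logic, not the authors' implementation): given per-slice infection masks from a segmentation model, pick the contiguous block of slices with the largest lesion volume and classify only that sub-volume.

import numpy as np

def most_infected_voi(ct_volume, infection_masks, n_slices=32):
    # ct_volume, infection_masks: arrays of shape (depth, height, width); masks are binary lesions.
    per_slice = infection_masks.reshape(infection_masks.shape[0], -1).sum(axis=1)
    window_sums = np.convolve(per_slice, np.ones(n_slices), mode="valid")  # lesion voxels per window
    start = int(np.argmax(window_sums))
    return ct_volume[start:start + n_slices]    # the "most infected" contiguous sub-volume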