An intelligence design for detection and classification of COVID19 using fusion of classical and convolutional neural network and improved microscopic features selection approach
Open Access
- 8 May 2021
- journal article
- research article
- Published by Wiley in Microscopy Research and Technique
- Vol. 84 (10), 2254-2267
- https://doi.org/10.1002/jemt.23779
Abstract
COVID-19 is an infection of the respiratory system caused by an RNA virus that can infect both animals and humans; in severe cases it causes pneumonia. In this research, hand-crafted and deep microscopic features are used to classify lung infection. The proposed work consists of two phases. In phase I, the infected lung region is segmented using the proposed U-Net deep learning model. Hand-crafted features such as histogram of oriented gradients (HOG), noise-to-harmonic ratio (NHr), and segmentation-based fractal texture analysis (SFTA) are extracted from the segmented image, and optimum features are selected from each feature vector using entropy. In phase II, local binary patterns (LBP), speeded-up robust features (SURF), and deep features are extracted from the input CT images using pretrained networks such as InceptionV3 and ResNet101, and optimum features are again selected based on entropy. Finally, the entropy-selected features are fused in two ways: (i) the hand-crafted features (HOG, NHr, SFTA, LBP, SURF) are horizontally concatenated, and (ii) the hand-crafted features are combined with the deep features. The fused feature vector is passed to ensemble models (boosted tree, bagged tree, and RUSBoosted tree) for COVID-19 classification in two ways: (i) classification using the fused hand-crafted features, and (ii) classification using the fusion of hand-crafted and deep features. The proposed methodology is evaluated on three benchmark datasets. Experiments on two of the datasets show that the fusion of hand-crafted and deep microscopic features yields better results than the fused hand-crafted features alone.
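The entropy-based feature selection and horizontal (serial) fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `entropy_select` and `fuse` are hypothetical helper names, and random matrices stand in for the real HOG, LBP, and ResNet101 feature matrices; one plausible reading of "selection using entropy" — ranking feature columns by Shannon entropy — is assumed.

```python
import numpy as np

def entropy_select(features, k):
    """Keep the k feature columns with the highest Shannon entropy.

    Hypothetical re-implementation of the paper's entropy-based selection:
    each column is normalized into a probability distribution over samples,
    its entropy is computed, and the top-k columns are retained.
    """
    eps = 1e-12
    p = np.abs(features) + eps
    p = p / p.sum(axis=0, keepdims=True)
    ent = -(p * np.log2(p)).sum(axis=0)       # entropy per feature column
    idx = np.argsort(ent)[::-1][:k]           # top-k columns by entropy
    return features[:, np.sort(idx)]

def fuse(*vectors):
    """Serial fusion: horizontally concatenate selected feature vectors."""
    return np.concatenate(vectors, axis=1)

# Random stand-ins for the real feature matrices (8 images each).
rng = np.random.default_rng(0)
hog = entropy_select(rng.random((8, 100)), 20)    # HOG stand-in
lbp = entropy_select(rng.random((8, 59)), 20)     # LBP stand-in
deep = entropy_select(rng.random((8, 2048)), 50)  # ResNet101 stand-in

fused = fuse(hog, lbp, deep)                      # hand-crafted + deep fusion
print(fused.shape)  # (8, 90)
```

The fused matrix would then be passed to an ensemble classifier (e.g., a bagged or RUSBoosted tree) as described above.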
This publication has 35 references indexed in Scilit:
- Artificial Intelligence in Bio-Medical Domain. International Journal of Advanced Computer Science and Applications, 2017
- Deep Residual Learning for Image Recognition. Published by Institute of Electrical and Electronics Engineers (IEEE), 2016
- Rethinking the Inception Architecture for Computer Vision. Published by Institute of Electrical and Electronics Engineers (IEEE), 2016
- Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- An Efficient Algorithm for Fractal Analysis of Textures. Published by Institute of Electrical and Electronics Engineers (IEEE), 2012
- RUSBoost: Improving classification performance when training data is skewed. 2008 19th International Conference on Pattern Recognition, 2008
- Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 2008
- Histograms of Oriented Gradients for Human Detection. Published by Institute of Electrical and Electronics Engineers (IEEE), 2005
- Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
- Bagging predictors. Machine Learning, 1996