Results: 33

(searched for: Deep Learning Applied to the Detection of Masks on Faces)
Published: 24 September 2021
by MDPI
Sensors, Volume 21; https://doi.org/10.3390/s21196387

Abstract:
As the interest in facial detection grows, especially during a pandemic, solutions are sought that will be effective and bring more benefits. This is the case with the use of thermal imaging, which is resistant to environmental factors and makes it possible, for example, to determine the temperature from the detected face, bringing new perspectives and opportunities to use such an approach for health control purposes. The goal of this work is to analyze the effectiveness of deep-learning-based face detection algorithms applied to thermal images, especially for faces covered by virus-protective face masks. As part of this work, a set of thermal images was prepared containing over 7900 images of faces with and without masks. Selected raw-data preprocessing methods were also investigated to analyze their influence on the face detection results. It was shown that the use of transfer learning based on features learned from visible-light images results in mAP greater than 82% for half of the investigated models. The best model turned out to be the one based on the Yolov3 model (its mean average precision (mAP) was at least 99.3%, while its precision was at least 66.1%). The inference time of the selected models on a small and cheap platform allows them to be used in many applications, especially in apps that promote public health.
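The abstract above reports its results as mAP. As an editorial aside (not the authors' code), average precision for a single class can be sketched from a ranked detection list; mAP is then simply the mean of this value over classes:

```python
def average_precision(scored_hits, num_gt):
    """Compute AP for one class from detections sorted by confidence.

    scored_hits: list of booleans, one per detection in descending
    confidence order; True if the detection matches a ground-truth box.
    num_gt: total number of ground-truth boxes for the class.
    """
    tp = 0
    ap = 0.0
    for rank, hit in enumerate(scored_hits, start=1):
        if hit:
            tp += 1
            ap += tp / rank          # precision at this recall point
    return ap / num_gt if num_gt else 0.0

# Perfect ranking: every detection is a true positive.
print(average_precision([True, True], 2))  # 1.0
```

This is the all-points form of AP; detection benchmarks often use an interpolated variant, but the idea is the same.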
S Lokesh Kumar, Yamani Sai Asish,
Published: 23 August 2021
Abstract:
Recently, emerging applications such as banking, mobile payments, and face recognition technology have been booming, steadily increasing the number of users around the world. The extensive deployment of facial recognition systems has drawn close attention to the dependability of facial biometrics against spoof attacks, in which a picture, video, or 3D mask of a real user's face may be used to access facilities or services illegitimately. While a number of anti-spoofing or liveness detection approaches (which identify whether a captured face is live or spoofed) have been suggested, the problem remains unresolved because of the difficulty of discovering discriminative and computationally inexpensive features and techniques for spoof attacks. Existing methods also utilise a full picture or video to determine liveness, even though some facial areas (video frames) are redundant or contribute to confusion in the picture (video). In this paper, we propose a new hybrid deep learning technique: a Convolutional Neural Network (CNN) based architecture with Long Short-Term Memory (LSTM) units, to study its impact on classification. In this technique, a non-softmax function is applied to make effective classification decisions. The hybrid approach is implemented and followed by a comparative analysis with existing conventional and hybrid techniques used for spoof detection. The proposed model is shown to outperform existing deep learning techniques and other hybrid models in terms of precision, recall, F-measure, and accuracy.
Vinod Kumar Yadav, Pritaj Yadav, Shailja Sharma
International Journal of Scientific Research in Computer Science, Engineering and Information Technology pp 503-512; https://doi.org/10.32628/cseit2174106

Abstract:
With the number of motor vehicles increasing day by day, traffic regulation faces many challenges in intelligent road surveillance and governance; this is an important research area in artificial intelligence and deep learning. Among various technologies, computer vision and machine learning algorithms are the most efficient, as a huge amount of vehicle video and image data on roads is available for study. In this paper, we propose an efficient computer-vision-based approach to vehicle detection, recognition, and tracking. We merge one-stage (YOLOv4) and two-stage (R-FCN) detector methods to improve vehicle detection accuracy and speed. Two-stage object detection methods provide high localization and object recognition precision, while one-stage detectors achieve high inference and test speed. The Deep-SORT tracker method is applied to the detected bounding boxes to estimate trajectories. We analyze the performance of the Mask RCNN benchmark, YOLOv3, and the proposed YOLOv4 + R-FCN on the UA-DETRAC dataset, and study parameters such as mean average precision (mAP) and precision-recall.
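A hypothetical sketch of the merging step described above: detections from the one-stage and two-stage detectors are pooled and duplicates suppressed by IoU. The (x1, y1, x2, y2, score) box format and the 0.5 threshold are illustrative assumptions, not the paper's specification:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(dets_a, dets_b, iou_thr=0.5):
    """Pool detections from two detectors, then suppress duplicates.

    Each detection is (x1, y1, x2, y2, score); when two boxes overlap
    above iou_thr, the higher-scoring one is kept.
    """
    pooled = sorted(dets_a + dets_b, key=lambda d: d[4], reverse=True)
    kept = []
    for det in pooled:
        if all(iou(det[:4], k[:4]) < iou_thr for k in kept):
            kept.append(det)
    return kept
```

This is ordinary greedy non-maximum suppression over the union of both detectors' outputs; more elaborate fusion schemes (e.g. weighted box fusion) exist but are not implied by the abstract.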
Sai Kiruthika K. M
International Journal for Research in Applied Science and Engineering Technology, Volume 9, pp 3233-3237; https://doi.org/10.22214/ijraset.2021.37051

Abstract:
COVID-19 is an unparalleled crisis resulting in a huge number of casualties and security problems. To reduce the spread of the coronavirus, people often wear a mask to protect themselves. In this challenging context, the problem of face recognition becomes similar to periocular recognition, involving the iris, pupil, sclera, upper and lower eyelids, eye folds, eye corners, skin texture, fine wrinkles, complexion, skin color, skin pores, etc. In this paper, we propose a reliable method based on discarding the masked region and on deep-learning-based features in order to address the problem of masked face recognition. The first step is to discard the masked face region. Next, we apply a deep learning algorithm to extract the best features from the obtained regions (mostly the eyes and forehead). This yields better accuracy than previous work on recognizing masked faces.
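The "discard the masked region" step can be illustrated with a minimal sketch; the 50% split ratio is an assumption for illustration, since the paper does not specify one here:

```python
def unmasked_region(face_box, keep_ratio=0.5):
    """Return the upper part of a face box (eyes/forehead), discarding
    the lower, mask-covered part.  face_box is (x1, y1, x2, y2) with the
    y axis pointing down; keep_ratio is the fraction of height kept."""
    x1, y1, x2, y2 = face_box
    return (x1, y1, x2, y1 + (y2 - y1) * keep_ratio)

print(unmasked_region((0, 0, 100, 200)))  # (0, 0, 100, 100.0)
```

The returned coordinates would then be used to crop the image before feature extraction.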
Giovanna Castellano, Berardina De Carolis, Nicola Macchiarulo
CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter; https://doi.org/10.1145/3464385.3464730

Abstract:
People communicate emotions through several nonverbal channels, and facial expressions play an important part in this communicative process. Automatic Facial Expression Recognition (FER) is a very hot topic that has attracted a lot of interest in recent years. Most FER systems try to recognize emotions from the entire face of a person. Unfortunately, due to the pandemic, people wear a mask most of the time, so their faces are not fully visible. In our study, we investigate the effectiveness of a FER system in recognizing emotions only from the eyes region, which is the sole visible region when wearing a mask, by comparing the results with the same approach applied to the entire face. The proposed pipeline involves several steps: detecting a face in an image, detecting a mask on the face, extracting the eyes region, and recognizing the emotion expressed on the basis of that region. As expected, emotions that relate mainly to the mouth region (e.g. disgust) are not recognized at all, while positive emotions are the ones best determined by considering only the region of the eyes.
Dingyuan Chen, , Zhuo Zheng, Ailong Ma, Xiaoyan Lu
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 178, pp 345-365; https://doi.org/10.1016/j.isprsjprs.2021.05.016

The publisher has not yet granted permission to display this abstract.
Xinqi Fan, Mingjie Jiang,
IEEE Access, Volume 9, pp 96964-96974; https://doi.org/10.1109/access.2021.3095191

Abstract:
Coronavirus disease 2019 has seriously affected the world. One major protective measure for individuals is to wear masks in public areas. Several regions applied a compulsory mask-wearing rule in public areas to prevent transmission of the virus. Few research studies have examined automatic face mask detection based on image analysis. In this paper, we propose a deep learning based single-shot light-weight face mask detector to meet the low computational requirements for embedded systems, as well as achieve high performance. To cope with the low feature extraction capability caused by the light-weight model, we propose two novel methods to enhance the model’s feature extraction process. First, to extract rich context information and focus on crucial face mask related regions, we propose a novel residual context attention module. Second, to learn more discriminating features for faces with and without masks, we introduce a novel auxiliary task using synthesized Gaussian heat map regression. Ablation studies show that these methods can considerably boost the feature extraction ability and thus increase the final detection performance. Comparison with other models shows that the proposed model achieves state-of-the-art results on two public datasets, the AIZOO and Moxa3K face mask datasets. In particular, compared with another light-weight you only look once version 3 tiny model, the mean average precision of our model is 1.7% higher on the AIZOO dataset, and 10.47% higher on the Moxa3K dataset. Therefore, the proposed model has a high potential to contribute to public health care and fight against the coronavirus disease 2019 pandemic.
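A minimal numerical sketch of the residual-attention idea mentioned above: gate a feature map with a sigmoid attention map and add the input back through a residual connection. The shapes and the externally supplied logits are illustrative assumptions; in the paper the logits come from a small convolutional branch inside the module:

```python
import numpy as np

def residual_attention(features, attn_logits):
    """Minimal residual attention: reweight a feature map by a spatial
    attention map, then add the input back (the residual path).

    features:    (H, W, C) feature map
    attn_logits: (H, W, 1) raw attention scores (pre-sigmoid)
    """
    gate = 1.0 / (1.0 + np.exp(-attn_logits))   # sigmoid gate in (0, 1)
    return features + features * gate           # residual connection
```

The residual path guarantees the module can never suppress features below their input values, which is what lets a light-weight backbone keep its limited capacity while the attention branch highlights mask-related regions.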
Published: 1 July 2021
by MDPI
Sensors, Volume 21; https://doi.org/10.3390/s21134520

Abstract:
Facial recognition is a method of identifying or authenticating the identity of people through their faces. Nowadays, facial recognition systems that use multispectral images achieve better results than those that use only visible spectral band images. In this work, a novel architecture for facial recognition that uses multiple deep convolutional neural networks and multispectral images is proposed. A domain-specific transfer-learning methodology applied to a deep neural network pre-trained in RGB images is shown to generalize well to the multispectral domain. We also propose a skin detector module for forgery detection. Several experiments were planned to assess the performance of our methods. First, we evaluate the performance of the forgery detection module using face masks and coverings of different materials. A second study was carried out with the objective of tuning the parameters of our domain-specific transfer-learning methodology, in particular which layers of the pre-trained network should be retrained to obtain good adaptation to multispectral images. A third study was conducted to evaluate the performance of support vector machines (SVM) and k-nearest neighbor classifiers using the embeddings obtained from the trained neural network. Finally, we compare the proposed method with other state-of-the-art approaches. The experimental results show performance improvements in the Tufts and CASIA NIR-VIS 2.0 multispectral databases, with a rank-1 score of 99.7% and 99.8%, respectively.
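The k-nearest-neighbor classification over network embeddings mentioned above can be sketched as follows; cosine distance and the tiny in-memory gallery are illustrative choices, not the paper's configuration:

```python
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def knn_predict(embedding, gallery, k=1):
    """Classify a face embedding by its k nearest gallery embeddings.

    gallery: list of (embedding, label) pairs produced by the trained
    network; the majority label among the k closest wins.
    """
    ranked = sorted(gallery, key=lambda e: cosine_distance(embedding, e[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

An SVM, as also evaluated in the paper, would replace this distance-based vote with a learned decision boundary over the same embeddings.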
, Indrajit Bhattacharya
Published: 1 July 2021
Abstract:
Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); it started in Wuhan, China, and spread all over the world within a few months in 2019. Governments of all countries had to apply lockdowns to decrease the number of affected patients, as the mortality rate in many countries became very high at that time. In the wake of the 2nd wave of COVID-19, the WHO made it mandatory to use masks in crowded areas, health centers, communities, and other places to prevent the spread of the virus. Many countries have developed vaccines, but at first they were available only for coronavirus front-line workers, not for the general public, so people have to wear masks when going out from home. In recent days, however, it can be observed that people are reluctant to wear masks when entering offices, departmental stores, or local shops, where gatherings might happen at any time. This could lead to the spread of COVID-19 among communities. With the help of computer vision, people who are not wearing masks can be detected and an alarm signal generated. To achieve this challenging task, a face mask detector, 'HybridFaceMaskNet', is proposed, which combines classical machine learning and deep learning algorithms. HybridFaceMaskNet achieves state-of-the-art accuracy on public faces. The real challenges are low-quality images, different distances of people from the camera, and dynamic lighting on faces in daylight or artificial light; this problem can be overcome by using different noise-removal techniques. HybridFaceMaskNet is trained with three different classes of images, 'proper-mask', 'incorrect-mask', and 'no-mask', collected from real-life images and some synthetic data, to generate alarms for different scenarios. HybridFaceMaskNet is trained on Google Colab and compared with different existing face mask detector models.
The model can potentially be deployed on IoT devices, as it is lightweight compared to other existing models.
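The three-class alarm logic described above might look like the following sketch; the class ordering and the confidence threshold are assumptions for illustration, not values from the paper:

```python
CLASSES = ("proper-mask", "incorrect-mask", "no-mask")

def alarm_for(probs, threshold=0.5):
    """Map classifier probabilities over the three mask states to an
    alarm decision: anything other than a confident 'proper-mask'
    triggers the alarm.  probs is aligned with CLASSES."""
    label = CLASSES[max(range(len(probs)), key=probs.__getitem__)]
    if label == "proper-mask" and max(probs) >= threshold:
        return label, False          # no alarm
    return label, True               # raise alarm

print(alarm_for([0.1, 0.2, 0.7]))  # ('no-mask', True)
```

Treating a low-confidence 'proper-mask' as an alarm errs on the side of caution, which seems consistent with the public-health framing of the paper.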
, Krzysztof Mierzejewski
Photonics Letters of Poland, Volume 13, pp 22-24; https://doi.org/10.4302/plp.v13i2.1091

Abstract:
Biometric systems are becoming more and more efficient due to the increasing performance of algorithms. These systems are also vulnerable to various attacks. Presentation of a falsified identity to a biometric sensor is one of the most urgent challenges for recent biometric recognition systems. Exploration of the specific properties of thermal infrared seems to be a comprehensive solution for detecting face presentation attacks. This letter presents the outcome of our study on detecting 3D face masks using thermal infrared imaging and deep learning techniques. We demonstrate the results of a two-step neural-network-featured method for detecting presentation attacks.
Shilpa Sethi, , Trilok Kaushik
Journal of Biomedical Informatics, Volume 120; https://doi.org/10.1016/j.jbi.2021.103848

The publisher has not yet granted permission to display this abstract.
Fan Yang
Abstract:
This thesis explores the contactless estimation of people's vital signs. We designed two camera-based systems and applied object detection algorithms to locate the regions of interest where vital signs are estimated. With the development of deep learning, convolutional neural network (CNN) models now have many applications in the real world. We applied CNN-based frameworks to different types of camera-based systems to improve the efficiency of contactless vital sign estimation. In the field of medical healthcare, contactless monitoring has drawn a lot of attention in recent years because of the wide use of different sensors; however, most methods are still in the experimental phase and have never been used in real applications. We were interested in monitoring the vital signs of patients lying in bed or sitting around the bed at a hospital, which required sensors with a range of 2 to 5 meters. We developed a system based on a depth camera for detecting people's chest area and a radar for estimating the respiration signal. We applied a CNN-based object detection method to locate the position of a subject lying in bed covered with a blanket, and the respiratory-like signal is estimated from the radar device based on the detected subject's location. We also created a manually annotated dataset containing 1,320 depth images. In each depth image the silhouette of the subject's upper body is annotated, as well as the class; in addition, a small subset of the depth images is also labeled with four keypoints for positioning people's chest area. This substantial dataset is built on data collected from anonymous patients at the hospital. Another problem in the field of human vital sign monitoring is that systems seldom monitor multiple vital signs at the same time.
Though a few works have recently attempted this problem, they are all still prototypes with many limitations, such as short operating distance. In this application, we focused on contactlessly estimating subjects' temperature, breathing rate, and heart rate at different distances, with or without a mask. We developed a system based on a thermal and an RGB camera and explored the feasibility of CNN-based object detection algorithms to detect vital signs from human faces with specifically defined RoIs in our thermal camera system. We proposed methods to estimate respiratory rate and heart rate from the thermal and RGB videos. The mean absolute error (MAE) between the estimated HR using the proposed method and the baseline HR for all subjects at different distances is 4.24 ± 2.47 beats per minute; the MAE between the estimated RR and the reference RR for all subjects at different distances is 1.55 ± 0.78 breaths per minute.
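The respiratory-rate estimation step can be illustrated with a toy peak-counting sketch on a synthetic waveform; the actual thesis pipeline works on radar and thermal signals and is certainly more elaborate than this:

```python
import math

def breaths_per_minute(signal, fps):
    """Estimate respiratory rate by counting local maxima in a
    respiratory-like waveform sampled at `fps` frames per second."""
    peaks = sum(
        1 for i in range(1, len(signal) - 1)
        if signal[i - 1] < signal[i] >= signal[i + 1]
    )
    duration_min = len(signal) / fps / 60.0
    return peaks / duration_min

# Synthetic 0.25 Hz breathing (15 breaths/min) sampled at 10 fps for 60 s.
sig = [math.sin(2 * math.pi * 0.25 * t / 10.0) for t in range(600)]
print(round(breaths_per_minute(sig, 10)))  # 15
```

Real signals would first need detrending and band-pass filtering to the plausible respiratory band before peaks are counted.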
Nadia M. Nawwar, Salama May
International Journal of Innovative Technology and Exploring Engineering, Volume 10, pp 18-23; https://doi.org/10.35940/ijitee.g8893.0510721

Abstract:
During the spread of the COVID-19 pandemic in early 2020, the WHO advised all people in the world to wear face masks to limit the spread of COVID-19. Many facilities required that their employees wear face masks. For the safety of a facility, it is mandatory to recognize the identity of an individual wearing a mask; hence, face recognition of masked individuals is required. In this research, a novel technique based on MobileNet and a Haar-like-feature algorithm is proposed for detecting and recognizing the masked face. Firstly, the authorized person entering the nuclear facility while wearing a face mask is recognized using MobileNet. Secondly, Haar-like features are applied to detect the person's retina and extract the bounding box around the retina, which is compared with the dataset of the person without the mask for recognition. The proposed model, tested on a dataset from Kaggle, yielded an accuracy of 0.99, a loss of 0.08, and an F1-score of 0.98.
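Haar-like features, as referenced above, are rectangle sums computed efficiently from an integral image; a minimal sketch (not the paper's implementation, and with the image as a plain list-of-lists for illustration):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle with top-left (x, y),
    read from the table in constant time."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """Haar-like edge feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A cascade classifier evaluates thousands of such features at many positions and scales; the integral image is what makes that affordable.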
Jian Xiao, Jia Wang, Shaozhong Cao, Yang Li
2021 2nd International Conference on Artificial Intelligence and Information Systems; https://doi.org/10.1145/3469213.3470705

Abstract:
During the 2019-nCoV epidemic, in order to effectively prevent the spread of the virus, people generally wore masks when entering public places, rendering traditional facial recognition technology ineffective. This paper constructs a dataset of face images with masks and proposes a deep-learning-based face recognition algorithm for masked faces. It applies the TensorFlow framework and proposes an improved MTCNN algorithm to cluster the effective feature regions of the face; using the FaceNet model shortens the time of face detection and improves the efficiency of face recognition. The test results show that the improved model has an average accuracy of 91% in recognizing faces wearing masks, and an average recall rate of 92%. Compared with the unimproved algorithm, the candidate frame of the improved algorithm focuses on important feature information, and the accuracy rate increased by an average of 3%.
Bhuwan Bhattarai, Yagya Raj Pandeya, Joonwhoan Lee
2021 6th International Conference on Machine Learning Technologies; https://doi.org/10.1145/3468891.3468899

Abstract:
The COVID-19 pandemic has caused a global health crisis. In response, the World Health Organization (WHO) has suggested wearing a face mask in public for effective protection. While much of the global population has adhered to these recommendations, some continue to wear the face mask improperly or refuse to wear the mask at all. It is essential that face masks are properly worn in public. To address this, we implemented computer vision, a recent advanced technology, to detect the status of face masks on individuals in crowded public places. Our research is intended to aid in minimizing the spread of coronavirus by developing technology for authorities to discern if face masks are being worn properly. We collected data from the Internet and increased it synthetically by augmentation. Two publicly available datasets were merged: the face mask detection dataset and the MASKEDFACE-NET dataset. Our data was annotated manually and then made into a graphical user interface (GUI) for semi-automatic annotation. The multiple object detection networks were trained for three states of face mask wearing: with_mask, without_mask, and mask_weared_incorrect. Four two-stage object detection models were trained and tested during the experiment. The results are compared based on the mean average precisions and scores. The networks achieved above 91% accuracy in both mean average precisions and scores for the three classes of object. We applied these object detectors to our annotation tool for quick semi-supervised annotation. The proposed mask status detection system can aid in reducing the spread of COVID-19 if deployed in a real-world scenario. Our data labeling tool with annotation, augmentation, and automatic suggestion can help further research into these types of technologies.
Yang Liu
Published: 14 April 2021
Abstract:
[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] With the rapid development of deep learning in computer vision, especially deep convolutional neural networks (CNNs), significant advances have been made in recent years in object recognition and detection in images. Highly accurate detection results have been achieved for large objects, whereas detection accuracy on small objects remains low. This dissertation focuses on investigating deep learning methods for small object detection in images and proposing new methods with improved performance. First, we conducted a comprehensive review of existing deep learning methods for small object detection, in which we summarized and categorized major techniques and models, identified major challenges, and listed some future research directions. Existing techniques were categorized into using contextual information, combining multiple feature maps, creating sufficient positive examples, and balancing foreground and background examples. Methods developed in four related areas, generic object detection, face detection, object detection in aerial imagery, and segmentation, were summarized and compared. In addition, the performances of several leading deep learning methods for small object detection, including YOLOv3, Faster R-CNN, and SSD, were evaluated on three large benchmark image datasets of small objects. Experimental results showed that Faster R-CNN performed the best, while YOLOv3 was a close second. Furthermore, a new deep learning method, called Retina-context Net, was proposed and outperformed state-of-the-art one-stage deep learning models, including SSD, YOLOv3, and RetinaNet, on the COCO and SUN benchmark datasets. Secondly, we created a new dataset for bird detection, called Little Birds in Aerial Imagery (LBAI), from real-life aerial imagery. LBAI contains birds with sizes ranging from 10 by 10 pixels to 40 by 40 pixels.
We adapted and applied several state-of-the-art deep learning models to LBAI, including object detection models such as YOLOv2, SSH, and Tiny Face, and instance segmentation models such as U-Net and Mask R-CNN. Our empirical results illustrated the strengths and weaknesses of these methods, showing that SSH performed the best for easy cases, whereas Tiny Face performed the best for hard cases with cluttered backgrounds. Among small instance segmentation methods, U-Net achieved slightly better performance than Mask R-CNN. Thirdly, we proposed a new graph neural network-based object detection algorithm, called GODM, to take the spatial information of candidate objects into consideration in small object detection. Instead of detecting small objects independently as existing deep learning methods do, GODM treats the candidate bounding boxes generated by existing object detectors as nodes and creates edges based on the spatial or semantic relationships between the candidate bounding boxes. GODM contains four major components: node feature generation, graph generation, node class labelling, and a graph convolutional neural network model. Several graph generation methods were proposed. Experimental results on the LBDA dataset show that GODM significantly outperformed the existing state-of-the-art object detector Faster R-CNN, by up to 12% in accuracy. Finally, we proposed a new computer-vision-based grass analysis method using machine learning. To deal with variation in lighting conditions, a two-stage segmentation strategy is proposed for grass coverage computation based on a blackboard background. On a real-world dataset we collected from natural environments, the proposed method was robust to varying environments, lighting, and colors. For grass detection and coverage computation, the error rate was just 3%.
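The GODM idea of turning candidate boxes into graph nodes with spatially defined edges can be sketched as follows; the center-distance criterion is only one of the several edge definitions the dissertation mentions, and the exact rule here is an illustrative assumption:

```python
def box_center(b):
    """Center point of an (x1, y1, x2, y2) box."""
    return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)

def build_box_graph(boxes, max_dist):
    """Treat candidate boxes as graph nodes and connect pairs whose
    centers lie within max_dist.  Returns edges as index pairs (i, j),
    i < j; a GNN would then propagate features along these edges."""
    edges = []
    for i in range(len(boxes)):
        cx_i, cy_i = box_center(boxes[i])
        for j in range(i + 1, len(boxes)):
            cx_j, cy_j = box_center(boxes[j])
            if ((cx_i - cx_j) ** 2 + (cy_i - cy_j) ** 2) ** 0.5 <= max_dist:
                edges.append((i, j))
    return edges
```

For small objects such as distant birds, nearby detections are strong mutual evidence, which is exactly what message passing over such a graph can exploit.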
Daniel Matthias, Chidozie Managwu, O. Olumide
Journal of Computer Science and Its Application, Volume 27; https://doi.org/10.4314/jcsia.v27i2.5

Abstract:
The COVID-19 pandemic is, without any doubt, changing our world in ways that are beyond our wildest imagination. In a bid to curb the spiraling negative fallout from the virus, which has resulted in a large number of casualties and security concerns, the World Health Organization, amongst other safety protocols, recommended the compulsory wearing of face masks by individuals in public spaces. The problem with the enforcement of this and other relevant safety protocols, all over the world, is the reluctance and outright refusal of citizens to comply and the inability of relevant agencies to monitor and enforce compliance. This paper explores the development of CCTV-enabled facial mask recognition software that will facilitate the monitoring and enforcement of this protocol. Such models can be particularly useful for security purposes in checking whether disease transmission is being kept in check. A constructive research methodology was adopted, in which a pre-trained deep convolutional neural network (CNN) was used (mostly on the eyes and forehead regions) and the most probable limit (MPL) was used for the classification process. The designed method uses two datasets for training in order to detect key facial features and apply a decision-making algorithm. Experimental findings on the Real-World-Masked-Face-Dataset indicate high success in recognition. A proof of concept as well as a development base are provided towards reducing the spread of COVID-19 by allowing people to validate their face mask via their webcam. We recommend use of the app, and further investigation into the development of highly robust detectors, by training a deep learning model with respect to specified face-feature categories or to correctly and incorrectly worn mask categories.
Shayan Khosravipour, Erfan Taghvaei, Nasrollah Moghadam Charkari
Published: 27 March 2021
Abstract:
The exponential spread of COVID-19 in over 215 countries has led WHO to recommend face masks and gloves for a safe return to school or work. We used artificial intelligence and deep learning algorithms for automatic face masks and gloves detection in public areas. We investigated and assessed the efficacy of two popular deep learning algorithms of YOLO (You Only Look Once) and SSD MobileNet for the detection and proper wearing of face masks and gloves trained over a data set of 8250 images imported from the internet. YOLOv3 is implemented using the DarkNet framework, and the SSD MobileNet algorithm is applied for the development of accurate object detection. The proposed models have been developed to provide accurate multi-class detection (Mask vs. No-Mask vs. Gloves vs. No-Gloves vs. Improper). When people wear their masks improperly, the method detects them as an improper class. The introduced models provide accuracies of (90.6% for YOLO and 85.5% for SSD) for multi-class detection. The systems' results indicate the efficiency and validity of detecting people who do not wear masks and gloves in public.
IOP Conference Series: Materials Science and Engineering, Volume 1070; https://doi.org/10.1088/1757-899x/1070/1/012061

Abstract:
The World Health Organization (WHO) has stated that the COVID-19 virus spreads in two ways: respiratory droplets and physical contact. Avoiding the spread of this virus therefore requires some precautionary steps: social distancing and the wearing of masks. Of these two precautions, mask wearing is considered the more important factor for limiting the spread of the COVID-19 virus, because droplets can land on any surface. So, keeping track of whether people are wearing masks is important. Here we present a mask detection system that is able to detect any type of mask, and masks of different shapes, from video streams, for following the rules applied by the government. A deep learning algorithm is used, and the PyTorch library for Python is used for mask detection from images/video streams. The proposed system is able to detect people wearing masks and those who are not wearing them.
Yunjie Xiang, Haiyan Yang, Rong Hu, Chih-Yu Hsu
2021 IEEE International Conference on Power Electronics, Computer Applications (ICPECA) pp 314-318; https://doi.org/10.1109/icpeca51329.2021.9362685

Abstract:
For fatigue detection of a driver wearing a mask, traditional fatigue driving detection methods cannot effectively detect the face: the characteristics of the mouth area are lost due to the mask's occlusion. Therefore, the extraction of fatigue features in the eye area becomes very important, and the accuracy of eye area detection directly affects the performance of the fatigue driving detection algorithm. At present, YOLOv3 and Faster-RCNN are both excellent models in the field of target detection. This article therefore uses the same dataset, sets the same training parameters during training, and evaluates the YOLOv3 model and the Faster-RCNN model under a unified evaluation standard. Experimental results show that YOLOv3 performs better on human eye detection under the same conditions.
Albar Albar, Hendrick Hendrick, Rahmad Hidayat
Knowledge Engineering and Data Science, Volume 3, pp 99-105; https://doi.org/10.17977/um018v3i22020p99-105

Abstract:
Face detection is mostly applied to RGB images, with detection models usually built using deep learning methods. One way to counter face spoofing is to use a thermal camera. Well-known object detection methods include YOLO, Fast R-CNN, Faster R-CNN, SSD, and Mask R-CNN. We propose a segmentation-based Mask R-CNN method to build a face model from thermal images, able to locate the face area in an image. The dataset comprises 1600 images, obtained both by direct capture and from an online dataset. Mask R-CNN was trained for 5 epochs with 131 iterations each. The final model correctly predicted and located the face in test images.
Vu-Anh-Quang Nguyen, Jongoh Park, KyeongJin Joo, Thi Tra Vinh Tran, Trung Tin Tran, Joonhyeon Choi
The 4th International Conference on Future Networks and Distributed Systems (ICFNDS); https://doi.org/10.1145/3440749.3442654

Abstract:
Human temperature measurement systems have been widely deployed in hospitals and public areas during the widespread COVID-19 pandemic. However, current systems at quarantine checkpoints are only capable of measuring human temperature; they cannot combine facial recognition, temperature information, and mask-wearing detection. In hospitals and in public areas such as schools, libraries, train stations, and airports, facial recognition of employees combined with temperature measurement and mask detection would save checking time and update employee status immediately. This study proposes a method that combines body temperature measurement, facial recognition, and mask detection based on deep learning. Furthermore, the proposed method adds anti-spoofing, distinguishing a real face from a face in an image. A depth camera is used to measure the distance between the person's face and the camera, improving the accuracy of facial recognition and anti-spoofing, while a low-cost thermal camera measures body temperature. The methodology and algorithms for face and body temperature recognition are validated through experimental results.
Sarin Watcharabutsarakham, Supphachoke Suntiwichaya, Chanchai Junlouchai, Apichon Kitvimorat
2020 15th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP) pp 1-5; https://doi.org/10.1109/isai-nlp51646.2020.9376825

Abstract:
Since the coronavirus disease 2019 (COVID-19) outbreak spread across the country, our research aims to remind people to wear a face mask when going outside, using facial image detection and classification for authentication and authorization. This paper shows that our CNN-based models can detect face mask wearing, glasses wearing, and gender, with a comparison of two models. The models are trained on a mix of public datasets such as WIDER FACE, AFW, and MAFA, and VGG-Face is used to pre-train the model to improve the detection rate.
E. Omer Akay, K. Oguz Canbek, Yesim Oniz
2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) pp 1-5; https://doi.org/10.1109/ismsit50672.2020.9255052

Abstract:
In this study, an automated attendance-taking system is developed and implemented. Two face detection algorithms, namely Histogram of Oriented Gradients and Haar cascades, are applied and their performances compared. Deep learning based on convolutional neural networks (CNNs) is employed to identify the students in the classroom. Furthermore, a mask-checking feature is included as a measure against the COVID-19 pandemic. A graphical user interface (GUI) is designed using Python.
Adnane Cabani, Karim Hammoudi, Halim Benhabiles, Mahmoud Melkemi
Published: 18 August 2020
Abstract:
The wearing of face masks appears to be a solution for limiting the spread of COVID-19. In this context, efficient recognition systems are needed to check that people's faces are masked in regulated areas. To perform this task, a large dataset of masked faces is necessary for training deep learning models to detect people wearing masks and those not wearing them. Some large datasets of masked faces are available in the literature. However, at the moment there is no available large dataset of masked face images that makes it possible to check whether detected masks are correctly worn. Indeed, many people do not wear their masks correctly due to bad practices, bad behaviors, or vulnerability (e.g., children, elderly people), and several mask-wearing campaigns aim to sensitize people to this problem and to good practices. This work therefore proposes three masked face detection datasets: the Correctly Masked Face Dataset (CMFD), the Incorrectly Masked Face Dataset (IMFD), and their combination for global masked face detection (MaskedFace-Net). These realistic masked face datasets serve a twofold objective: (i) to detect people whose faces are masked or not masked, and (ii) to detect faces whose masks are correctly or incorrectly worn (e.g., at airport portals or in crowds). To the best of our knowledge, no large dataset of masked faces provides such a granularity of classification for mask-wearing analysis. Moreover, this work presents the mask-to-face deformable model applied to generate further masked face images, notably with specific masks. Our datasets of masked face images (137,016 images) are available at https://github.com/cabani/MaskedFace-Net.
, Sushanth Arunachalam, Sagayasree Z.
International Journal of Pervasive Computing and Communications, Volume 16, pp 223-234; https://doi.org/10.1108/ijpcc-05-2020-0046

Abstract:
Purpose: The purpose of this paper is to inspect whether people in a public place maintain social distancing and whether every individual is wearing a face mask. If not, the drone sends an alarm signal to the nearby police station and also sounds an alarm to the public. In addition, it carries masks and drops them to people who need them. Nearby traffic police are also identified, and water packets and masks are delivered to them if needed.
Design/methodology/approach: The proposed system uses an automated drone to perform the inspection. First, the drone is constructed, considering parameters such as component selection and payload calculation; the components are then assembled, and the drone is connected to the Mission Planner software for stability calibration. A YOLOv3 algorithm trained on a custom dataset is embedded in the drone's camera, which runs automatically to detect whether social distance is maintained and whether people in public are wearing masks.
Findings: The proposed system delivers masks to people who are not wearing them and conveys the importance of masks and social distancing. It would therefore operate efficiently after the lockdown period ends and enables easy, automatic social distance inspection. The algorithm can also be embedded in public cameras, whose detections and location details are sent to the same database unit that receives data from the drone. The proposed system thus benefits society by saving time and helping to lower the spread of coronavirus.
Practical implications: It can be implemented after lockdown to inspect people in public gatherings, shopping malls, etc.
Social implications: Automated inspection reduces the manpower needed to inspect the public and can be used in any place.
Originality/value: This is an original project carried out with the help of third-year B.E. CSE undergraduate students. The system was tested and validated for accuracy with real data.
Published: 6 March 2020
by MDPI
Sensors, Volume 20; https://doi.org/10.3390/s20051465

Abstract:
In this paper, we consider building extraction from high-spatial-resolution remote sensing images. At present, most building extraction methods are based on hand-crafted features; however, the diversity and complexity of buildings mean that such methods still face great challenges, so methods based on deep learning have recently been proposed. In this paper, a building extraction framework based on a convolutional neural network and an edge detection algorithm is proposed, called Mask R-CNN Fusion Sobel. Because of the outstanding achievements of Mask R-CNN in the field of image segmentation, this paper improves it and applies it to building extraction from remote sensing images. Our method consists of three parts. First, the convolutional neural network is used for rough localization and pixel-level classification, solving the problem of false and missed extractions by automatically discovering semantic features. Second, the Sobel edge detection algorithm is used to segment building edges accurately, addressing the edge extraction and object integrity limitations of deep convolutional neural networks in semantic segmentation. Third, buildings are extracted by the fusion algorithm. We use the proposed framework to extract buildings in high-resolution remote sensing images from the Chinese satellite GF-2; experiments show that the proposed method achieved an average IoU (intersection over union) of 88.7% and an average Kappa of 87.8%. Our method can therefore be applied to the recognition and segmentation of complex buildings and is superior to classical methods in accuracy.
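The Sobel step used in the fusion can be illustrated with a minimal standalone sketch: two 3x3 kernels estimate horizontal and vertical intensity gradients, and their magnitude marks edge pixels. This is the textbook operator, not the paper's full fusion pipeline:

```python
# Sobel edge detection sketch: GX and GY are the standard 3x3 gradient
# kernels; edge strength at a pixel is the gradient magnitude sqrt(gx^2 + gy^2).
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """img: 2D list of grayscale values. Returns gradient magnitudes
    for interior pixels (image borders are skipped for simplicity)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a vertical step edge the magnitude peaks along the transition, while uniform regions give zero response, which is what lets the fusion step sharpen segment boundaries.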
Jun Zhang, , Zhe Zhu, Maciej A. Mazurowski
Medical Imaging 2018: Computer-Aided Diagnosis, Volume 10575; https://doi.org/10.1117/12.2295436

Abstract:
Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) remains an active and challenging problem. Previous studies often rely on manual annotation of tumor regions, which is not only time-consuming but also error-prone. Recent studies have shown the high promise of deep learning-based methods for various segmentation problems. However, these methods usually face the challenge of a limited number (e.g., tens or hundreds) of medical images for training, leading to sub-optimal segmentation performance. Previous methods also cannot efficiently handle the class-imbalance problem prevalent in tumor segmentation, where the number of voxels in tumor regions is much lower than in the background. To address these issues, we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCNs). Our strategy is to first decompose the original difficult problem into several sub-problems and then solve these relatively simpler sub-problems hierarchically. To precisely identify the locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks defined on the nipples. Finally, based on both the segmentation probability maps and the identified landmarks, we select biopsied tumors from all detected tumors via a tumor selection strategy using the pathology location. We validate our MHL method using data from 272 patients and achieve a mean Dice similarity coefficient (DSC) of 0.72 in breast tumor segmentation. Finally, in a radiogenomic analysis, we show that previously developed image features achieve comparable performance for identifying the luminal A subtype when applied to the automatic segmentation and to a semi-manual segmentation, demonstrating high promise for fully automated radiogenomic analysis in breast cancer.
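The evaluation metric reported above, the Dice similarity coefficient, measures overlap between a predicted segmentation and the ground truth: DSC = 2|A ∩ B| / (|A| + |B|), equal to 1.0 for perfect agreement. A minimal sketch over voxel index sets:

```python
# Dice similarity coefficient between two segmentations, each given as a
# collection of segmented voxel indices (any hashable IDs work).

def dice(pred, truth):
    """Returns 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = set(pred), set(truth)
    if not a and not b:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))
```

For example, two four-voxel masks sharing two voxels give a DSC of 0.5; the paper's mean of 0.72 sits between that and perfect overlap.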