Eaves: An IoT-Based Acoustic Social Distancing Assistant for Pandemic-Like Situations

Abstract
In this article, we propose an IoT-based acoustic solution, Eaves, for ensuring social distancing in public areas during pandemic-like situations. Existing solutions rely either on sensing nearby radio signals, such as Bluetooth, or on image processing of video frames from surveillance cameras. Such methods either require all parties to run the same application or impose line-of-sight constraints. We overcome these restrictions by using audio to ensure social distancing. The variation in the amplitude of audio signals recorded at different distances is the crux of the proposed method. Toward this, we record audio at different distances, extract the human-voice-centric components, and compute the corresponding Mel-frequency cepstral coefficients (MFCCs). We train multiple machine learning models and select the one that predicts distance efficiently with minimum delay, and we also propose possible IoT-based architectures to overcome resource limitations. Through extensive experiments and deployment, we observe a training accuracy of 97 percent and a prediction accuracy of almost 100 percent for distances up to 2 meters.
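To make the pipeline described above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of MFCC-based distance classification in Python. The sampling rate, the random-forest classifier, the 13-coefficient MFCC setting, and the synthetic amplitude-decaying clips used as placeholder training data are all assumptions introduced here for illustration; the paper's actual recordings, features, and model selection may differ.

```python
# Hypothetical sketch of MFCC-based distance classification, not the Eaves codebase.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

SR = 16000  # assumed sampling rate (Hz)

def mfcc_features(audio, sr=SR, n_mfcc=13):
    """Average MFCCs over time to obtain a fixed-length feature vector per clip."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Placeholder data: synthetic clips whose amplitude decays with distance,
# standing in for real recordings labelled by speaker-to-microphone distance.
distances_m = [0.5, 1.0, 1.5, 2.0]
rng = np.random.default_rng(0)
X, y = [], []
for label, d in enumerate(distances_m):
    for _ in range(50):
        t = np.arange(SR) / SR
        voice = np.sin(2 * np.pi * 220 * t) / (d ** 2)          # amplitude falls off with distance
        clip = (voice + 0.01 * rng.standard_normal(SR)).astype(np.float32)
        X.append(mfcc_features(clip))
        y.append(label)

# Train one candidate model and report its accuracy on held-out clips.
X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("distance-bin accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice, several such classifiers would be trained on real recordings and compared on both accuracy and inference latency before deploying the fastest adequate one on the IoT device.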