Abstract
Substantial archival data exists in many forms, including manuscripts, printed papers, photographs, videos, audio recordings, artefacts, sculptures, buildings, and others. Media content such as photographs, audio, and video is especially valuable because it conveys information effectively. Digitising such media is essential: digital copies can be shared easily, made available online or offline, and copied, transported, backed up, and stored at multiple locations with little effort. The main limitation of digitised media, however, is poor searchability, since photographs and recordings contain little or no text that can be extracted through OCR. As a result, this important data cannot be analysed and therefore cannot be used in a meaningful way. To make it meaningful, the people appearing in the images currently have to be identified and tagged manually to create metadata, and most photographs can only be searched through very basic metadata. When such media is hosted on a web platform, searching it becomes a further challenge because of its data formats. The existing search functionality therefore needs to be improved in terms of ease of use, speed of retrieval, and efficiency. Recent advances in machine learning, deep learning, and artificial intelligence offer a variety of techniques to process media data and extract meaningful information from it. This paper describes methods to process digital photographs, classify the people who appear in them, tag them, and save that information in the metadata; various hyperparameters are tuned to improve classification accuracy. This approach offers several benefits, including automatic identification and tagging of people and additional insights, and, most importantly, it drastically improves the searchability of photographs. It is envisaged that about 85% of the manual tagging effort can be eliminated and that the searchability of photographs can be improved by 90%.
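
As a rough illustration of the kind of pipeline envisaged here, the sketch below (not the paper's implementation) detects faces with OpenCV's pre-trained Haar cascade, assigns a label through a placeholder classifier, and writes the resulting tags to a JSON sidecar file so the photograph becomes text-searchable. The file name and the classify_face stub are assumptions made for the example; a real system would replace the stub with the trained person classifier described in the paper.

# Minimal sketch (illustrative only): detect faces, assign hypothetical person
# labels, and store the tags as searchable metadata alongside the photograph.
import json
from pathlib import Path

import cv2  # pip install opencv-python


def detect_faces(image_path: str):
    """Return bounding boxes (x, y, w, h) of faces found in the image."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)


def classify_face(face_crop) -> str:
    """Placeholder for the trained classifier; a real implementation would
    return the predicted person's name for the cropped face region."""
    return "unknown_person"


def tag_photograph(image_path: str) -> dict:
    """Detect and label people in a photograph, then save the tags as metadata."""
    image = cv2.imread(image_path)
    tags = []
    for (x, y, w, h) in detect_faces(image_path):
        face_crop = image[y:y + h, x:x + w]
        tags.append({"name": classify_face(face_crop),
                     "box": [int(x), int(y), int(w), int(h)]})
    metadata = {"file": image_path, "people": tags}
    # Store the generated tags next to the photograph as a JSON sidecar file.
    Path(image_path).with_suffix(".tags.json").write_text(json.dumps(metadata, indent=2))
    return metadata


if __name__ == "__main__":
    print(tag_photograph("archive_photo_001.jpg"))  # hypothetical file name

Writing the tags to a sidecar file keeps the original photograph untouched while still giving a search index plain text to work with; embedding the same information in EXIF or IPTC fields is an equally valid design choice.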