Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition
Open Access
- Published: 26 December 2020
- Sensors, Vol. 21(1), 113
- https://doi.org/10.3390/s21010113
Abstract
Transfer of learning, i.e., leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli activate similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile data sets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensing technologies, including BathTip, Gelsight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results confirm the transferability of learning from vision to touch for interpreting 3D models. Owing to its higher resolution, tactile data from optical tactile sensors achieved higher classification rates with visual features than data from technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers is performed to measure the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates across convolutional layers suggests that a CNN pre-trained on visual data can be used efficiently to classify tactile data after updating only a few of its convolutional layers. Accordingly, we propose a hybrid architecture with a MobileNetV2 backbone that performs both visual and tactile 3D object recognition; MobileNetV2 is chosen for its small size, which allows it to be deployed on mobile devices. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
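The abstract outlines the core recipe: start from a CNN pre-trained on visual images, fine-tune only a few of its convolutional layers on tactile data, and compare per-layer weight updates to gauge how similar visual and tactile features are. Below is a minimal, hypothetical PyTorch sketch of that recipe with a MobileNetV2 backbone; the dataset directory, number of classes, choice of layers to unfreeze, and training hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: fine-tune an ImageNet-pretrained MobileNetV2 on tactile
# images and measure how much each convolutional layer's weights change.
# Paths, class count, and hyperparameters below are illustrative assumptions.
import copy
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

NUM_CLASSES = 10            # assumed number of 3D object classes
DATA_DIR = "tactile_train"  # assumed ImageFolder-style directory of tactile images

# Tactile readings rendered as 2D images and resized to the ImageNet input size.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = torch.utils.data.DataLoader(
    datasets.ImageFolder(DATA_DIR, transform=tfm), batch_size=32, shuffle=True
)

# Start from visual (ImageNet) weights and keep a frozen copy for comparison.
model = mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V1)
pretrained = copy.deepcopy(model)

# Replace the classifier head for the tactile object classes.
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

# Freeze everything, then unfreeze only the last few convolutional blocks,
# mirroring the observation that few layers need updating for tactile data.
for p in model.parameters():
    p.requires_grad = False
for p in model.features[-3:].parameters():
    p.requires_grad = True
for p in model.classifier.parameters():
    p.requires_grad = True

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

model.train()
for epoch in range(5):  # assumed number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Per-layer weight-update magnitude: Frobenius norm of the change in each
# convolutional kernel, a rough proxy for visual/tactile feature similarity.
for (name, w_new), (_, w_old) in zip(
    model.features.named_parameters(), pretrained.features.named_parameters()
):
    if w_new.ndim == 4:  # convolutional kernels only
        delta = torch.norm(w_new.detach() - w_old.detach()).item()
        print(f"{name}: ||dW||_F = {delta:.4f}")
```

The final loop echoes the abstract's weight-update analysis: small per-layer norms indicate convolutional filters learned from visual data that transfer to tactile data largely unchanged.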
Funding Information
- Natural Sciences and Engineering Research Council of Canada (Discovery Grant Program)