Surface Recognition With a Tactile Finger Based on Automatic Features Transferred From Deep Learning
To date, numerous tactile sensors and algorithms have been developed to tackle various perception problems. However, considerable effort is still devoted to defining, extracting, and analyzing hand-crafted features in order to improve perception accuracy. To address this problem, in this article we designed a tactile finger containing four sensing elements (SEs) to perceive both dynamic and static stimuli, and we proposed a novel signal-processing pipeline. The pipeline consists of three stages: time-series signal conversion, an automatic deep feature extractor, and a shallow recognition model. When the tactile finger explored 16 surfaces on a robotic platform, the four-channel signals were converted and concatenated into a single time-frequency image via the continuous wavelet transform (CWT). A deep feature extraction network was then constructed from a pretrained deep learning (DL) model, ResNet101, to extract the required features, which act as high-level representations of the most discriminative components of the tactile images. Finally, these features were fed into a shallow machine learning (ML) model, an extreme learning machine (ELM), achieving an accuracy of up to 92.38%. In this manner, the powerful representation-learning capability of DL models is transferred directly to the new recognition model, while tedious hand-crafted feature extraction is avoided. In addition, several relevant factors, such as the layer depth, the DL model type, and the choice of shallow recognition model, are examined to reveal their influence on performance.
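To make the three pipeline stages concrete, the following Python sketch illustrates one plausible realization of the approach described above; it is not the authors' code. The abstract does not specify the wavelet, the image size, the feature tap point, or the ELM configuration, so the Morlet wavelet, the vertical concatenation of the four scalograms, the 224x224 resize, the global-average-pool layer of ResNet101 as the feature output, and all hyperparameters below are illustrative assumptions.

import numpy as np
import pywt
import torch
from torchvision import models

def signals_to_cwt_image(signals, scales=np.arange(1, 57)):
    # Convert a (4, T) array of SE signals into one stacked time-frequency
    # image: each channel is transformed with the CWT and the four
    # |coefficient| maps are concatenated vertically (an assumed layout).
    maps = []
    for ch in signals:
        coef, _ = pywt.cwt(ch, scales, 'morl')  # (len(scales), T) coefficients
        maps.append(np.abs(coef))
    img = np.concatenate(maps, axis=0)          # (4 * len(scales), T)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)  # normalize to [0, 1]
    return np.repeat(img[None], 3, axis=0).astype(np.float32)  # 3 channels

# Pretrained ResNet101 truncated before its classifier: the output of the
# global average pool serves as a 2048-dim automatic feature vector.
# (ImageNet mean/std normalization is omitted here for brevity.)
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def extract_features(img):
    x = torch.from_numpy(img).unsqueeze(0)                 # (1, 3, H, W)
    x = torch.nn.functional.interpolate(x, size=(224, 224))
    with torch.no_grad():
        return extractor(x).flatten(1).numpy()             # (1, 2048)

class ELM:
    # Single-hidden-layer extreme learning machine: random fixed input
    # weights, sigmoid activation, closed-form least-squares output weights.
    def __init__(self, n_hidden=1000, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y, n_classes=16):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        T = np.eye(n_classes)[y]                     # one-hot targets
        self.beta = np.linalg.pinv(self._h(X)) @ T   # Moore-Penrose solve
        return self

    def predict(self, X):
        return (self._h(X) @ self.beta).argmax(axis=1)

Under these assumptions, training reduces to extracting one 2048-dim feature vector per CWT image with the frozen ResNet101 backbone and solving a single pseudo-inverse for the ELM output weights, which is what allows the DL model's learning capability to be reused without any fine-tuning.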