Speech & Audio

Speech Communication through the Skin: Design of Learning Protocols and Initial Findings

April 21, 2018

Abstract

Evidence for successful communication through the sense of touch is provided by the natural methods of tactual communication that have been used for many years by persons with profound auditory and visual impairments. However, no wearable device currently supports effective speech communication without extensive learning. The present study reports the design and testing of learning protocols for a system that translates English phonemes into haptic stimulation patterns (haptic symbols). In one pilot study and two experiments, six participants learned and were tested on phonemes and words under different vocabulary sizes and learning protocols. We found that, with a distinctive set of haptic symbols, participants could learn phonemes and words in small chunks of time. Further, our results provide evidence for memory consolidation theory, in that recognition performance improved after a period of participant inactivity. Our findings pave the way for future work on improving the haptic symbols and on protocols that support learning a tactile speech communication system in hours, as opposed to the much longer time required to learn a new language.
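To make the phoneme-to-haptic-symbol translation concrete, the Python sketch below shows one way such a mapping could be represented. The phoneme labels, actuator indices, and signal parameters are illustrative assumptions for exposition only; they are not the symbol set or device parameters used in the study.

# Illustrative sketch: a lookup table from English phonemes (ARPABET-style
# labels) to hypothetical haptic symbols. All parameter values are invented
# for exposition; the study's actual symbols are defined by its hardware.
from dataclasses import dataclass

@dataclass(frozen=True)
class HapticSymbol:
    actuator: int        # index of the tactor to drive (hypothetical layout)
    frequency_hz: float  # vibration frequency of the stimulus
    duration_ms: int     # duration of the stimulation pattern

# A tiny subset of the ~39 English phonemes, mapped to made-up symbols.
PHONEME_TO_SYMBOL = {
    "B":  HapticSymbol(actuator=0, frequency_hz=60.0,  duration_ms=100),
    "AE": HapticSymbol(actuator=1, frequency_hz=300.0, duration_ms=200),
    "T":  HapticSymbol(actuator=2, frequency_hz=60.0,  duration_ms=100),
}

def encode_word(phonemes: list[str]) -> list[HapticSymbol]:
    """Translate a word's phoneme sequence into a sequence of haptic symbols."""
    return [PHONEME_TO_SYMBOL[p] for p in phonemes]

# Example: the word "bat" is the phoneme sequence B AE T.
print(encode_word(["B", "AE", "T"]))

In an actual system, each symbol would drive a specific tactor with a waveform designed to be distinguishable from the others; the sketch only captures the lookup-table structure of the encoding.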

Download the Paper

Related Publications

November 19, 2020

Speech & Audio

Generating Fact Checking Briefs

Angela Fan, Aleksandra Piktus, Antoine Bordes, Fabio Petroni, Guillaume Wenzek, Marzieh Saeidi, Sebastian Riedel, Andreas Vlachos

November 09, 2020

Speech & Audio

Multilingual AMR-to-Text Generation

Angela Fan

October 26, 2020

Speech & Audio

Deep Multilingual Transformer with Latent Depth

Xian Li, Asa Cooper Stickland, Xiang Kong, Yuqing Tang

October 25, 2020

Speech & Audio

Hide and Speak: Towards Deep Neural Networks for Speech Steganography

Yossef Mordechay Adi, Bhiksha Raj, Felix Kreuk, Joseph Keshet, Rita Singh

December 11, 2019

Speech & Audio

Computer Vision

Hyper-Graph-Network Decoders for Block Codes

Eliya Nachmani, Lior Wolf

April 30, 2018

NLP

Speech & Audio

Identifying Analogies Across Domains

Yedid Hoshen, Lior Wolf

April 30, 2018

Speech & Audio

VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop

Yaniv Taigman, Lior Wolf, Adam Polyak, Eliya Nachmani

July 11, 2018

Speech & Audio

Fitting New Speakers Based on a Short Untranscribed Sample

Eliya Nachmani, Adam Polyak, Yaniv Taigman, Lior Wolf
