Touch100k:

A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation

1 Beijing Jiaotong University   2 WeChat AI, Tencent Inc.   3 Beijing University of Posts and Telecommunications

Abstract

Touch holds a pivotal position in enhancing the perceptual and interactive capabilities of both humans and robots. Despite its significance, current tactile research mainly focuses on the visual and tactile modalities, overlooking the language domain. Motivated by this gap, we construct Touch100k, a paired touch-language-vision dataset at the scale of 100k, featuring tactile sensation descriptions at multiple granularities: sentence-level natural expressions with rich semantics, including contextual and dynamic relationships, and phrase-level descriptions capturing the key features of tactile sensations. Based on the dataset, we propose a pre-training method, Touch-Language-Vision Representation Learning through Curriculum Linking (TLV-Link for short), inspired by the concept of curriculum learning. TLV-Link aims to learn a tactile representation for the GelSight sensor and capture the relationships among the tactile, language, and visual modalities. We evaluate our model across two task categories, namely material property identification and robot grasping prediction, focusing on tactile representation and zero-shot touch understanding. The experimental results demonstrate that TLV-Link achieves significant advancements, establishing new state-of-the-art performance in touch-centric multimodal representation learning. Additionally, the results validate the efficacy of the constructed Touch100k dataset.
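
To make the pairing described above concrete, the following is a minimal, hypothetical sketch of how a single Touch100k sample could be represented in Python. The field names (`tactile_image_path`, `visual_image_path`, `sentence_description`, `phrase_descriptions`) and the example contents are illustrative assumptions, not the released data format; they only mirror the combination of a GelSight tactile reading, a paired visual observation, and the two granularities of text described in the abstract.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record layout for one Touch100k sample.
# Field names and contents are illustrative assumptions, not the official schema.
@dataclass
class Touch100kSample:
    tactile_image_path: str          # GelSight tactile reading (image file)
    visual_image_path: str           # paired RGB view of the touched surface
    sentence_description: str        # sentence-level natural-language description
    phrase_descriptions: List[str]   # phrase-level key tactile features

# Example instance (contents invented purely for illustration)
sample = Touch100kSample(
    tactile_image_path="tactile/000001.png",
    visual_image_path="vision/000001.png",
    sentence_description=(
        "The surface feels smooth and slightly cool, with a faint ridged "
        "texture that becomes noticeable under light pressure."
    ),
    phrase_descriptions=["smooth", "slightly cool", "faint ridges"],
)

print(sample.phrase_descriptions)
```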

Credit: The design of this project page is based on the Nerfies project page.