Study for a Tactile Device for Visually Impaired People
The objective of this work is to study the conversion of depth information acquired from a real scene into a tactile representation, through the use of a haptic display. The intent is to obtain useful knowledge to support the creation of tools that assist visually impaired subjects during navigation tasks, especially when the path has a significant number of large obstacles along its way.
There are three key areas involved here: the process of depth map acquisition, which is deeply related to computer vision algorithms and techniques; the assembly of a haptic display and the understanding of the human sense of touch; and the study of related technologies that aid the locomotion of the visually impaired.
As a proof of concept, this work addresses the construction of a prototype that acquires scene depth information and sends it, in real time, to a haptic display worn on the torso. The system was assembled using standard parts only. A Kinect device is responsible for the depth acquisition; the depth map is read by a PC and then transmitted to an Arduino microcontroller, which drives a haptic display made of a seven-by-five (7x5) matrix of vibrating motors (tactors), using PWM (Pulse Width Modulation). The tactors themselves are industry-standard ERM (Eccentric Rotating Mass) motors of the kind found in cellphones, where they produce the “vibracall” effect.