Get Your Hands on These Tactile Sensors


In the push to develop robotic systems that can sense and interact with their environment, huge investments have been made in computer vision. We have seen the fruits of these investments in applications ranging from self-driving cars to industrial automation. While these efforts have been very successful, optical sensors are not the ideal solution for every use case. Object manipulation tasks, for example, commonly require tactile information to handle objects accurately and safely. You might imagine a hybrid approach in which computer vision locates an object and directs a robot to the correct position; from there, tactile sensors within a robotic hand report how fragile or robust the object is and inform a plan for carrying out the robot's intentions.

Compared with computer vision, much less attention has been devoted to developing tactile sensors, leaving them generally less sophisticated than their optical counterparts. This has hindered the development of robots capable of building a high-resolution understanding of their surroundings by integrating data from multiple types of sensors. In an effort to begin addressing the deficiencies in present tactile sensing technology, a team of engineers from ETH Zürich has developed a device they call SmartHand: a hardware-software embedded system created to collect and process high-resolution tactile information from a hand-shaped multi-sensor array in real time.

The SmartHand device uses a low-cost resistive tactile sensor grid, based on a conductive polymer composite, that is glued to a glove. An inertial measurement unit (IMU) is attached to the back of the glove to provide additional movement information. Data from the 1,024 tactile sensors (arranged in a 32 by 32 grid) and the IMU is fed over a series of wires into an STM32F769NI Discovery board attached to the wrist. This board contains an Arm Cortex-M7 core running at 216 MHz, with 2 MB of flash memory and 532 kB of RAM.

To demonstrate SmartHand, the researchers set out to detect what type of object the hand was holding. To do this, they built a convolutional neural network, based on the ResNet-18 architecture, and trained it to map sensor data to a set of sixteen everyday objects. The training data was collected with the physical device itself: sampling at 100 frames per second (13.7 times faster than previous work), the team generated a tactile dataset of 340,000 frames.

In validating the neural network, the team found that the model requires an order of magnitude less memory and 15.6 times fewer compute operations than current devices. This was achieved by keeping the network as compact as possible without sacrificing prediction accuracy. Speaking of predictions, the model achieved a top-1 classification accuracy of 98.86% in recognizing objects. And by keeping all processing at the edge, inference time was held to a very reasonable 100 milliseconds.

The researchers noted that, due to inherent properties of the materials composing the tactile sensor grid, there will be some degradation with repeated use. Early indications suggest the degradation may plateau, in which case the current design could simply be recalibrated after an initial break-in period without any other changes. The team is currently evaluating whether this is the case, or whether sensor degradation continues beyond the apparent plateau, which would require further design changes before real-world use of the device is possible.

The team sees the techniques behind SmartHand being used in future robotic and prosthetic hand applications. With a bit more effort, this work may bring us closer to a world in which robots do not seem quite so robotic.

