{"id":12481,"date":"2017-06-06T08:13:20","date_gmt":"2017-06-06T08:13:20","guid":{"rendered":"http:\/\/revoscience.com\/en\/?p=12481"},"modified":"2017-06-06T08:13:20","modified_gmt":"2017-06-06T08:13:20","slug":"giving-robots-sense-touch","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/giving-robots-sense-touch\/","title":{"rendered":"Giving robots a sense of touch"},"content":{"rendered":"<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><em><strong>GelSight technology lets robots gauge objects\u2019 hardness and manipulate small tools.<\/strong><\/em><\/span><\/p>\n<figure id=\"attachment_12482\" aria-describedby=\"caption-attachment-12482\" style=\"width: 640px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-12482\" src=\"http:\/\/revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Gel-Sight-01_0.jpg\" alt=\"\" width=\"640\" height=\"426\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Gel-Sight-01_0.jpg 640w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Gel-Sight-01_0-300x200.jpg 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><figcaption id=\"caption-attachment-12482\" class=\"wp-caption-text\">A GelSight sensor attached to a robot\u2019s gripper enables the robot to determine precisely where it has grasped a small screwdriver, removing it from and inserting it back into a slot, even when the gripper screens the screwdriver from the robot\u2019s camera.<br \/>Photo: Robot Locomotion Group at MIT<\/figcaption><\/figure>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">CAMBRIDGE, Mass. &#8212;\u00a0Eight years ago, Ted Adelson\u2019s research group at MIT\u2019s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. 
The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

In one paper, Adelson's group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches — a crucial ability if household robots are to handle everyday objects.

In the other, Russ Tedrake's Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber — the "gel" of its name — one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape.

The metallic paint makes the object's surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights and a single camera.

"[The system] has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer ... can figure out the 3-D shape of what that thing is," explains Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.
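What Adelson describes is, at heart, photometric stereo: because the paint gives the surface a uniform reflectance, the brightness seen under each of the three colored lights reveals how each point of the surface tilts toward that light. The papers' actual reconstruction code is not shown in this article; the sketch below is a minimal illustration of the idea, and the light directions, the Lambertian reflectance model, and the crude depth integration are all assumptions made for the example.

```python
import numpy as np

# Minimal photometric-stereo sketch, not GelSight's actual code.
# Assumes each color channel of one image isolates one light, the painted
# surface is roughly Lambertian, and the light directions are calibrated.

# Hypothetical unit direction vectors of the three colored lights.
L = np.array([
    [0.8,  0.0, 0.6],   # red light
    [-0.4,  0.7, 0.6],  # green light
    [-0.4, -0.7, 0.6],  # blue light
])

def surface_normals(rgb):
    """Recover per-pixel surface normals from one RGB image.

    rgb: (H, W, 3) float array; channel c is the brightness under light c.
    Lambertian model: intensity = albedo * (normal . light_dir), so the
    three channels give a 3x3 linear system at every pixel.
    """
    h, w, _ = rgb.shape
    g = np.linalg.solve(L, rgb.reshape(-1, 3).T).T   # (H*W, 3), albedo * normal
    albedo = np.linalg.norm(g, axis=1, keepdims=True)
    n = g / np.clip(albedo, 1e-8, None)              # unit normals
    return n.reshape(h, w, 3)

def integrate_depth(normals):
    """Turn normals into a height map by cumulatively summing the surface
    gradients, a crude stand-in for a proper Poisson integrator."""
    zx = -normals[..., 0] / np.clip(normals[..., 2], 1e-8, None)
    zy = -normals[..., 1] / np.clip(normals[..., 2], 1e-8, None)
    return np.cumsum(zx, axis=1) + np.cumsum(zy, axis=0)
```

A real GelSight pipeline would calibrate a lookup table from observed color to surface normal and solve the integration properly; the sketch only shows the two stages, colors to normals and normals to depth.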
In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer, but with flat gripping surfaces rather than pointed tips.

Contact points

For an autonomous robot, gauging objects' softness or hardness is essential to deciding not only where and how hard to grasp them but how they will behave when moved, stacked, or laid on different surfaces. Tactile sensing could also aid robots in distinguishing objects that look similar.

In previous work, robots have attempted to assess objects' hardness by laying them on a flat surface and gently poking them to see how much they give. But this is not the chief way in which humans gauge hardness. Rather, our judgments seem to be based on the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area.

The MIT researchers adopted the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson's group, used confectionery molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness, which Yuan measured using a standard industrial scale.

Then she pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, essentially producing a short movie for each object. To standardize the data format and keep its size manageable, she extracted five frames from each movie, evenly spaced in time, which described the deformation of the object being pressed.

Finally, she fed the data to a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as input and produces hardness scores with very high accuracy. Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them according to hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.
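The article gives the data flow but not the network itself: five evenly spaced frames per press go in, one hardness score comes out. As an illustration only, a small PyTorch-style model matching that interface might look like the following; every layer size here is invented, and the architecture in Yuan's paper differs.

```python
import torch
import torch.nn as nn

def sample_five_frames(video):
    """Pick five evenly spaced frames from one press sequence, mirroring
    the paper's preprocessing. video: (T, C, H, W) tensor of GelSight images."""
    t = video.shape[0]
    idx = torch.linspace(0, t - 1, steps=5).long()
    return video[idx]  # (5, C, H, W)

class HardnessNet(nn.Module):
    """Illustrative regressor: a small CNN encodes each frame, and the five
    frame embeddings are pooled into a single hardness score. This mirrors
    only the interface described in the article, not the real architecture."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (N, 32) per frame
        )
        self.head = nn.Linear(32, 1)                 # scalar hardness score

    def forward(self, frames):                       # frames: (B, 5, 3, H, W)
        b, k, c, h, w = frames.shape
        feats = self.encoder(frames.view(b * k, c, h, w)).view(b, k, -1)
        return self.head(feats.mean(dim=1)).squeeze(-1)  # (B,)
```

Training would then be ordinary regression: minimize the squared error between the predicted scores and the hardness values Yuan measured on the industrial scale.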
Yuan is joined on the paper by her two thesis advisors, Adelson and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering; Chenzhuo Zhu, an undergraduate from Tsinghua University who visited Adelson's group last summer; and Andrew Owens, who did his PhD in electrical engineering and computer science at MIT and is now a postdoc at the University of California at Berkeley.

Obstructed views

The paper from the Robot Locomotion Group was born of the group's experience with the Defense Advanced Research Projects Agency's Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

Typically, an autonomous robot will use some kind of computer vision system to guide its manipulation of objects in its environment. Such systems can provide very reliable information about an object's location — until the robot picks the object up. Especially if the object is small, much of it will be occluded by the robot's gripper, making location estimation much harder. Thus, at exactly the point at which the robot needs to know the object's location precisely, its estimate becomes unreliable. This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill.

"You can see in our video for the DRC that we spend two or three minutes turning on the drill," says Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper. "It would be so much nicer if we had a live-updating, accurate estimate of where that drill was and where our hands were relative to it."

That's why the Robot Locomotion Group turned to GelSight.
Izatt and his co-authors — Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake's group — designed control algorithms that use a computer vision system to guide the robot's gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.

In general, the challenge with such an approach is reconciling the data produced by a vision system with data produced by a tactile sensor. But GelSight is itself camera-based, so its output is much easier to integrate with visual data than the output of other tactile sensors.

In Izatt's experiments, a robot with a GelSight-equipped gripper had to grasp a small screwdriver, remove it from a holster, and return it. Of course, the data from the GelSight sensor don't describe the whole screwdriver, just a small patch of it. But Izatt found that, as long as the vision system's estimate of the screwdriver's initial position was accurate to within a few centimeters, his algorithms could deduce which part of the screwdriver the GelSight sensor was touching and thus determine the screwdriver's position in the robot's hand.
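The article describes the estimator's job but not its internals: given a vision-based prior good to a few centimeters, work out which patch of the screwdriver the sensor is touching and, from that, the tool's position in the hand. A toy version of that matching step, assuming candidate contact patches have been precomputed from the tool's 3-D model (a made-up preprocessing step, not one taken from the paper), could be:

```python
import numpy as np

def refine_contact_location(gel_patch, model_patches, prior_idx, window=10):
    """Toy tactile localization, not Izatt et al.'s actual estimator.

    gel_patch:     (h, w) height map measured by the GelSight sensor.
    model_patches: list of (h, w) height maps, one per candidate contact
                   location sampled along the tool's 3-D model (hypothetical
                   precomputed templates).
    prior_idx:     coarse guess from the vision system of which location is
                   being touched, assumed accurate to within `window` samples.

    Scores only the candidates near the vision prior and returns the index
    of the best-matching contact location.
    """
    lo = max(0, prior_idx - window)
    hi = min(len(model_patches), prior_idx + window + 1)
    errors = [np.mean((gel_patch - model_patches[i]) ** 2)
              for i in range(lo, hi)]
    return lo + int(np.argmin(errors))
```

Izatt's actual algorithms estimate the screwdriver's position continuously as images stream in; the sketch only illustrates why a coarse vision prior plus one detailed tactile patch is enough to pin down which part of the tool sits in the gripper.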