{"id":14538,"date":"2018-02-23T09:37:51","date_gmt":"2018-02-23T09:37:51","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=14538"},"modified":"2020-05-27T06:09:00","modified_gmt":"2020-05-27T06:09:00","slug":"robo-picker-grasps-packs","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/robo-picker-grasps-packs\/","title":{"rendered":"Robo-picker grasps and packs"},"content":{"rendered":"<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong><em>New robotic system could lend a hand with warehouse sorting and other picking or clearing tasks.<\/em><\/strong><\/span><\/p>\n<figure id=\"attachment_14539\" aria-describedby=\"caption-attachment-14539\" style=\"width: 691px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-14539\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg\" alt=\"\" width=\"691\" height=\"465\" title=\"\"><figcaption id=\"caption-attachment-14539\" class=\"wp-caption-text\">The \u201cpick-and-place\u201d system consists of a standard industrial robotic arm that the researchers outfitted with a custom gripper and suction cup. They developed an \u201cobject-agnostic\u201d grasping algorithm that enables the robot to assess a bin of random objects and determine the best way to grip or suction onto an item amid the clutter, without having to know anything about the object before picking it up.<br \/>Image: Melanie Gonick\/MIT<\/figcaption><\/figure>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">CAMBRIDGE, MASS.&#8211;Unpacking groceries is a straightforward albeit tedious task: You reach into a bag, feel around for an item, and pull it out. 
A quick glance will tell you what the item is and where it should be stored.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Now engineers from MIT and Princeton University have developed a robotic system that may one day lend a hand with this household chore, as well as assist in other picking and sorting tasks, from organizing products in a warehouse to clearing debris from a disaster zone.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The team\u2019s \u201cpick-and-place\u201d system consists of a standard industrial robotic arm that the researchers outfitted with a custom gripper and suction cup. They developed an \u201cobject-agnostic\u201d grasping algorithm that enables the robot to assess a bin of random objects and determine the best way to grip or suction onto an item amid the clutter, without having to know anything about the object before picking it up.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Once it has successfully grasped an item, the robot lifts it out from the bin. A set of cameras then takes images of the object from various angles, and with the help of a new image-matching algorithm the robot can compare the images of the picked object with a library of other images to find the closest match. In this way, the robot identifies the object, then stows it away in a separate bin.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">In general, the robot follows a \u201cgrasp-first-then-recognize\u201d workflow, which turns out to be an effective sequence compared to other pick-and-place technologies.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cThis can be applied to warehouse sorting, but also may be used to pick things from your kitchen cabinet or clear debris after an accident. 
There are many situations where picking technologies could have an impact,\u201d says Alberto Rodriguez, the Walter Henry Gale Career Development Professor in Mechanical Engineering at MIT.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Rodriguez and his colleagues at MIT and Princeton will present a paper detailing their system at the IEEE International Conference on Robotics and Automation in May.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>Building a library of successes and failures<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">While pick-and-place technologies may have many uses, existing systems are typically designed to function only in tightly controlled environments.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Today, most industrial picking robots are designed for one specific, repetitive task, such as gripping a car part off an assembly line, always in the same, carefully calibrated orientation. However, Rodriguez is working to design more flexible, adaptable, and intelligent picking robots for unstructured settings such as retail warehouses, where a picker may consistently encounter and have to sort hundreds, if not thousands, of novel objects each day, often amid dense clutter.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The team\u2019s design is based on two general operations: picking \u2014 the act of successfully grasping an object, and perceiving \u2014 the ability to recognize and classify an object, once grasped. 
<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The researchers trained the robotic arm to pick novel objects out from a cluttered bin, using any one of four main grasping behaviors: suctioning onto an object, either vertically or from the side; gripping the object vertically like the claw in an arcade game; or, for objects that lie flush against a wall, gripping vertically, then using a flexible spatula to slide between the object and the wall.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Rodriguez and his team showed the robot images of bins cluttered with objects, captured from the robot\u2019s vantage point. They then showed the robot which objects were graspable, with which of the four main grasping behaviors, and which were not, marking each example as a success or failure. They did this for hundreds of examples, and over time, the researchers built up a library of picking successes and failures. They then incorporated this library into a \u201cdeep neural network\u201d \u2014 a class of learning algorithms that enables the robot to match the current problem it faces with a successful outcome from the past, based on its library of successes and failures.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cWe developed a system where, just by looking at a tote filled with objects, the robot knew how to predict which ones were graspable or suctionable, and which configuration of these picking behaviors was likely to be successful,\u201d Rodriguez says. 
\u201cOnce it was in the gripper, the object was much easier to recognize, without all the clutter.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>From pixels to labels<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The researchers developed a perception system in a similar manner, enabling the robot to recognize and classify an object once it\u2019s been successfully grasped.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">To do so, they first assembled a library of product images taken from online sources such as retailer websites. They labeled each image with the correct identification \u2014 for instance, duct tape versus masking tape \u2014 and then developed another learning algorithm to relate the pixels in a given image to the correct label for a given object.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cWe\u2019re comparing things that, for humans, may be very easy to identify as the same, but in reality, as pixels, they could look significantly different,\u201d Rodriguez says. \u201cWe make sure that this algorithm gets it right for these training examples. 
Then the hope is that we\u2019ve given it enough training examples that, when we give it a new object, it will also predict the correct label.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Last July, the team packed up the 2-ton robot and shipped it to Japan, where, a month later, they reassembled it to participate in the\u00a0<a style=\"color: #000000\" href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d8261%3f9-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=46900&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Robotics Challenge<\/a>, a yearly competition sponsored by the online megaretailer to encourage innovations in warehouse technology. Rodriguez\u2019s team was one of 16 taking part in a competition to pick and stow objects from a cluttered bin.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">In the end, the team\u2019s robot had a 54 percent success rate in picking objects up using suction and a 75 percent success rate using grasping, and was able to recognize novel objects with 100 percent accuracy. 
The robot also stowed all 20 objects within the allotted time.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">For his work, Rodriguez was recently granted an Amazon Research Award and will be working with the company to further improve pick-and-place technology \u2014 foremost, its speed and reactivity.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cPicking in unstructured environments is not reliable unless you add some level of reactiveness,\u201d Rodriguez says. \u201cWhen humans pick, we sort of do small adjustments as we are picking. Figuring out how to do this more responsive picking, I think, is one of the key technologies we\u2019re interested in.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The team has already taken some steps toward this goal by adding tactile sensors to the robot\u2019s gripper and running the system through a new training regime.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cThe gripper now has tactile sensors, and we\u2019ve enabled a system where the robot spends all day continuously picking things from one place to another. It\u2019s capturing information about when it succeeds and fails, and how it feels to pick up, or fail to pick up, objects,\u201d Rodriguez says. \u201cHopefully it will use that information to start bringing that reactiveness to grasping.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">This research was sponsored in part by ABB Inc., MathWorks, and Amazon.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>New robotic system could lend a hand with warehouse sorting and other picking or clearing tasks. CAMBRIDGE, MASS.&#8211;Unpacking groceries is a straightforward albeit tedious task: You reach into a bag, feel around for an item, and pull it out. A quick glance will tell you what the item is and where it should be stored. 
[&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":14539,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[17],"tags":[],"class_list":["post-14538","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/
02\/MIT-Pick-And-Place-01.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Pick-And-Place-01.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category tag\">Research<\/a>","tag_info":"Research","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/14538","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=14538"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/14538\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/14539"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=14538"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=14538"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=14538"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}