{"id":12441,"date":"2017-06-01T06:31:30","date_gmt":"2017-06-01T06:31:30","guid":{"rendered":"http:\/\/revoscience.com\/en\/?p=12441"},"modified":"2017-06-01T06:31:30","modified_gmt":"2017-06-01T06:31:30","slug":"wearable-system-helps-visually-impaired-users-navigate","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/wearable-system-helps-visually-impaired-users-navigate\/","title":{"rendered":"Wearable system helps visually impaired users navigate"},"content":{"rendered":"<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><em><strong>Device provides information from a 3-D camera, via vibrating motors and a Braille interface.<\/strong><\/em><\/span><\/p>\n<figure id=\"attachment_12442\" aria-describedby=\"caption-attachment-12442\" style=\"width: 639px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-12442\" src=\"http:\/\/revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg\" alt=\"\" width=\"639\" height=\"426\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg 639w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0-300x200.jpg 300w\" sizes=\"auto, (max-width: 639px) 100vw, 639px\" \/><figcaption id=\"caption-attachment-12442\" class=\"wp-caption-text\">New algorithms power a prototype system for helping visually impaired users avoid obstacles and identify objects.<br \/>Courtesy of the researchers<\/figcaption><\/figure>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">CAMBRIDGE, Mass. 
&#8212;\u00a0Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it\u2019s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can\u2019t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Researchers from MIT\u2019s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that uses a 3-D camera, a belt with separately controllable vibrational motors distributed around it, and an electronically reconfigurable Braille interface to give visually impaired users more information about their environments.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The system could be used in conjunction with or as an alternative to a cane. 
In a <a style=\"color: #000000;\" href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d816080-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=37408&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener noreferrer\">paper<\/a> they\u2019re presenting this week at the International Conference on Robotics and Automation, the researchers describe the system and a series of usability studies they conducted with visually impaired volunteers.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">\u201cWe did a couple of different tests with blind users,\u201d says Robert Katzschmann, a graduate student in mechanical engineering at MIT and one of the paper\u2019s two first authors. \u201cHaving something that didn\u2019t infringe on their other senses was important. So we didn&#8217;t want to have audio; we didn\u2019t want to have something around the head, vibrations on the neck \u2014 all of those things, we tried them out, but none of them were accepted. 
We found that the one area of the body that is the least used for other senses is around your abdomen.\u201d<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Katzschmann is joined on the paper by his advisor Daniela Rus, an Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; his fellow first author Hsueh-Cheng Wang, who was a postdoc at MIT when the work was done and is now an assistant professor of electrical and computer engineering at National Chiao Tung University in Taiwan; Santani Teng, a postdoc in CSAIL; Brandon Araki, a graduate student in mechanical engineering; and Laura Giarr\u00e9, a professor of electrical engineering at the University of Modena and Reggio Emilia in Italy.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Parsing the world<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The researchers\u2019 system consists of a 3-D camera worn in a pouch hung around the neck; a processing unit that runs the team\u2019s proprietary algorithms; the sensor belt, which has five vibrating motors evenly spaced around its forward half; and the reconfigurable Braille interface, which is worn at the user\u2019s side.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The key to the system is an algorithm for quickly identifying surfaces and their orientations from the 3-D-camera data. The researchers experimented with three different types of 3-D cameras, which used three different techniques to gauge depth but all produced relatively low-resolution images \u2014 640 pixels by 480 pixels \u2014 with both color and depth measurements for each pixel.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The algorithm first groups the pixels into clusters of three. Because the pixels have associated location data, each cluster determines a plane. 
If the orientations of the planes defined by five nearby clusters are within 10 degrees of each other, the system concludes that it has found a surface. It doesn\u2019t need to determine the extent of the surface or what type of object it\u2019s the surface of; it simply registers an obstacle at that location and begins to buzz the associated motor if the wearer gets within 2 meters of it.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Chair identification is similar but a little more stringent. The system needs to complete three distinct surface identifications, in the same general area, rather than just one; this ensures that the chair is unoccupied. The surfaces need to be roughly parallel to the ground, and they have to fall within a prescribed range of heights.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Tactile data<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The belt motors can vary the frequency, intensity, and duration of their vibrations, as well as the intervals between them, to send different types of tactile signals to the user. For instance, an increase in frequency and intensity generally indicates that the wearer is approaching an obstacle in the direction indicated by that particular motor. But when the system is in chair-finding mode, for example, a double pulse indicates the direction in which a chair with a vacant seat can be found.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The Braille interface consists of two rows of five reconfigurable Braille pads. Symbols displayed on the pads describe the objects in the user\u2019s environment \u2014 for instance, a \u201ct\u201d for table or a \u201cc\u201d for chair. The symbol\u2019s position in the row indicates the direction in which it can be found; the column it appears in indicates its distance. 
A user adept at Braille should find that the signals from the Braille interface and the belt-mounted motors coincide.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">In tests, the chair-finding system reduced subjects\u2019 contacts with objects other than the chairs they sought by 80 percent, and the navigation system reduced the number of cane collisions with people loitering around a hallway by 86 percent.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Device provides information from a 3-D camera, via vibrating motors and a Braille interface. CAMBRIDGE, Mass. &#8212;\u00a0Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it\u2019s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":12442,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[43,17],"tags":[],"class_list":["post-12441","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-science","category-research"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",6
39,426,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/06\/MIT-Blind-Navigation_0.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/computer-science\/\" rel=\"category tag\">Computer Science<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category 
tag\">Research<\/a>","tag_info":"Research","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/12441","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=12441"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/12441\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/12442"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=12441"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=12441"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=12441"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}