{"id":13988,"date":"2017-12-24T07:32:46","date_gmt":"2017-12-24T07:32:46","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=13988"},"modified":"2020-05-27T06:19:35","modified_gmt":"2020-05-27T06:19:35","slug":"new-depth-sensors-sensitive-enough-self-driving-cars","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/new-depth-sensors-sensitive-enough-self-driving-cars\/","title":{"rendered":"New depth sensors could be sensitive enough for self-driving cars"},"content":{"rendered":"<p style=\"text-align: justify\"><span style=\"color: #000000\"><em><strong>Computational method improves the resolution of time-of-flight depth sensors 1,000-fold.<\/strong><\/em><\/span><\/p>\n<figure id=\"attachment_13989\" aria-describedby=\"caption-attachment-13989\" style=\"width: 639px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-13989\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg\" alt=\"\" width=\"639\" height=\"426\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg 639w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0-300x200.jpg 300w\" sizes=\"auto, (max-width: 639px) 100vw, 639px\" \/><figcaption id=\"caption-attachment-13989\" class=\"wp-caption-text\">Comparison of the cascaded GHz approach with Kinect-style approaches, visually represented on a key. From left to right: the original image, a Kinect-style approach, a GHz approach, and a stronger GHz approach.<br \/>Courtesy of the researchers<\/figcaption><\/figure>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">CAMBRIDGE, Mass. 
&#8212;\u00a0For the past 10 years, the Camera Culture group at MIT\u2019s Media Lab has been developing innovative imaging systems \u2014 from a camera that can\u00a0<a style=\"color: #000000\" href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d8242%3e2-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=44829&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener noreferrer\" data-saferedirecturl=\"https:\/\/www.google.com\/url?hl=en&amp;q=http:\/\/mit.pr-optout.com\/Tracking.aspx?Data%3DHHL%253d8242%253e2-%253eLCE9%253b4%253b8%253f%2526SDG%253c90%253a.%26RE%3DMC%26RI%3D4334046%26Preview%3DFalse%26DistributionActionID%3D44829%26Action%3DFollow%2BLink&amp;source=gmail&amp;ust=1514186695405000&amp;usg=AFQjCNEp01JWP9S-SrH8UvCW4ZDpT8VLng\">see around corners<\/a>\u00a0to one that can read text in\u00a0<a style=\"color: #000000\" href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d8242%3e2-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=44828&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener noreferrer\" data-saferedirecturl=\"https:\/\/www.google.com\/url?hl=en&amp;q=http:\/\/mit.pr-optout.com\/Tracking.aspx?Data%3DHHL%253d8242%253e2-%253eLCE9%253b4%253b8%253f%2526SDG%253c90%253a.%26RE%3DMC%26RI%3D4334046%26Preview%3DFalse%26DistributionActionID%3D44828%26Action%3DFollow%2BLink&amp;source=gmail&amp;ust=1514186695405000&amp;usg=AFQjCNG7Mg-Qeg3FKlIS1l6i_HqjqdYVgQ\">closed books<\/a>\u00a0\u2014 by using \u201ctime of flight,\u201d an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">In a\u00a0<a style=\"color: #000000\" 
href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d8242%3e2-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=44827&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener noreferrer\" data-saferedirecturl=\"https:\/\/www.google.com\/url?hl=en&amp;q=http:\/\/mit.pr-optout.com\/Tracking.aspx?Data%3DHHL%253d8242%253e2-%253eLCE9%253b4%253b8%253f%2526SDG%253c90%253a.%26RE%3DMC%26RI%3D4334046%26Preview%3DFalse%26DistributionActionID%3D44827%26Action%3DFollow%2BLink&amp;source=gmail&amp;ust=1514186695405000&amp;usg=AFQjCNEMpW4lu3GpZN5ntyX103OXgg0ZsA\">new paper<\/a>\u00a0appearing in\u00a0<em>IEEE Access<\/em>, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That\u2019s the type of resolution that could make self-driving cars practical.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That\u2019s good enough for the assisted-parking and collision-detection systems on today\u2019s cars.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences, and first author on the paper, explains, \u201cAs you increase the range, your resolution goes down exponentially. Let\u2019s say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you\u2019re back down to [a resolution of] a foot or even 5 feet. 
And if you make a mistake, it could lead to loss of life.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">At distances of 2 meters, the MIT researchers\u2019 system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Kadambi is joined on the paper by his thesis advisor, Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>Slow uptake<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the more ambiguous the measurement of how far it\u2019s traveled. So light-burst length is one of the factors that determines system resolution.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The other factor, however, is detection rate. Modulators, which turn a light beam off and on, can switch a billion times a second, but today\u2019s detectors can make only about 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">There is, however, another imaging technique that enables higher resolution, Kadambi says. 
That technique is interferometry, in which a light beam is split in two, and half of it is kept circulating locally while the other half \u2014 the \u201csample beam\u201d \u2014 is fired into a visual scene. The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams \u2014 the relative alignment of the troughs and crests of their electromagnetic waves \u2014 yields a very precise measure of the distance the sample beam has traveled.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">But interferometry requires careful synchronization of the two light beams. \u201cYou could never put interferometry on a car because it\u2019s so sensitive to vibrations,\u201d Kadambi says. \u201cWe\u2019re using some ideas from interferometry and some of the ideas from LIDAR, and we\u2019re really combining the two here.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>On the beat<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">They\u2019re also, he explains, using some ideas from acoustics. Anyone who\u2019s performed in a musical ensemble is familiar with the phenomenon of \u201cbeating.\u201d If two singers, say, are slightly out of tune \u2014 one producing a pitch at 440 hertz and the other at 437 hertz \u2014 the interplay of their voices will produce another tone, whose frequency is the difference between those of the notes they\u2019re singing \u2014 in this case, 3 hertz.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The same is true with light pulses. If a time-of-flight imaging system is firing light into a scene at the rate of a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result will be a light signal pulsing once a second \u2014 a rate easily detectable with a commodity video camera. 
And that slow \u201cbeat\u201d will contain all the phase information necessary to gauge distance.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">But rather than try to synchronize two high-frequency light signals \u2014 as interferometry systems must \u2014 Kadambi and Raskar simply modulate the returning signal, using the same technology that produced it in the first place. That is, they pulse the already pulsed light. The result is the same, but the approach is much more practical for automotive systems.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cThe fusion of the optical coherence and electronic coherence is very unique,\u201d Raskar says. \u201cWe\u2019re modulating the light at a few gigahertz, so it\u2019s like turning a flashlight on and off millions of times per second. But we\u2019re changing that electronically, not optically. The combination of the two is really where you get the power for this system.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>Through the fog<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Gigahertz optical systems are naturally better at compensating for fog than lower-frequency systems. Fog is problematic for time-of-flight systems because it scatters light: It deflects the returning light signals so that they arrive late and at odd angles. Trying to isolate a true signal in all that noise is too computationally challenging to do on the fly.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">With low-frequency systems, scattering causes a slight shift in phase, one that simply muddies the signal that reaches the detector. But with high-frequency systems, the phase shift is much larger relative to the frequency of the signal. 
Scattered light signals arriving over different paths will actually cancel each other out: The troughs of one wave will align with the crests of another. Theoretical analyses performed at the University of Wisconsin and Columbia University suggest that this cancellation will be widespread enough to make identifying a true signal much easier.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Computational method improves the resolution of time-of-flight depth sensors 1,000-fold. CAMBRIDGE, Mass. &#8212;\u00a0For the past 10 years, the Camera Culture group at MIT\u2019s Media Lab has been developing innovative imaging systems \u2014 from a camera that can\u00a0see around corners\u00a0to one that can read text in\u00a0closed books\u00a0\u2014 by using \u201ctime of flight,\u201d an approach that gauges [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":13989,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[43,17],"tags":[],"class_list":["post-13988","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-science","category-research"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"2048x2048":["https:\/\/www.revoscience.com
\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/12\/MIT-Micrometer-TOF_0.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/computer-science\/\" rel=\"category tag\">Computer Science<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category 
tag\">Research<\/a>","tag_info":"Research","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/13988","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=13988"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/13988\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/13989"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=13988"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=13988"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=13988"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}