{"id":14762,"date":"2018-03-22T07:15:00","date_gmt":"2018-03-22T07:15:00","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=14762"},"modified":"2020-05-27T06:04:40","modified_gmt":"2020-05-27T06:04:40","slug":"depth-sensing-imaging-system-can-peer-through-fog","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/depth-sensing-imaging-system-can-peer-through-fog\/","title":{"rendered":"Depth-sensing imaging system can peer through fog"},"content":{"rendered":"<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong><em>Computational photography could solve a problem that bedevils self-driving cars.<\/em><\/strong><\/span><\/p>\n<figure id=\"attachment_14763\" aria-describedby=\"caption-attachment-14763\" style=\"width: 639px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-14763\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg\" alt=\"\" width=\"639\" height=\"426\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg 639w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0-300x200.jpg 300w\" sizes=\"auto, (max-width: 639px) 100vw, 639px\" \/><figcaption id=\"caption-attachment-14763\" class=\"wp-caption-text\">Guy Satat, a graduate student in the MIT Media Lab, who led the new study.<br \/>Image: Melanie Gonick\/MIT<\/figcaption><\/figure>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">CAMBRIDGE, MASS.&#8211;MIT researchers have developed a system that can produce images of objects shrouded by fog so thick that human vision can\u2019t penetrate it. 
It can also gauge the objects\u2019 distance.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">An inability to handle misty driving conditions has been one of the chief obstacles to the development of autonomous vehicular navigation systems that use visible light, which are preferable to radar-based systems for their high resolution and ability to read road signs and track lane markers. So, the MIT system could be a crucial step toward self-driving cars.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The researchers tested the system using a small tank of water with the vibrating motor from a humidifier immersed in it. In fog so dense that human vision could penetrate only 36 centimeters, the system was able to resolve images of objects and gauge their depth at a range of 57 centimeters.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Fifty-seven centimeters is not a great distance, but the fog produced for the study is far denser than any that a human driver would have to contend with; in the real world, a typical fog might afford a visibility of about 30 to 50 meters. The vital point is that the system performed better than human vision, whereas most imaging systems perform far worse. A navigation system that was even as good as a human driver at driving in fog would be a huge breakthrough.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cI decided to take on the challenge of developing a system that can see through actual fog,\u201d says Guy Satat, a graduate student in the MIT Media Lab, who led the research. \u201cWe\u2019re dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. 
Other methods are not designed to cope with such realistic scenarios.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Satat and his colleagues describe their system in a paper they\u2019ll present at the International Conference on Computational Photography in May. Satat is first author on the paper, and he\u2019s joined by his thesis advisor, associate professor of media arts and sciences Ramesh Raskar, and by Matthew Tancik, who was a graduate student in electrical engineering and computer science when the work was done.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>Playing the odds<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Like many of the projects undertaken in Raskar\u2019s Camera Culture Group, the new system uses a time-of-flight camera, which fires ultrashort bursts of laser light into a scene and measures the time it takes their reflections to return.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">On a clear day, the light\u2019s return time faithfully indicates the distances of the objects that reflected it. But fog causes light to \u201cscatter,\u201d or bounce around in random ways. In foggy weather, most of the light that reaches the camera\u2019s sensor will have been reflected by airborne water droplets, not by the types of objects that autonomous vehicles need to avoid.
And even the light that does reflect from potential obstacles will arrive at different times, having been deflected by water droplets on both the way out and the way back.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The MIT system gets around this problem by using statistics. The patterns produced by fog-reflected light vary according to the fog\u2019s density: On average, light penetrates less deeply into a thick fog than it does into a light fog. But the MIT researchers were able to show that, no matter how thick the fog, the arrival times of the reflected light adhere to a statistical pattern known as a gamma distribution.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Gamma distributions are somewhat more complex than Gaussian distributions, the common distributions that yield the familiar bell curve: They can be asymmetrical, and they can take on a wider variety of shapes. But like Gaussian distributions, they\u2019re completely described by two parameters. The MIT system estimates the values of those parameters on the fly and uses the resulting distribution to filter fog reflection out of the light signal that reaches the time-of-flight camera\u2019s sensor.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Crucially, the system calculates a different gamma distribution for each of the 1,024 pixels in the sensor. That\u2019s why it\u2019s able to handle the variations in fog density that foiled earlier systems: It can handle circumstances in which each pixel sees a different type of fog.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>Signature shapes<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The camera counts the number of light particles, or photons, that reach it every 56 picoseconds, or trillionths of a second. 
The MIT system uses those raw counts to produce a histogram \u2014 essentially a bar graph, with the heights of the bars indicating the photon counts for each interval. Then it finds the gamma distribution that best fits the shape of the bar graph and simply subtracts the associated photon counts from the measured totals. What remain are slight spikes at the distances that correlate with physical obstacles.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cWhat\u2019s nice about this is that it\u2019s pretty simple,\u201d Satat says. \u201cIf you look at the computation and the method, it\u2019s surprisingly not complex. We also don\u2019t need any prior knowledge about the fog and its density, which helps it to work in a wide range of fog conditions.\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Satat tested the system using a fog chamber a meter long. Inside the chamber, he mounted regularly spaced distance markers, which provided a rough measure of visibility. He also placed a series of small objects \u2014 a wooden figurine, wooden blocks, silhouettes of letters \u2014 that the system was able to image even when they were indiscernible to the naked eye.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">There are different ways to measure visibility, however: Objects with different colors and textures are visible through fog at different distances. So, to assess the system\u2019s performance, he used a more rigorous metric called optical depth, which describes the amount of light that penetrates the fog.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Optical depth is independent of distance, so the performance of the system on fog that has a particular optical depth at a range of 1 meter should be a good predictor of its performance on fog that has the same optical depth at a range of 30 meters. 
In fact, the system may even fare better at longer distances, as the differences between photons\u2019 arrival times will be greater, which could make for more accurate histograms.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Computational photography could solve a problem that bedevils self-driving cars. CAMBRIDGE, MASS.&#8211;MIT researchers have developed a system that can produce images of objects shrouded by fog so thick that human vision can\u2019t penetrate it. It can also gauge the objects\u2019 distance. An inability to handle misty driving conditions has been one of the chief obstacles [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":14763,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[22,17,28],"tags":[],"class_list":["post-14762","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-other","category-research","category-techbiz"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en
\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/03\/MIT-Seeing-Through-Fog-01_0.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/other\/\" rel=\"category tag\">Other<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category tag\">Research<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/techbiz\/\" rel=\"category 
tag\">Tech<\/a>","tag_info":"Tech","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/14762","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=14762"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/14762\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/14763"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=14762"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=14762"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=14762"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}