{"id":11859,"date":"2017-03-31T06:16:28","date_gmt":"2017-03-31T06:16:28","guid":{"rendered":"http:\/\/revoscience.com\/en\/?p=11859"},"modified":"2017-03-31T06:16:28","modified_gmt":"2017-03-31T06:16:28","slug":"faster-single-pixel-camera","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/faster-single-pixel-camera\/","title":{"rendered":"A faster single-pixel camera"},"content":{"rendered":"<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><em><strong>New technique greatly reduces the number of exposures necessary for \u201clensless imaging.\u201d<\/strong><\/em><\/span><\/p>\n<figure id=\"attachment_11860\" aria-describedby=\"caption-attachment-11860\" style=\"width: 639px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-11860\" src=\"http:\/\/revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg\" alt=\"\" width=\"639\" height=\"426\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg 639w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0-300x200.jpg 300w\" sizes=\"auto, (max-width: 639px) 100vw, 639px\" \/><figcaption id=\"caption-attachment-11860\" class=\"wp-caption-text\">Researchers from the MIT Media Lab developed a new technique that makes image acquisition using compressed sensing 50 times as efficient. In the case of the single-pixel camera, it could get the number of exposures down from thousands to dozens. Examples of this compressive ultrafast imaging technique are shown in the bottom rows.<br \/>Courtesy of the researchers<\/figcaption><\/figure>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">CAMBRIDGE, Mass. &#8212;\u00a0Compressed sensing is an exciting new computational technique for extracting large amounts of information from a signal. 
In one high-profile demonstration, for instance, researchers at Rice University built a camera that could produce 2-D images using only a single light sensor rather than the millions of light sensors found in a commodity camera.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">But using compressed sensing for image acquisition is inefficient: That \u201csingle-pixel camera\u201d needed thousands of exposures to produce a reasonably clear image. Reporting their results in the journal<\/span> <em><a href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d814%2f%409-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=35701&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener\">IEEE Transactions on Computational Imaging<\/a><\/em><span style=\"color: #000000;\">, researchers from the MIT Media Lab now describe a new technique that makes image acquisition using compressed sensing 50 times as efficient. In the case of the single-pixel camera, it could get the number of exposures down from thousands to dozens.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">One intriguing aspect of compressed-sensing imaging systems is that, unlike conventional cameras, they don\u2019t require lenses. That could make them useful in harsh environments or in applications that use wavelengths of light outside the visible spectrum. 
Getting rid of the lens opens new prospects for the design of imaging systems.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">&#8220;Formerly, imaging required a lens, and the lens would map pixels in space to sensors in an array, with everything precisely structured and engineered,&#8221; says Guy Satat, a graduate student at the Media Lab and first author on the new paper. \u00a0&#8220;With computational imaging, we began to ask: Is a lens necessary?\u00a0 Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is.\u00a0 The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient.&#8221; \u00a0<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Recursive applications<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">One of Satat\u2019s coauthors on the new paper is his thesis advisor, associate professor of media arts and sciences Ramesh Raskar. Like many projects from Raskar\u2019s group, the new compressed-sensing technique depends on time-of-flight imaging, in which a short burst of light is projected into a scene, and ultrafast sensors measure how long the light takes to reflect back.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The technique uses time-of-flight imaging, but somewhat circularly, one of its potential applications is improving the performance of time-of-flight cameras. 
It could thus have implications for a number of other projects from Raskar\u2019s group, such as a camera that can see<\/span> <a href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d814%2f%409-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=35700&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener\">around corners<\/a> <span style=\"color: #000000;\">and<\/span> <a href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d814%2f%409-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=35699&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noopener\">visible-light<\/a> <span style=\"color: #000000;\">imaging systems for medical diagnosis and vehicular navigation.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Many prototype systems from Raskar\u2019s Camera Culture group at the Media Lab have used time-of-flight cameras called streak cameras, which are expensive and difficult to use: They capture only one row of image pixels at a time. 
But the past few years have seen the advent of commercial time-of-flight cameras called SPADs, for single-photon avalanche diodes.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Though not nearly as fast as streak cameras, SPADs are still fast enough for many time-of-flight applications, and they can capture a full 2-D image in a single exposure. Furthermore, their sensors are built using manufacturing techniques common in the computer chip industry, so they should be cost-effective to mass produce.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">With SPADs, the electronics required to drive each sensor pixel take up so much space that the pixels end up far apart from each other on the sensor chip. In a conventional camera, this limits the resolution. But with compressed sensing, it actually increases it.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Getting some distance<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The reason the single-pixel camera can make do with one light sensor is that the light that strikes it is patterned. One way to pattern light is to put a filter, kind of like a randomized black-and-white checkerboard, in front of the flash illuminating the scene. Another way is to bounce the returning light off of an array of tiny micromirrors, some of which are aimed at the light sensor and some of which aren\u2019t.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The sensor makes only a single measurement \u2014 the cumulative intensity of the incoming light. 
But if it repeats the measurement enough times, and if the light has a different pattern each time, software can deduce the intensities of the light reflected from individual points in the scene.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The single-pixel camera was a media-friendly demonstration, but in fact, compressed sensing works better the more pixels the sensor has. And the farther apart the pixels are, the less redundancy there is in the measurements they make, much the way you see more of the visual scene before you if you take two steps to your right rather than one. And, of course, the more measurements the sensor performs, the higher the resolution of the reconstructed image.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Economies of scale<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Time-of-flight imaging essentially turns one measurement \u2014 with one light pattern \u2014 into dozens of measurements, separated by trillionths of seconds. Moreover, each measurement corresponds with only a subset of pixels in the final image \u2014 those depicting objects at the same distance. That means there\u2019s less information to decode in each measurement.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">In their paper, Satat, Raskar, and Matthew Tancik, an MIT graduate student in electrical engineering and computer science, present a theoretical analysis of compressed sensing that uses time-of-flight information. 
Their analysis shows how efficiently the technique can extract information about a visual scene, at different resolutions and with different numbers of sensors and distances between them.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">They also describe a procedure for computing light patterns that minimizes the number of exposures.\u00a0 And, using synthetic data, they compare the performance of their reconstruction algorithm to that of existing compressed-sensing algorithms. But in ongoing work, they are developing a prototype of the system so that they can test their algorithm on real data.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>New technique greatly reduces the number of exposures necessary for \u201clensless imaging.\u201d CAMBRIDGE, Mass. &#8212;\u00a0Compressed sensing is an exciting new computational technique for extracting large amounts of information from a signal. In one high-profile demonstration, for instance, researchers at Rice University built a camera that could produce 2-D images using only a single light sensor 
[&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":11860,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[22,17,28],"tags":[],"class_list":["post-11859","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-other","category-research","category-techbiz"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revosc
ience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2017\/03\/MIT-Lenseless-Image_0.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/other\/\" rel=\"category tag\">Other<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category tag\">Research<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/techbiz\/\" rel=\"category tag\">Tech<\/a>","tag_info":"Tech","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/11859","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=11859"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/11859\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/11860"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=11859"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/
wp\/v2\/categories?post=11859"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=11859"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}