{"id":19415,"date":"2020-11-16T14:59:28","date_gmt":"2020-11-16T09:14:28","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=19415"},"modified":"2020-11-16T15:03:03","modified_gmt":"2020-11-16T09:18:03","slug":"system-brings-deep-learning-to-internet-of-things-devices","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/system-brings-deep-learning-to-internet-of-things-devices\/","title":{"rendered":"System brings deep learning to \u201cinternet of things\u201d devices"},"content":{"rendered":"\n<p><strong>Daniel Ackerman<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-style-default\"><img loading=\"lazy\" decoding=\"async\" width=\"675\" height=\"450\" sizes=\"auto, (max-width: 675px) 100vw, 675px\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-675x450.jpg\" alt=\"\" class=\"wp-image-19416\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-675x450.jpg 675w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-600x400.jpg 600w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-768x512.jpg 768w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-174x116.jpg 174w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg 900w\" \/><\/figure>\n\n\n\n<p>CAMBRIDGE, Mass. (MIT News Office) &#8212;\u00a0Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat. 
MIT researchers have developed a system that could bring deep learning neural networks to new \u2014 and much smaller \u2014 places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the \u201cinternet of things\u201d (IoT).<\/p>\n\n\n\n<p>The system, called&nbsp;<a href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d83%3c2%3e3-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=91362&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noreferrer noopener\">MCUNet<\/a>, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.<\/p>\n\n\n\n<p>The research will be presented at next month\u2019s Conference on Neural Information Processing Systems. The lead author is Ji Lin, a PhD student in Song Han\u2019s lab in MIT\u2019s Department of Electrical Engineering and Computer Science. Co-authors include Han and Yujun Lin of MIT, Wei-Ming Chen of MIT and National Taiwan University, and John Cohn and Chuang Gan of the MIT-IBM Watson AI Lab.<\/p>\n\n\n\n<p><strong>The Internet of Things<\/strong><\/p>\n\n\n\n<p>The IoT was born in the early 1980s. Grad students at Carnegie Mellon University, including Mike Kazar \u201978, connected a Coca-Cola machine to the internet. The group\u2019s motivation was simple: laziness. They wanted to use their computers to confirm the machine was stocked before trekking from their office to make a purchase. It was the world\u2019s first internet-connected appliance. \u201cThis was pretty much treated as the punchline of a joke,\u201d says Kazar, now a Microsoft engineer. 
\u201cNo one expected billions of devices on the internet.\u201d<\/p>\n\n\n\n<p>Since that Coke machine, everyday objects have become increasingly networked into the growing IoT. That includes everything from wearable heart monitors to smart fridges that tell you when you\u2019re low on milk. IoT devices often run on microcontrollers \u2014 simple computer chips with no operating system, minimal processing power, and less than one thousandth of the memory of a typical smartphone. So pattern-recognition tasks like deep learning are difficult to run locally on IoT devices. For complex analysis, IoT-collected data is often sent to the cloud, making it vulnerable to hacking.<\/p>\n\n\n\n<p>\u201cHow do we deploy neural nets directly on these tiny devices? It\u2019s a new research area that\u2019s getting very hot,\u201d says Han. \u201cCompanies like Google and ARM are all working in this direction.\u201d Han is too.<\/p>\n\n\n\n<p>With MCUNet, Han\u2019s group codesigned two components needed for \u201ctiny deep learning\u201d \u2014 the operation of neural networks on microcontrollers. One component is TinyEngine, an inference engine that directs resource management, akin to an operating system. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet\u2019s other component: TinyNAS, a neural architecture search algorithm.<\/p>\n\n\n\n<p><strong>System-algorithm codesign<\/strong><\/p>\n\n\n\n<p>Designing a deep network for microcontrollers isn\u2019t easy. Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then they gradually find the one with high accuracy and low cost. \u201cIt can work pretty well for GPUs or smartphones,\u201d says Lin. 
\u201cBut it\u2019s been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.\u201d<\/p>\n\n\n\n<p>So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks. \u201cWe have a lot of microcontrollers that come with different power capacities and different memory sizes,\u201d says Lin. \u201cSo we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.\u201d The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller \u2014 with no unnecessary parameters. \u201cThen we deliver the final, efficient model to the microcontroller,\u201d says Lin.<\/p>\n\n\n\n<p>To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight \u2014 instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. \u201cIt doesn\u2019t have off-chip memory, and it doesn\u2019t have a disk,\u201d says Han. \u201cEverything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.\u201d Cue TinyEngine.<\/p>\n\n\n\n<p>The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS\u2019 customized neural network. Any deadweight code is discarded, which cuts down on compile-time. \u201cWe keep only what we need,\u201d says Han. \u201cAnd since we designed the neural network, we know exactly what we need. That\u2019s the advantage of system-algorithm codesign.\u201d In the group\u2019s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM. 
TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han\u2019s team put MCUNet to the test.<\/p>\n\n\n\n<p>MCUNet\u2019s first challenge was image classification. The researchers used the ImageNet database to train the system with labeled images, then to test its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images \u2014 the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate. \u201cEven a 1 percent improvement is considered significant,\u201d says Lin. \u201cSo this is a giant leap for microcontroller settings.\u201d<\/p>\n\n\n\n<p>The team found similar results in ImageNet tests of three other microcontrollers. And on both speed and accuracy, MCUNet beat the competition for audio and visual \u201cwake-word\u201d tasks, where a user initiates an interaction with a computer using vocal cues (think: \u201cHey, Siri\u201d) or simply by entering a room. The experiments highlight MCUNet\u2019s adaptability to numerous applications.<\/p>\n\n\n\n<p><strong>\u201cHuge potential\u201d<\/strong><\/p>\n\n\n\n<p>The promising test results give Han hope that MCUNet will become the new industry standard for microcontrollers. \u201cIt has huge potential,\u201d he says.<\/p>\n\n\n\n<p>The advance \u201cextends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers,\u201d says Kurt Keutzer, a computer scientist at the University of California at Berkeley, who was not involved in the work. He adds that MCUNet could \u201cbring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.\u201d<\/p>\n\n\n\n<p>MCUNet could also make IoT devices more secure. 
\u201cA key advantage is preserving privacy,\u201d says Han. \u201cYou don\u2019t need to transmit the data to the cloud.\u201d<\/p>\n\n\n\n<p>Analyzing data locally reduces the risk of personal information being stolen \u2014 including personal health data. Han envisions smart watches with MCUNet that don\u2019t just sense users\u2019 heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information. MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access.<\/p>\n\n\n\n<p>Plus, MCUNet\u2019s slim computing footprint translates into a slim carbon footprint. \u201cOur big dream is for green AI,\u201d says Han, adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars. MCUNet on a microcontroller would require a small fraction of that energy. \u201cOur end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data,\u201d says Han.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. 
Soon, deep learning could also check your vitals or set your thermostat.<\/p>\n","protected":false},"author":2,"featured_media":19416,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[47],"tags":[],"class_list":["post-19415","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-it"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",900,600,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-200x200.jpg",200,200,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-600x400.jpg",600,400,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-768x512.jpg",750,500,true],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-675x450.jpg",675,450,true],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",900,600,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",900,600,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",900,600,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",855,570,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-760x490.jpg",760,490,true],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-550x360.jpg",550,360,true],"newspaper-x-recent-post-list-image":["https:\/\/www
.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai-95x65.jpg",95,65,true],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",640,427,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Tiny-ai.jpg",150,100,false]},"author_info":{"info":["RevoScience"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/it\/\" rel=\"category tag\">IT<\/a>","tag_info":"IT","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/19415","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=19415"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/19415\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/19416"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=19415"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=19415"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=19415"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}