{"id":14435,"date":"2018-02-15T06:45:05","date_gmt":"2018-02-15T06:45:05","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=14435"},"modified":"2020-05-27T06:10:56","modified_gmt":"2020-05-27T06:10:56","slug":"neural-networks-everywhere","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/neural-networks-everywhere\/","title":{"rendered":"Neural networks everywhere"},"content":{"rendered":"<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong><em>New chip reduces neural networks\u2019 power consumption by up to 95 percent, making them practical for battery-powered devices.<\/em><\/strong><\/span><\/p>\n<figure id=\"attachment_14436\" aria-describedby=\"caption-attachment-14436\" style=\"width: 654px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-14436\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg\" alt=\"\" width=\"654\" height=\"440\" title=\"\"><figcaption id=\"caption-attachment-14436\" class=\"wp-caption-text\">MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 93 to 96 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.<br \/>Image: Chelsea Turner\/MIT<\/figcaption><\/figure>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">CAMBRIDGE, MASS.&#8211;Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">But neural nets are large, and their computations are energy intensive, so they\u2019re not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cThe general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,\u201d says Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip\u2019s development.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">\u201cSince these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don\u2019t need to transfer this data back and forth?\u201d<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Biswas and his thesis advisor, Anantha Chandrakasan, dean of MIT\u2019s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, describe the new chip in a paper that Biswas is presenting this week at the International Solid State Circuits Conference.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>Back to analog<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Neural networks are typically arranged into layers. A single processing node in one layer of the network will generally receive data from several nodes in the layer below and pass data to several nodes in the layer above. Each connection between nodes has its own \u201cweight,\u201d which indicates how large a role the output of one node will play in the computation performed by the next. Training the network is a matter of setting those weights.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">A node receiving data from multiple nodes in the layer below will multiply each input by the weight of the corresponding connection and sum the results. That operation \u2014 the summation of multiplications \u2014 is the definition of a dot product. If the dot product exceeds some threshold value, the node will transmit it to nodes in the next layer, over connections with their own weights.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">A neural net is an abstraction: The \u201cnodes\u201d are just weights stored in a computer\u2019s memory. Calculating a dot product usually involves fetching a weight from memory, fetching the associated data item, multiplying the two, storing the result somewhere, and then repeating the operation for every input to a node. Given that a neural net will have thousands or even millions of nodes, that\u2019s a lot of data to move around.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">But that sequence of operations is just a digital approximation of what happens in the brain, where signals traveling along multiple neurons meet at a \u201csynapse,\u201d or a gap between bundles of neurons. The neurons\u2019 firing rates and the electrochemical signals that cross the synapse correspond to the data values and weights. The MIT researchers\u2019 new chip improves efficiency by replicating the brain more faithfully.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">In the chip, a node\u2019s input values are converted into electrical voltages and then multiplied by the appropriate weights. Only the combined voltages are converted back into a digital representation and stored for further processing.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">The chip can thus calculate dot products for multiple nodes \u2014 16 at a time, in the prototype \u2014 in a single step, instead of shuttling between a processor and memory for every computation.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\"><strong>All or nothing<\/strong><\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">One of the keys to the system is that all the weights are either 1 or -1. That means that they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only two weights should lose little accuracy \u2014 somewhere between 1 and 2 percent.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"color: #000000\">Biswas and Chandrakasan\u2019s research bears that prediction out. In experiments, they ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. Their chip\u2019s results were generally within 2 to 3 percent of the conventional network\u2019s.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>New chip reduces neural networks\u2019 power consumption by up to 95 percent, making them practical for battery-powered devices. CAMBRIDGE, MASS.&#8211;Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":14436,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[22,17,28],"tags":[],"class_list":["post-14435","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-other","category-research","category-techbiz"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2018\/02\/MIT-Neural-Network-Chip_0.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/other\/\" rel=\"category tag\">Other<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category tag\">Research<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/techbiz\/\" rel=\"category tag\">Tech<\/a>","tag_info":"Tech","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/14435","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=14435"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/14435\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/14436"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=14435"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=14435"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=14435"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}