{"id":10430,"date":"2016-11-08T06:24:55","date_gmt":"2016-11-08T06:24:55","guid":{"rendered":"http:\/\/revoscience.com\/en\/?p=10430"},"modified":"2016-11-08T06:24:55","modified_gmt":"2016-11-08T06:24:55","slug":"faster-programs-easier-programming","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/faster-programs-easier-programming\/","title":{"rendered":"Faster programs, easier programming"},"content":{"rendered":"<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><em><strong style=\"color: #222222;\">New system lets nonexperts optimize programs that run on multiprocessor chips.<\/strong><\/em><\/span><\/p>\n<figure id=\"attachment_10431\" aria-describedby=\"caption-attachment-10431\" style=\"width: 639px\" class=\"wp-caption alignnone\"><a href=\"http:\/\/revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-10431\" src=\"http:\/\/revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg\" alt=\"A system developed by researchers at MIT and Stony Brook University should make it easier for researchers to solve complex computational problems using dynamic programming optimized for multicore chips \u2014 without the expertise that such programming typically requires. 
Image: MIT News (figures courtesy of the researchers)\" width=\"639\" height=\"426\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg 639w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0-300x200.jpg 300w\" sizes=\"auto, (max-width: 639px) 100vw, 639px\" \/><\/a><figcaption id=\"caption-attachment-10431\" class=\"wp-caption-text\">A system developed by researchers at MIT and Stony Brook University should make it easier for researchers to solve complex computational problems using dynamic programming optimized for multicore chips \u2014 without the expertise that such programming typically requires.<br \/>Image: MIT News (figures courtesy of the researchers)<\/figcaption><\/figure>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>CAMBRIDGE, Mass<\/strong>. &#8212;\u00a0Dynamic programming is a technique that can yield relatively efficient solutions to computational problems in economics, genomic analysis, and other fields. But adapting it to computer chips with multiple \u201ccores,\u201d or processing units, requires a level of programming expertise that few economists and biologists have.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Researchers from MIT\u2019s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University aim to change that, with a new system that allows users to describe what they want their programs to do in very general terms. It then automatically produces versions of those programs that are optimized to run on multicore chips. 
It also guarantees that the new versions will yield exactly the same results that the single-core versions would, albeit much faster.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">In experiments, the researchers used the system to \u201cparallelize\u201d several algorithms that used dynamic programming, splitting them up so that they would run on multicore chips. The resulting programs were between three and 11 times as fast as those produced by earlier techniques for automatic parallelization, and they were generally as efficient as those that were hand-parallelized by computer scientists.<\/span><\/p>\n<p style=\"text-align: justify;\">[pullquote]Dynamic programming offers exponential speedups on a certain class of problems because it stores and reuses the results of computations, rather than recomputing them every time they\u2019re required.[\/pullquote]<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The researchers\u00a0<a style=\"color: #1155cc;\" href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d809083-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=32806&amp;Action=Follow+Link\" target=\"_blank\" data-saferedirecturl=\"https:\/\/www.google.com\/url?hl=en&amp;q=http:\/\/mit.pr-optout.com\/Tracking.aspx?Data%3DHHL%253d809083-%253eLCE9%253b4%253b8%253f%2526SDG%253c90%253a.%26RE%3DMC%26RI%3D4334046%26Preview%3DFalse%26DistributionActionID%3D32806%26Action%3DFollow%2BLink&amp;source=gmail&amp;ust=1478672226323000&amp;usg=AFQjCNFN__QIBowuARi9sCS3ScpOKGFrPw\" rel=\"noopener\"><span style=\"color: #000000;\">presented their new system<\/span><\/a>\u00a0last week at the Association for Computing Machinery\u2019s conference on Systems, Programming, Languages and Applications: Software for Humanity.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Dynamic programming offers exponential speedups on a certain class of 
problems because it stores and reuses the results of computations, rather than recomputing them every time they\u2019re required.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">\u201cBut you need more memory, because you store the results of intermediate computations,\u201d says Shachar Itzhaky, first author on the new paper and a postdoc in the group of Armando Solar-Lezama, an associate professor of electrical engineering and computer science at MIT. \u201cWhen you come to implement it, you realize that you don&#8217;t get as much speedup as you thought you would, because the memory is slow. When you store and fetch, of course, it\u2019s still faster than redoing the computation, but it\u2019s not as fast as it could have been.\u201d<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Outsourcing complexity<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Computer scientists avoid this problem by reordering computations so that those requiring a particular stored value are executed in sequence, minimizing the number of times that the value has to be recalled from memory. That\u2019s relatively easy to do with a single-core computer, but with multicore computers, when multiple cores are sharing data stored at multiple locations, memory management becomes much more complex. A hand-optimized, parallel version of a dynamic-programming algorithm is typically 10 times as long as the single-core version, and the individual lines of code are more complex, to boot.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The CSAIL researchers\u2019 new system \u2014 dubbed Bellmania, after Richard Bellman, the applied mathematician who pioneered dynamic programming \u2014 adopts a parallelization strategy called recursive divide-and-conquer. 
Suppose that the task of a parallel algorithm is to perform a sequence of computations on a grid of numbers, known as a matrix. Its first step might be to divide the grid into four parts, each to be processed separately.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">But then it might divide each of those four parts into four parts, and each of those into another four parts, and so on. Because this approach \u2014 recursion \u2014 involves breaking a problem into smaller subproblems, it naturally lends itself to parallelization.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Joining Itzhaky on the new paper are Solar-Lezama; Charles Leiserson, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science; Rohit Singh and Kuat Yessenov, who were both MIT graduate students in electrical engineering and computer science when the work was done; Yongquan Lu, an MIT undergraduate who participated in the project through MIT\u2019s Undergraduate Research Opportunities Program; and Rezaul Chowdhury, an assistant professor of computer science at Stony Brook, who was formerly a research affiliate in Leiserson\u2019s group.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Leiserson\u2019s group specializes in divide-and-conquer parallelization techniques; Solar-Lezama\u2019s specializes in program synthesis, or automatically generating code from high-level specifications. With Bellmania, the user simply has to describe the first step of the process \u2014 the division of the matrix and the procedures to be applied to the resulting segments. 
Bellmania then determines how to continue subdividing the problem so as to use memory efficiently.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Rapid search<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">At each level of recursion \u2014 with each successively smaller subdivision of the matrix \u2014 a program generated by Bellmania will typically perform some operation on some segment of the matrix and farm the rest out to subroutines, which can be performed in parallel. Each of those subroutines, in turn, will perform some operation on some segment of the data and farm the rest out to further subroutines, and so on.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Bellmania determines how much data should be processed at each level and which subroutines should handle the rest. \u201cThe goal is to arrange the memory accesses such that when you read a cell [of the matrix], you do as much computation as you can with it, so that you will not have to read it again later,\u201d Itzhaky says.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Finding the optimal division of tasks requires canvassing a wide range of possibilities. Solar-Lezama\u2019s group has developed a suite of tools to make that type of search more efficient; even so, Bellmania takes about 15 minutes to parallelize a typical dynamic-programming algorithm. That\u2019s still much faster than a human programmer could perform the same task, however. And the result is guaranteed to be correct; hand-optimized code is so complex that it\u2019s easy for errors to creep in.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dynamic programming is a technique that can yield relatively efficient solutions to computational problems in economics, genomic analysis, and other fields. 
But adapting it to computer chips with multiple \u201ccores,\u201d or processing units, requires a level of programming expertise that few economists and biologists have.<\/p>\n","protected":false},"author":6,"featured_media":10431,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[43,17],"tags":[],"class_list":["post-10430","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-science","category-research"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",600,400,false],"newspaper-x-single-post":["
https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/11\/MIT-Laymans-Parellel_0.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/computer-science\/\" rel=\"category tag\">Computer Science<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category 
tag\">Research<\/a>","tag_info":"Research","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/10430","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=10430"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/10430\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/10431"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=10430"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=10430"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=10430"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}