{"id":10738,"date":"2016-12-02T04:53:38","date_gmt":"2016-12-02T04:53:38","guid":{"rendered":"http:\/\/revoscience.com\/en\/?p=10738"},"modified":"2016-12-02T04:53:38","modified_gmt":"2016-12-02T04:53:38","slug":"brain-recognizes-faces","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/brain-recognizes-faces\/","title":{"rendered":"How the brain recognizes faces"},"content":{"rendered":"<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><em><strong>Machine-learning system spontaneously reproduces aspects of human neurology.<\/strong><\/em><\/span><\/p>\n<figure id=\"attachment_10739\" aria-describedby=\"caption-attachment-10739\" style=\"width: 639px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-10739\" src=\"http:\/\/revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg\" alt=\"Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines, has long thought that the brain must produce \u201cinvariant\u201d representations of faces and other objects, meaning representations that are indifferent to objects\u2019 orientation in space, their distance from the viewer, or their location in the visual field. 
Image: MIT News\" width=\"639\" height=\"426\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg 639w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0-300x200.jpg 300w\" sizes=\"auto, (max-width: 639px) 100vw, 639px\" \/><figcaption id=\"caption-attachment-10739\" class=\"wp-caption-text\">Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines, has long thought that the brain must produce \u201cinvariant\u201d representations of faces and other objects, meaning representations that are indifferent to objects\u2019 orientation in space, their distance from the viewer, or their location in the visual field.<br \/>Image: MIT News<\/figcaption><\/figure>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>CAMBRIDGE, Mass.<\/strong> &#8212;\u00a0MIT researchers and their colleagues have developed a new computational model of the human brain\u2019s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face\u2019s degree of rotation \u2014 say, 45 degrees from center \u2014 but not the direction \u2014 left or right.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">This property wasn\u2019t built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. 
The researchers consider this an indication that their system and the brain are doing something similar.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">\u201cThis is not a proof that we understand what\u2019s going on,\u201d says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the <a style=\"color: #000000;\" href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d8098%3c9-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=33207&amp;Action=Follow+Link\" target=\"_blank\" data-saferedirecturl=\"https:\/\/www.google.com\/url?hl=en&amp;q=http:\/\/mit.pr-optout.com\/Tracking.aspx?Data%3DHHL%253d8098%253c9-%253eLCE9%253b4%253b8%253f%2526SDG%253c90%253a.%26RE%3DMC%26RI%3D4334046%26Preview%3DFalse%26DistributionActionID%3D33207%26Action%3DFollow%2BLink&amp;source=gmail&amp;ust=1480739593287000&amp;usg=AFQjCNGiq_9CRmqisyxxvI6QcPSQkXMiPA\" rel=\"noopener\">Center for Brains, Minds, and Machines (CBMM)<\/a>, a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. \u201cModels are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. 
But I think it\u2019s strong evidence that we are on the right track.\u201d<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Indeed, the researchers\u2019 new paper includes a mathematical proof that the particular type of machine-learning system they use, which was intended to offer what Poggio calls a \u201cbiologically plausible\u201d model of the nervous system, will inevitably yield intermediate representations that are indifferent to angle of rotation.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Poggio, who is also a principal investigator at MIT\u2019s McGovern Institute for Brain Research, is the senior author on a paper describing the new work, which appeared today in the journal <em>Current Biology<\/em>. He\u2019s joined on the paper by several other members of both the CBMM and the McGovern Institute: first author Joel Leibo, a researcher at Google DeepMind, who earned his PhD in brain and cognitive sciences from MIT with Poggio as his advisor; Qianli Liao, an MIT graduate student in electrical engineering and computer science; Fabio Anselmi, a postdoc in the IIT@MIT Laboratory for Computational and Statistical Learning, a joint venture of MIT and the Italian Institute of Technology; and Winrich Freiwald, an associate professor at the Rockefeller University.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Emergent properties<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The new paper is \u201ca nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior,\u201d Poggio says. 
\u201cThat means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms.\u201d<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Poggio has long believed that the brain must produce \u201cinvariant\u201d representations of faces and other objects, meaning representations that are indifferent to objects\u2019 orientation in space, their distance from the viewer, or their location in the visual field. Magnetic resonance scans of human and monkey brains suggested as much, but in 2010, Freiwald published a study describing the neuroanatomy of macaque monkeys\u2019 face-recognition mechanism in much greater detail.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">Freiwald showed that information from the monkey\u2019s optic nerves passes through a series of brain locations, each of which is less sensitive to face orientation than the last. Neurons in the first region fire only in response to particular face orientations; neurons in the final region fire regardless of the face\u2019s orientation \u2014 an invariant representation.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">But neurons in an intermediate region appear to be \u201cmirror symmetric\u201d: That is, they\u2019re sensitive to the angle of face rotation without respect to direction. In the first region, one cluster of neurons will fire if a face is rotated 45 degrees to the left, and a different cluster will fire if it\u2019s rotated 45 degrees to the right. 
In the final region, the same cluster of neurons will fire whether the face is rotated 30 degrees, 45 degrees, 90 degrees, or anywhere in-between. But in the intermediate region, a particular cluster of neurons will fire if the face is rotated by 45 degrees in either direction, another if it\u2019s rotated 30 degrees, and so on.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">This is the behavior that the researchers\u2019 machine-learning system reproduced. \u201cIt was not a model that was trying to explain mirror symmetry,\u201d Poggio says. \u201cThis model was trying to explain invariance, and in the process, there is this other property that pops out.\u201d<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\"><strong>Neural training<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">The researchers\u2019 machine-learning system is a neural network, so called because it roughly approximates the architecture of the human brain. A neural network consists of very simple processing units, arranged into layers, that are densely connected to the processing units \u2014 or nodes \u2014 in the layers above and below. Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on. During training, the output of the top layer is correlated with some classification criterion \u2014 say, correctly determining whether a given image depicts a particular person.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">In earlier work, Poggio\u2019s group had trained neural networks to produce invariant representations by, essentially, memorizing a representative set of orientations for just a handful of faces, which Poggio calls \u201ctemplates.\u201d When the network was presented with a new face, it would measure its difference from these templates. 
That difference would be smallest for the templates whose orientations were the same as that of the new face, and the output of their associated nodes would end up dominating the information signal by the time it reached the top layer. The measured difference between the new face and the stored faces gives the new face a kind of identifying signature.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">In experiments, this approach produced invariant representations: A face\u2019s signature turned out to be roughly the same no matter its orientation. But the mechanism \u2014 memorizing templates \u2014 was not, Poggio says, biologically plausible.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">So instead, the new network uses a variation on Hebb\u2019s rule, which is often described in the neurological literature as \u201cneurons that fire together wire together.\u201d That means that during training, as the weights of the connections between nodes are being adjusted to produce more accurate outputs, nodes that react in concert to particular stimuli end up contributing more to the final output than nodes that react independently (or not at all).<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #000000;\">This approach, too, ended up yielding invariant representations. 
But the middle layers of the network also duplicated the mirror-symmetric responses of the intermediate visual-processing regions of the primate brain.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>MIT researchers and their colleagues have developed a new computational model of the human brain\u2019s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.<\/p>\n","protected":false},"author":6,"featured_media":10739,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[17,28],"tags":[],"class_list":["post-10738","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research","category-techbiz"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0-150x150.jpg",150,150,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"ultp_layout_portrai
t":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",540,360,false],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",95,63,false],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",639,426,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2016\/12\/MIT-Face-Representation_0.jpg",150,100,false]},"author_info":{"info":["Amrita Tuladhar"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category tag\">Research<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/techbiz\/\" rel=\"category 
tag\">Tech<\/a>","tag_info":"Tech","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/10738","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=10738"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/10738\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/10739"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=10738"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=10738"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=10738"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
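Tech">
The article's summary of Hebb's rule ("neurons that fire together wire together") can be illustrated in code. The sketch below is a hypothetical, minimal example and not the researchers' actual network: it uses Oja's rule, a standard normalized variant of Hebbian learning, to show how the correlated component of a unit's inputs comes to dominate its weights during training.

```python
# Minimal sketch of Hebbian learning ("neurons that fire together wire
# together"), using Oja's normalized variant so the weights stay bounded.
# This is an illustration only, NOT the model from the paper: a single
# linear unit sees 2-D inputs whose variance is concentrated along the
# (1, 1) direction, and its weight vector converges toward that direction.

def oja_step(w, x, lr=0.05):
    """One Hebbian update with Oja's normalization term."""
    y = w[0] * x[0] + w[1] * x[1]           # unit's response to the input
    return [w[0] + lr * y * (x[0] - y * w[0]),
            w[1] + lr * y * (x[1] - y * w[1])]

# Toy inputs: strong signal along (1, 1), weak noise along (1, -1).
data = []
for i in range(21):
    s = i / 10.0 - 1.0                      # signal coordinate in [-1, 1]
    n = 0.1 * ((i % 3) - 1)                 # small noise coordinate
    data.append([s + n, s - n])

w = [1.0, 0.0]                              # arbitrary initial weights
for _ in range(200):                        # repeated stimulus presentations
    for x in data:
        w = oja_step(w, x)

print(w)  # close to (0.707, 0.707): the dominant input direction
```

Plain Hebbian updates (`w += lr * y * x`) grow the weights without bound; Oja's subtractive term keeps the weight vector near unit length, which is why the final weights settle on the normalized dominant direction rather than diverging.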