{"id":19420,"date":"2020-11-23T21:17:54","date_gmt":"2020-11-23T15:32:54","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=19420"},"modified":"2020-11-23T21:19:35","modified_gmt":"2020-11-23T15:34:35","slug":"a-neural-network-learns-when-it-should-not-be-trusted","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/a-neural-network-learns-when-it-should-not-be-trusted\/","title":{"rendered":"A neural network learns when it should not be trusted"},"content":{"rendered":"\n<p><strong><em>A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.<\/em><\/strong><\/p>\n\n\n\n<p><strong>Daniel Ackerman<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img loading=\"lazy\" decoding=\"async\" sizes=\"auto, (max-width: 675px) 100vw, 675px\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-675x450.jpg\" alt=\"\" class=\"wp-image-19421\" width=\"972\" height=\"648\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-675x450.jpg 675w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-600x400.jpg 600w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-768x512.jpg 768w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-174x116.jpg 174w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg 900w\" \/><\/figure>\n\n\n\n<p>CAMBRIDGE, Mass. &#8212;&nbsp;Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they\u2019re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.<\/p>\n\n\n\n<p>They\u2019ve developed a quick way for a neural network to crunch data, and output not just a prediction but also the model\u2019s confidence level based on the quality of the available data. The advance might save lives, as deep learning is already being deployed in the real world today. A network\u2019s level of certainty can be the difference between an autonomous vehicle determining that \u201cit\u2019s all clear to proceed through the intersection\u201d and \u201cit\u2019s probably clear, so stop just in case.\u201d&nbsp;<\/p>\n\n\n\n<p>Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini\u2019s approach, dubbed \u201cdeep evidential regression,\u201d accelerates the process and could lead to safer outcomes. \u201cWe need the ability to not only have high-performance models, but also to understand when we cannot trust those models,\u201d says Amini, a PhD student in Professor Daniela Rus\u2019 group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).<\/p>\n\n\n\n<p>\u201cThis idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,\u201d says Rus.<\/p>\n\n\n\n<p>Amini will present the research at next month\u2019s NeurIPS conference, along with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.<\/p>\n\n\n\n<p><strong>Efficient uncertainty<\/strong><\/p>\n\n\n\n<p>After an&nbsp;<a href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d83%3c3%3b7-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=91604&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noreferrer noopener\">up-and-down history<\/a>, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human accuracy. And nowadays, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. \u201cWe\u2019ve had huge successes using deep learning,\u201d says Amini. \u201cNeural networks are really good at knowing the right answer 99 percent of the time.\u201d But 99 percent won\u2019t cut it when lives are on the line.<\/p>\n\n\n\n<p>\u201cOne thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong,\u201d says Amini. \u201cWe really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently.\u201d<\/p>\n\n\n\n<p>Neural networks can be massive, sometimes brimming with billions of parameters. So it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn\u2019t new. But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.<\/p>\n\n\n\n<p>The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with bulked up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model&#8217;s confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model\u2019s final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.<\/p>\n\n\n\n<p><strong>Confidence check<\/strong><\/p>\n\n\n\n<p>To put their approach to the test, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e. distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or to another vehicle, which is no simple task.<\/p>\n\n\n\n<p>Their network\u2019s performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. \u201cIt was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,\u201d Amini says.<\/p>\n\n\n\n<p>To stress-test their calibration, the team also showed that the network projected higher uncertainty for \u201cout-of-distribution\u201d data \u2014 completely new types of images never encountered during training. After they trained the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network\u2019s ability to flag when users should not place full trust in its decisions. In these cases, \u201cif this is a health care application, maybe we don\u2019t trust the diagnosis that the model is giving, and instead seek a second opinion,\u201d says Amini.<\/p>\n\n\n\n<p>The network even knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle \u2014 barely perceptible to the human eye \u2014 but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of&nbsp;<a href=\"http:\/\/mit.pr-optout.com\/Tracking.aspx?Data=HHL%3d83%3c3%3b7-%3eLCE9%3b4%3b8%3f%26SDG%3c90%3a.&amp;RE=MC&amp;RI=4334046&amp;Preview=False&amp;DistributionActionID=91603&amp;Action=Follow+Link\" target=\"_blank\" rel=\"noreferrer noopener\">deepfakes<\/a>.<\/p>\n\n\n\n<p>Deep evidential regression is \u201ca simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,\u201d says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. \u201cThis is done in a novel way that avoids some of the messy aspects of other approaches \u2014 &nbsp;e.g. sampling or ensembles \u2014 which makes it not only elegant but also computationally more efficient \u2014 a winning combination.\u201d<\/p>\n\n\n\n<p>Deep evidential regression could enhance safety in AI-assisted decision making. \u201cWe\u2019re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,\u201d says Amini. \u201cAny user of the method, whether it\u2019s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.\u201d He envisions the system not only quickly flagging uncertainty, but also using it to make more conservative decision making in risky scenarios like an autonomous vehicle approaching an intersection.<\/p>\n\n\n\n<p>\u201cAny field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness,\u201d he says.<\/p>\n\n\n\n<p>This work was supported, in part, by the National Science Foundation and Toyota Research Institute through the Toyota-CSAIL Joint Research Center.<\/p>\n\n\n\n<p>(Provided by MIT News office)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.<\/p>\n","protected":false},"author":2,"featured_media":19421,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[42,47],"tags":[],"class_list":["post-19420","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-feature","category-it"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",900,600,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-200x200.jpg",200,200,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-600x400.jpg",600,400,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-768x512.jpg",750,500,true],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-675x450.jpg",675,450,true],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",900,600,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",900,600,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",900,600,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",855,570,false],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",600,400,false],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",600,400,false],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-760x490.jpg",760,490,true],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-550x360.jpg",550,360,true],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press-95x65.jpg",95,65,true],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",640,427,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2020\/11\/Network-Confidence-01-Press.jpg",150,100,false]},"author_info":{"info":["RevoScience"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/feature\/\" rel=\"category tag\">Feature<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/it\/\" rel=\"category tag\">IT<\/a>","tag_info":"IT","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/19420","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=19420"}],"version-history":[{"count":0,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/19420\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/19421"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=19420"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=19420"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=19420"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}