{"id":25459,"date":"2024-11-28T19:30:57","date_gmt":"2024-11-28T13:45:57","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=25459"},"modified":"2024-11-28T19:31:00","modified_gmt":"2024-11-28T13:46:00","slug":"new-ai-tool-generates-realistic-satellite-images-of-future-flooding","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/new-ai-tool-generates-realistic-satellite-images-of-future-flooding\/","title":{"rendered":"New AI tool generates realistic satellite images of future flooding"},"content":{"rendered":"\n<p><em><strong>The method could help communities visualize and prepare for approaching storms.<\/strong><\/em><\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"675\" height=\"450\" sizes=\"auto, (max-width: 675px) 100vw, 675px\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-675x450.jpg\" alt=\"\" class=\"wp-image-25460\" style=\"width:839px;height:auto\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-675x450.jpg 675w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-600x400.jpg 600w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-768x512.jpg 768w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg 900w\" \/><\/figure>\n\n\n<div class=\"wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">Jennifer Chu<\/p><\/div><\/div>\n\n\n<p>CAMBRIDGE, Mass. &#8212; Visualizing the potential impacts of a hurricane on people\u2019s homes before it hits can help residents prepare and decide whether to evacuate.&nbsp;<\/p>\n\n\n\n<p>MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. 
The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird\u2019s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.&nbsp;<\/p>\n\n\n\n<p>As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. For comparison, they also generated images with AI alone, without a physics-based flood model.&nbsp;<\/p>\n\n\n\n<p>The team\u2019s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.&nbsp;<\/p>\n\n\n\n<p>The team\u2019s method is a proof-of-concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model.&nbsp;To apply the method to other regions and depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look elsewhere.<\/p>\n\n\n\n<p>\u201cThe idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,\u201d says Bj\u00f6rn L\u00fctjens, a postdoc in MIT\u2019s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT\u2019s Department of Aeronautics and Astronautics (AeroAstro). \u201cOne of the biggest challenges is encouraging people to evacuate when they are at risk. 
Maybe this could be another visualization to help increase that readiness.\u201d<\/p>\n\n\n\n<p>To illustrate the potential of the new method, which they have dubbed the \u201cEarth Intelligence Engine,\u201d the team has made it&nbsp;<a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbFcT96jVsrB6u9-2BpPGNj3OWvCjWJD-2FyBuCuIWCOT52va6QZl_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZEzQ2efjEv9GvjpyfTwAi14BJIkoVS4nwVc4-2FeveK60rQNctnb-2BOwF7IsFnwZpFBwXoiVriTWXnt7yQwPqAeiUvvng3Y7MIrNxMG8esPehjHgIx-2FhIILDaEbmk0iewfXhqlFjCNMi-2FYrJZegmj-2FX33Fn2fwc4pui4JABpPW5vuy0LLs0hWVBI9oS98AIjyjwZ1ybUM32YnrSUut62HDOZWHin3e0vXt-2F5nhwhhpp7r0rYQXuvlFGi-2BCkoa7d4NpbCgu9sdMkQ8OFsiLZSDz7Uww-3D-3D\" target=\"_blank\" rel=\"noreferrer noopener\">available<\/a>&nbsp;as an online resource for others to try.<\/p>\n\n\n\n<p>The researchers report their results in the journal&nbsp;<a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbLKFbSET-2F02Oi2R4DjtKXdmW0x3x0i-2BHjJRGIsXqpjLO4LnGVZQfHeuy8HALNQxdSQ-3D-3DQhEi_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZEzQ2efjEv9GvjpyfTwAi14BJIkoVS4nwVc4-2FeveK60rQNctnb-2BOwF7IsFnwZpFBwXoiVriTWXnt7yQwPqAeiUvvng3Y7MIrNxMG8esPehjEYAcx-2FlhxWIypdB8uSVY59kFe1BppoQZpdFI5QkIoi0Dr2AkZ7OCPacgM3GD-2BjUwrauFlBGPtww8KRpZqATqCl5XL00r7ewDMkz8h5ujkjW5aGL1TcAa6j6nkzl5Dugh16zqG-2BFcMAYdVYuXzdxWHg3BPzmpE6g507zeSkcJpbdA-3D-3D\" target=\"_blank\" rel=\"noreferrer noopener\"><em>IEEE Transactions on Geoscience and Remote Sensing<\/em><\/a>.&nbsp;The study\u2019s MIT co-authors include Brandon Leschchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.<\/p>\n\n\n\n<p><strong>Generative adversarial 
images<\/strong><\/p>\n\n\n\n<p>The new study is an extension of the team\u2019s efforts to apply generative AI tools to visualize future climate scenarios.&nbsp;<\/p>\n\n\n\n<p>\u201cProviding a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,\u201d says Newman, the study\u2019s senior author. \u201cPeople relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.\u201d<\/p>\n\n\n\n<p>For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or \u201cadversarial,\u201d neural networks. The first \u201cgenerator\u201d network is trained on pairs of real data, such as satellite images before and after a hurricane. The second \u201cdiscriminator\u201d network is then trained to distinguish real satellite imagery from imagery synthesized by the first network.<\/p>\n\n\n\n<p>Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce \u201challucinations,\u201d or factually incorrect features that appear in an otherwise realistic image.&nbsp;<\/p>\n\n\n\n<p>\u201cHallucinations can mislead viewers,\u201d says L\u00fctjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. 
\u201cWe were thinking: How can we use these generative AI models in a climate-impact setting,&nbsp;where having trusted data sources is so important?\u201d&nbsp;<\/p>\n\n\n\n<p><strong>Flood hallucinations<\/strong><\/p>\n\n\n\n<p>In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm\u2019s way.<\/p>\n\n\n\n<p>Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region.&nbsp;This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded&nbsp;map of flood elevations over a particular region.&nbsp;<\/p>\n\n\n\n<p>\u201cThe question is: Can visualizations of satellite imagery add another level to this that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?\u201d L\u00fctjens says.&nbsp;<\/p>\n\n\n\n<p>The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey. 
When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).&nbsp;<\/p>\n\n\n\n<p>To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane\u2019s trajectory, storm surge, and flood patterns.&nbsp;With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.<\/p>\n\n\n\n<p>\u201cWe show a tangible way to combine machine learning with physics for a use case that\u2019s risk-sensitive, which requires us to analyze the complexity of Earth\u2019s systems and project future actions and possible scenarios to keep people out of harm\u2019s way,\u201d Newman says. 
\u201cWe can\u2019t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.\u201d&nbsp;<\/p>\n\n\n\n<p>The research was supported, in part, by the MIT Portugal Program,&nbsp;the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The method could help communities visualize and prepare for approaching storms.<\/p>\n","protected":false},"author":2,"featured_media":25460,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[15,17],"tags":[],"class_list":["post-25459","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-environment","category-research"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg",900,600,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-200x200.jpg",200,200,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-600x400.jpg",600,400,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-768x512.jpg",750,500,true],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-675x450.jpg",675,450,true],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg",900,600,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg",900,600,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg",900,600,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-870x
570.jpg",870,570,true],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-600x600.jpg",600,600,true],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-600x600.jpg",600,600,true],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-760x490.jpg",760,490,true],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-550x360.jpg",550,360,true],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0-95x65.jpg",95,65,true],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg",640,427,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/11\/MIT-AI-Flood-press_0.jpg",150,100,false]},"author_info":{"info":["Jennifer Chu"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/environment\/\" rel=\"category tag\">Environment<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category 
tag\">Research<\/a>","tag_info":"Research","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/25459","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=25459"}],"version-history":[{"count":1,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/25459\/revisions"}],"predecessor-version":[{"id":25461,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/25459\/revisions\/25461"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/25460"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=25459"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=25459"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=25459"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}