{"id":26984,"date":"2025-07-09T18:12:08","date_gmt":"2025-07-09T12:27:08","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=26984"},"modified":"2025-07-09T18:12:10","modified_gmt":"2025-07-09T12:27:10","slug":"study-could-lead-to-llms-that-are-better-at-complex-reasoning","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/study-could-lead-to-llms-that-are-better-at-complex-reasoning\/","title":{"rendered":"Study could lead to LLMs that are better at complex reasoning"},"content":{"rendered":"\n<p><strong><em>Researchers developed a way to make large language models more adaptable to challenging tasks like strategic planning or process optimization.&nbsp;<\/em><\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img data-dominant-color=\"7f878a\" data-has-transparency=\"false\" style=\"--dominant-color: #7f878a;\" loading=\"lazy\" decoding=\"async\" width=\"900\" height=\"600\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0.webp\" alt=\"\" class=\"wp-image-26985 not-transparent\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0.webp 900w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-675x450.webp 675w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-768x512.webp 768w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-150x100.webp 150w\" \/><figcaption class=\"wp-element-caption\"><em><sup>MIT researchers have shown how strategically applying a method known as test-time training with task-specific examples can boost the accuracy of an LLM more than sixfold. 
Credit: Jose-Luis Olivares, MIT; Stock<\/sup><\/em><\/figcaption><\/figure>\n\n\n<div class=\"wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">Adam Zewe<\/p><\/div><\/div>\n\n\n<p>CAMBRIDGE, MA \u2013 For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.<\/p>\n\n\n\n<p>While an accounting firm\u2019s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.<\/p>\n\n\n\n<p>To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model\u2019s performance on unfamiliar, difficult problems.<\/p>\n\n\n\n<p>They show that test-time training, a method that involves temporarily updating some of a model\u2019s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.<\/p>\n\n\n\n<p>Their work could improve a model\u2019s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.<\/p>\n\n\n\n<p>\u201cGenuine learning\u2014what we did here with test-time training\u2014is something these models can\u2019t do on their own after they are shipped. They can\u2019t gain new skills or get better at a task. 
But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,\u201d says Ekin Aky\u00fcrek, PhD \u201925, lead author of the study.<\/p>\n\n\n\n<p>Aky\u00fcrek is joined on the&nbsp;<a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbJQfXi-2BgjtsRXhSuJl6mKAgGVxIPXcZdtZpu3SJHn4k2q0Or_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZrfus4iwyTx-2B3xYDFfp6jme-2FigZt7DgSsaOL9KLZULBJM-2FwCfZ4SFbN2Z4ocYhJzgQTAij0Ji9DVb19ie1XRXQn4Dy-2FfGzTpK-2BRPQs098syPQIbaTmMUHB6bWDa5g9ECpbCdH4ZaZWd-2BAxXyok5gV-2FTGDTsK70u6W8oiXXhV6FjuASPAUkLlEya-2F49ATVd8635ANlGGs54TyOzxLNlpNuIxHyWp47-2Bl3dugRhiQfE6fwVYM-2FXf13W56ObcRsrkUIXp0Qh5MSJOxjYTVVOcf40QQ-3D-3D\" rel=\"noreferrer noopener\" target=\"_blank\">paper<\/a>&nbsp;by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.<\/p>\n\n\n\n<p><strong>Tackling hard domains<\/strong><\/p>\n\n\n\n<p>LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model\u2019s outputs.<\/p>\n\n\n\n<p>But in-context learning doesn\u2019t always work for problems that require logic and reasoning.<\/p>\n\n\n\n<p>The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. 
Test-time training involves updating some model parameters\u2014the internal variables it uses to make predictions\u2014using a small amount of new data specific to the task at hand.<\/p>\n\n\n\n<p>The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.<\/p>\n\n\n\n<p>\u201cWe find that test-time training is a much stronger form of learning.&nbsp;While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly&nbsp;in challenging domains,\u201d Damani says.<\/p>\n\n\n\n<p>In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.<\/p>\n\n\n\n<p>To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on the outputs of this new dataset leads to the best performance.<\/p>\n\n\n\n<p>In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.<\/p>\n\n\n\n<p>\u201cThis is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,\u201d Aky\u00fcrek says.<\/p>\n\n\n\n<p><strong>Developing new skills<\/strong><\/p>\n\n\n\n<p>Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. 
The updates to the model are only temporary, and the model reverts to its original form after making a prediction.<\/p>\n\n\n\n<p>A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Aky\u00fcrek adds.<\/p>\n\n\n\n<p>\u201cWe wouldn\u2019t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,\u201d he says.<\/p>\n\n\n\n<p>The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.<\/p>\n\n\n\n<p>Tasks that involved structured patterns or those that used completely unfamiliar types of data showed the largest performance improvements.<\/p>\n\n\n\n<p>\u201cFor simpler tasks, in-context learning might be OK. 
But updating the parameters themselves might develop a new skill in the model,\u201d Damani says.<\/p>\n\n\n\n<p>In the future, the researchers want to use these insights toward the development of models that continually learn.<\/p>\n\n\n\n<p>The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning and then implement the best test-time training strategy without the need for human intervention.<\/p>\n\n\n\n<p>This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers developed a way to make large language models more adaptable to challenging tasks like strategic planning or process optimization.\u00a0<\/p>\n","protected":false},"author":2,"featured_media":26985,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[163],"tags":[],"class_list":["post-26984","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0.webp",900,600,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-200x200.webp",200,200,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-675x450.webp",675,450,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-768x512.webp",750,500,true],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0.webp",750,500,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0.webp",900,600,false],"2048x2048":["https:\/\/www.revoscience.
com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0.webp",900,600,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0.webp",900,600,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-870x570.webp",870,570,true],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-600x600.webp",600,600,true],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-600x600.webp",600,600,true],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-760x490.webp",760,490,true],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-550x360.webp",550,360,true],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-95x65.webp",95,65,true],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-640x600.webp",640,600,true],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-96x96.webp",96,96,true],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/07\/MIT-fewshot-01-press_0-150x100.webp",150,100,true]},"author_info":{"info":["Adam Zewe"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/techbiz\/ai\/\" rel=\"category 
tag\">AI<\/a>","tag_info":"AI","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/26984","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=26984"}],"version-history":[{"count":1,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/26984\/revisions"}],"predecessor-version":[{"id":26986,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/26984\/revisions\/26986"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/26985"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=26984"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=26984"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=26984"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}