{"id":35882,"date":"2026-02-23T21:48:27","date_gmt":"2026-02-23T16:03:27","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=35882"},"modified":"2026-02-23T21:51:04","modified_gmt":"2026-02-23T16:06:04","slug":"exposing-biases-moods-personalities-and-abstract-concepts-hidden-in-large-language-models","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/exposing-biases-moods-personalities-and-abstract-concepts-hidden-in-large-language-models\/","title":{"rendered":"Exposing biases, moods, personalities, and abstract concepts hidden in large language models"},"content":{"rendered":"\n<p><em><strong>A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance.<\/strong><\/em><\/p>\n\n\n<div class=\"wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">Jennifer Chu<\/p><\/div><\/div>\n\n\n<figure class=\"wp-block-image size-full\"><img data-dominant-color=\"543832\" data-has-transparency=\"false\" style=\"--dominant-color: #543832;\" loading=\"lazy\" decoding=\"async\" width=\"900\" height=\"600\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0.gif\" alt=\"\" class=\"wp-image-35883 not-transparent\" title=\"\"><figcaption class=\"wp-element-caption\"><sup><em>Credit: Christine Daniloff, MIT<\/em><\/sup><\/figcaption><\/figure>\n\n\n\n<p>CAMBRIDGE, MA &#8212; By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they\u2019re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. 
However, it\u2019s not obvious exactly how these models come to represent abstract concepts from the knowledge they contain.<\/p>\n\n\n\n<p>Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode for a concept of interest. What\u2019s more, the method can then manipulate, or \u201csteer,\u201d these connections to strengthen or weaken the concept in any answer a model is prompted to give.<\/p>\n\n\n\n<p>The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model\u2019s representations for personalities such as \u201csocial influencer\u201d and \u201cconspiracy theorist,\u201d and stances such as \u201cfear of marriage\u201d and \u201cfan of Boston.\u201d They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.<\/p>\n\n\n\n<p>In the case of the \u201cconspiracy theorist\u201d concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous \u201cBlue Marble\u201d image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.<\/p>\n\n\n\n<p>The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). 
Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs that could then be turned up or down to improve a model\u2019s safety or enhance its performance.<\/p>\n\n\n\n<p>\u201cWhat this really says about LLMs is that they have these concepts in them, but they\u2019re not all actively exposed,\u201d says Adityanarayanan \u201cAdit\u201d Radhakrishnan, assistant professor of mathematics at MIT. \u201cWith our method, there\u2019s ways to extract these different concepts and activate them in ways that prompting cannot give you answers to.\u201d<\/p>\n\n\n\n<p>The team published their findings today in a study appearing in the journal <a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbDsYwNn5A85XL3FXR6YQ5oOW6G-2BeSDaQNKEBH1RjSt9r1L7OpYxJ-2BzRELRL0YLVb0w-3D-3DI2nF_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZsnaUEFDx5wQjtvhu36anA2-2BX7PS59w6o4FGraRi43Hiisw048kmTPxzNtXKaT8inhmoVF2TZtQX7onlgDYKvpazMoQeNOsxrm-2BYlZYFd-2BIJBVGb-2BI3l4VmxRP5mhXVvvmEttbQRuOO77lzD5L-2BR-2BLAwpMUsOrTBVkFZLNMMwyIsordYErXpWef51X0omxNadlVb4kr-2BpK0C7s3otLAWWC-2FKHwsPh6weVtMjEWIP2VE6Uymp0gWjub6ZeuxyklVsaevjn4qJIYmdIoRm91Vm07w-3D-3D\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Science<\/em><\/a>. 
The study\u2019s co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adser\u00e0 of the University of Pennsylvania.<\/p>\n\n\n\n<p><strong>A fish in a black box<\/strong><\/p>\n\n\n\n<p>As use of OpenAI\u2019s ChatGPT, Google\u2019s Gemini, Anthropic\u2019s Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as \u201challucination\u201d and \u201cdeception.\u201d In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has \u201challucinated,\u201d or constructed erroneously as fact.<\/p>\n\n\n\n<p>To find out whether a concept such as \u201challucination\u201d is encoded in an LLM, scientists have often taken an approach of \u201cunsupervised learning\u201d \u2014 a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as \u201challucination.\u201d But to Radhakrishnan, such an approach can be too broad and computationally expensive.<\/p>\n\n\n\n<p>\u201cIt\u2019s like going fishing with a big net, trying to catch one species of fish. You\u2019re gonna get a lot of fish that you have to look through to find the right one,\u201d he says. \u201cInstead, we\u2019re going in with bait for the right species of fish.\u201d<\/p>\n\n\n\n<p>He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). 
An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks \u2014 a broad category of AI models that includes LLMs \u2014 implicitly use to learn features.<\/p>\n\n\n\n<p>Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well-understood.<\/p>\n\n\n\n<p>\u201cWe wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models,\u201d Radhakrishnan says.<\/p>\n\n\n\n<p><strong>Converging on a concept<\/strong><\/p>\n\n\n\n<p>The team\u2019s new approach identifies any concept of interest within an LLM and \u201csteers,\u201d or guides, a model\u2019s response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).<\/p>\n\n\n\n<p>The researchers then searched for representations of each concept in several of today\u2019s large language and vision models. 
They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.<\/p>\n\n\n\n<p>A standard large language model is, broadly, a <a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbO9-2BvfSNt10TDlykjxxOUgyUReMG9nqMKMl1uBNjAuJ9ODFOfWd4aTwSIqr1lHZmbm5uLG7swFJJwf0jinVGYEw-3DTZfb_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZsnaUEFDx5wQjtvhu36anA2-2BX7PS59w6o4FGraRi43Hiisw048kmTPxzNtXKaT8inhmoVF2TZtQX7onlgDYKvpazMoQeNOsxrm-2BYlZYFd-2BILdqMjK4ZUoZZ0hQJv9naKlvaC6mjfYZDKk3GxHlbV-2Bp5-2FLiSTfRLapOIu0-2FkX1g2ggN75-2FhokgK6zbrYoKoFXCVrcx-2BFbsLwouOeqrqfMKkE3xtwAIjwQNT-2Fdamhv1-2B8oIYwq-2FxvyO1s7aXL6lRi5hDQ8tcMz4AWfXT1WHxyUb9A-3D-3D\" target=\"_blank\" rel=\"noreferrer noopener\">neural network<\/a> that takes a natural language prompt, such as \u201cWhy is the sky blue?\u201d and divides the prompt into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model takes these vectors through a series of computational layers, creating matrices of many numbers that, throughout each layer, are used to identify other words that are most likely to be used to respond to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.<\/p>\n\n\n\n<p>The team\u2019s approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a specific concept. As an example, to see whether an LLM contains any representation of a \u201cconspiracy theorist,\u201d the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies, and 100 other prompts that are not. In this way, the algorithm would learn patterns associated with the conspiracy theorist concept. 
The researchers could then mathematically modulate the activity of the conspiracy theorist concept by perturbing the LLM\u2019s representations with these identified patterns.<\/p>\n\n\n\n<p>The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified the relevant representations and manipulated an LLM to give answers in the tone and perspective of a \u201cconspiracy theorist.\u201d They also identified and enhanced the concept of \u201canti-refusal,\u201d showing that a model that would normally be programmed to refuse certain prompts instead answered them, for instance giving instructions on how to rob a bank.<\/p>\n\n\n\n<p>Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of \u201cbrevity\u201d or \u201creasoning\u201d in any response an LLM generates. The team has made the method\u2019s underlying code publicly available.<\/p>\n\n\n\n<p>\u201cLLMs clearly have a lot of these abstract concepts stored within them, in some representation,\u201d Radhakrishnan says. \u201cThere are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks.\u201d<\/p>\n\n\n\n<p>This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>CAMBRIDGE, MA &#8212; By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they\u2019re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. 
<\/p>\n","protected":false},"author":2,"featured_media":35883,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[163,47],"tags":[],"class_list":["post-35882","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-it"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0.gif",900,600,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-200x200.gif",200,200,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-675x450.gif",675,450,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-768x512.gif",750,500,true],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0.gif",750,500,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0.gif",900,600,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0.gif",900,600,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0.gif",900,600,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-870x570.gif",870,570,true],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-600x600.gif",600,600,true],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-600x600.gif",600,600,true],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-760x490.gif",760,490,true],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM
-Bias-01_0-550x360.gif",550,360,true],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-95x65.gif",95,65,true],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-640x600.gif",640,600,true],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-96x96.gif",96,96,true],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2026\/02\/MIT-LLM-Bias-01_0-150x100.gif",150,100,true]},"author_info":{"info":["Jennifer Chu"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/techbiz\/ai\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/it\/\" rel=\"category tag\">IT<\/a>","tag_info":"IT","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/35882","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=35882"}],"version-history":[{"count":2,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/35882\/revisions"}],"predecessor-version":[{"id":35885,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/35882\/revisions\/35885"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/35883"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=35882"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=35882"}
,{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=35882"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}