{"id":25160,"date":"2024-08-31T14:05:09","date_gmt":"2024-08-31T08:20:09","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=25160"},"modified":"2024-08-31T14:05:13","modified_gmt":"2024-08-31T08:20:13","slug":"study-transparency-is-often-lacking-in-datasets-used-to-train-large-language-models","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/study-transparency-is-often-lacking-in-datasets-used-to-train-large-language-models\/","title":{"rendered":"Study: Transparency is often lacking in datasets used to train large language models"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"675\" height=\"450\" sizes=\"auto, (max-width: 675px) 100vw, 675px\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-675x450.jpg\" alt=\"\" class=\"wp-image-25161\" style=\"aspect-ratio:16\/9;object-fit:cover\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-675x450.jpg 675w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-600x400.jpg 600w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-768x512.jpg 768w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg 900w\" \/><\/figure>\n\n\n<div class=\"wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">By Adam Zewe<\/p><\/div><\/div>\n\n\n<p>CAMBRDIGE, MA \u2013 In order to train more powerful large language models, researchers use vast dataset collections that blend diverse data from thousands of web sources.<\/p>\n\n\n\n<p>But as these datasets are combined and recombined into multiple collections, important information about their origins and restrictions on how they can be used are often lost 
or confounded in the shuffle.<\/p>\n\n\n\n<p>Not only does this raise legal and ethical concerns, it can also damage a model\u2019s performance. For instance, if a dataset is miscategorized, someone training a machine-learning model for a certain task may end up unwittingly using data that are not designed for that task.<\/p>\n\n\n\n<p>In addition, data from unknown sources could contain biases that cause a model to make unfair predictions when deployed.<\/p>\n\n\n\n<p>To improve data transparency, a team of multidisciplinary researchers from MIT and elsewhere launched a systematic audit of more than 1,800 text datasets on popular hosting sites. They found that more than 70 percent of these datasets omitted some licensing information, while about 50 percent had information that contained errors.<\/p>\n\n\n\n<p>Building off these insights, they developed a user-friendly tool called the Data Provenance Explorer that automatically generates easy-to-read summaries of a dataset\u2019s creators, sources, licenses, and allowable uses.<\/p>\n\n\n\n<p>\u201cThese types of tools can help regulators and practitioners make informed decisions about AI deployment, and further the responsible development of AI,\u201d says Alex \u201cSandy\u201d Pentland, an MIT professor, leader of the Human Dynamics Group in the MIT Media Lab, and co-author of a new open-access paper about the project.<\/p>\n\n\n\n<p>The Data Provenance Explorer could help AI practitioners build more effective models by enabling them to select training datasets that fit their model\u2019s intended purpose. In the long run, this could improve the accuracy of AI models in real-world situations, such as those used to evaluate loan applications or respond to customer queries.<\/p>\n\n\n\n<p>\u201cOne of the best ways to understand the capabilities and limitations of an AI model is understanding what data it was trained on. 
When you have misattribution and confusion about where data came from, you have a serious transparency issue,\u201d says Robert Mahari, a graduate student in the MIT Human Dynamics Group, a JD candidate at Harvard Law School, and co-lead author on the paper.<\/p>\n\n\n\n<p>Mahari and Pentland are joined on the paper by co-lead author Shayne Longpre, a graduate student in the Media Lab; Sara Hooker, who leads the research lab Cohere for AI; as well as others at MIT, the University of California at Irvine, the University of Lille in France, the University of Colorado at Boulder, Olin College, Carnegie Mellon University, Contextual AI, ML Commons, and Tidelift. The research is published today in Nature Machine Intelligence.<\/p>\n\n\n\n<p>Focus on fine-tuning<\/p>\n\n\n\n<p>Researchers often use a technique called fine-tuning to improve the capabilities of a large language model that will be deployed for a specific task, like question-answering. For fine-tuning, they build carefully curated datasets designed to boost a model\u2019s performance for this one task.<\/p>\n\n\n\n<p>The MIT researchers focused on these fine-tuning datasets, which are often developed by researchers, academic organizations, or companies and licensed for specific uses.<\/p>\n\n\n\n<p>When crowdsourced platforms aggregate such datasets into larger collections for practitioners to use for fine-tuning, some of that original license information is often left behind.<\/p>\n\n\n\n<p>\u201cThese licenses ought to matter, and they should be enforceable,\u201d Mahari says.<\/p>\n\n\n\n<p>For instance, if the licensing terms of a dataset are wrong or missing, someone could spend a great deal of money and time developing a model they might be forced to take down later because some training data contained private information.<\/p>\n\n\n\n<p>\u201cPeople can end up training models where they don\u2019t even understand the capabilities, concerns, or risk of those models, which ultimately stem from the 
data,\u201d Longpre adds.<\/p>\n\n\n\n<p>To begin this study, the researchers formally defined data provenance as the combination of a dataset\u2019s sourcing, creating, and licensing heritage, as well as its characteristics. From there, they developed a structured auditing procedure to trace the data provenance of more than 1,800 text dataset collections from popular online repositories.<\/p>\n\n\n\n<p>After finding that more than 70 percent of these datasets contained \u201cunspecified\u201d licenses that omitted much information, the researchers worked backward to fill in the blanks. Through their efforts, they reduced the number of datasets with \u201cunspecified\u201d licenses to around 30 percent.<\/p>\n\n\n\n<p>Their work also revealed that the correct licenses were often more restrictive than those assigned by the repositories.<\/p>\n\n\n\n<p>In addition, they found that nearly all dataset creators were concentrated in the global north, which could limit a model\u2019s capabilities if it is trained for deployment in a different region. For instance, a Turkish language dataset created predominantly by people in the U.S. and China might not contain any culturally significant aspects, Mahari explains.<\/p>\n\n\n\n<p>\u201cWe almost delude ourselves into thinking the datasets are more diverse than they actually are,\u201d he says.<\/p>\n\n\n\n<p>Interestingly, the researchers also saw a dramatic spike in restrictions placed on datasets created in 2023 and 2024, which might be driven by concerns from academics that their datasets could be used for unintended commercial purposes.<\/p>\n\n\n\n<p>A user-friendly tool<\/p>\n\n\n\n<p>To help others obtain this information without the need for a manual audit, the researchers built the Data Provenance Explorer. 
In addition to sorting and filtering datasets based on certain criteria, the tool allows users to download a data provenance card that provides a succinct, structured overview of dataset characteristics.<\/p>\n\n\n\n<p>\u201cWe are hoping this is a step, not just to understand the landscape, but also help people going forward to make more informed choices about what data they are training on,\u201d Mahari says.<\/p>\n\n\n\n<p>In the future, the researchers want to expand their analysis to investigate data provenance for multimodal data, including video and speech. They also want to study how terms of service on websites that serve as data sources are echoed in datasets.<\/p>\n\n\n\n<p>As they expand their research, they are also reaching out to regulators to discuss their findings and the unique copyright implications of fine-tuning data.<\/p>\n\n\n\n<p>\u201cWe need data provenance and transparency from the outset, when people are creating and releasing these datasets, to make it easier for others to derive these insights,\u201d Longpre says.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In order to train more powerful large language models, researchers use vast dataset collections that blend diverse data from thousands of web 
sources.<\/p>\n","protected":false},"author":2,"featured_media":25161,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,17],"tags":[],"class_list":["post-25160","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","category-research"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg",900,600,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-200x200.jpg",200,200,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-600x400.jpg",600,400,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-768x512.jpg",750,500,true],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-675x450.jpg",675,450,true],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg",900,600,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg",900,600,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg",900,600,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-870x570.jpg",870,570,true],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-600x600.jpg",600,600,true],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-600x600.jpg",600,600,true],"newspaper-x-single-post"
:["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-760x490.jpg",760,490,true],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-550x360.jpg",550,360,true],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0-95x65.jpg",95,65,true],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg",640,427,false],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2024\/08\/MIT-Dataset-Transparency-01-press_0.jpg",150,100,false]},"author_info":{"info":["By Adam Zewe"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/www.revoscience.com\/en\/category\/news\/research\/\" rel=\"category 
tag\">Research<\/a>","tag_info":"Research","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/25160","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=25160"}],"version-history":[{"count":1,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/25160\/revisions"}],"predecessor-version":[{"id":25162,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/25160\/revisions\/25162"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/25161"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=25160"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=25160"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=25160"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}