# Better how-to videos

*System recruits learners to annotate videos, increasing their educational value.*

February 12, 2015 · Innovation, Research · Posted by Amrita Tuladhar
https://www.revoscience.com/en/better-how-to-videos/

![A new system for crowdsourced video annotation could increase the educational value of instructional videos. Image: Jose-Luis Olivares/MIT (screenshots courtesy of the researchers)](https://www.revoscience.com/en/wp-content/uploads/2015/02/MIT-Instruction-Annotation-01.jpg)

CAMBRIDGE, Mass. — Educational researchers have long held that presenting students with clear outlines of the material covered in lectures improves their retention.

Recent studies indicate that the same is true of online how-to videos, and in a paper being presented at the Association for Computing Machinery's Conference on Computer-Supported Cooperative Work and Social Computing in March, researchers at MIT and Harvard University describe a new system that recruits viewers to create high-level conceptual outlines.

Blind reviews by experts in the topics covered by the videos indicated that the outlines produced by the new system were as good as, or better than, those produced by other experts.

The outlines also serve as navigation tools, so viewers already familiar with some of a video's content can skip ahead, while others can backtrack to review content they missed the first time around.

"That addresses one of the fundamental problems with videos," says Juho Kim, an MIT graduate student in electrical engineering and computer science and one of the paper's co-authors. "It's really hard to find the exact spots that you want to watch. You end up scrubbing on the timeline carefully and looking at thumbnails. And with educational videos, especially, it's really hard, because it's not that visually dynamic. So we thought that having this semantic information about the video really helps."

Kim is a member of the User Interface Design Group at MIT's Computer Science and Artificial Intelligence Laboratory, which is led by Rob Miller, a professor of computer science and engineering and another of the paper's co-authors. A major topic of research in Miller's group is the clever design of computer interfaces to harness the power of crowdsourcing, or distributing simple but time-consuming tasks among large numbers of paid or unpaid online volunteers.

Joining Kim and Miller on the paper are first author Sarah Weir, an undergraduate who worked on the project through the MIT Undergraduate Research Opportunities Program, and Krzysztof Gajos, an associate professor of computer science at Harvard University.

**High-concept video**

Several studies in the past five years, particularly those by Richard Catrambone, a psychologist at Georgia Tech, have demonstrated that accompanying how-to videos with step-by-step instructions improves learners' mastery of the concepts presented. But before beginning work on their crowdsourced video annotation systems, the MIT and Harvard researchers conducted their own user study.

They hand-annotated several video tutorials on the use of the graphics program Photoshop and presented the videos, either with or without the annotations, to study subjects. The subjects were then assigned a task that drew on their new skills, and the results were evaluated by Photoshop experts. The work of the subjects who'd watched the annotated videos scored higher with the experts, and the subjects themselves reported greater confidence in their abilities and satisfaction with the tutorials.

Last year, at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, the researchers presented a system for distributing the video-annotation task among paid workers recruited through Amazon's Mechanical Turk crowdsourcing service. Their clever allocation and proofreading scheme got the cost of high-quality video annotation down to $1 a minute.

That system produced low-level step-by-step instructions. But work by Catrambone and others had indicated that learners profited more from outlines that featured something called "subgoal labeling."

"Subgoal labeling is an educational theory that says that people think in terms of hierarchical solution structures," Kim explains. "Say there are 20 different steps to make a cake, such as adding sugar, salt, baking soda, egg, butter, and things like that. This could be just a random series of steps, if you're a novice. But what if the instruction instead said, 'First, deal with all the dry ingredients,' and then it talked about the specific steps. Then it moved on to the wet ingredients and talked about eggs and butter and milk. That way, your mental model of the solution is much better organized."

**Division of labor**

The system reported in the new paper, dubbed "Crowdy," produces subgoal labels — and does so essentially for free. Each of a video's first viewers will find it randomly paused at some point, whereupon the viewer will be asked to characterize the previous minute of instruction. After enough candidate descriptions have been amassed, each subsequent viewer will, at one of the same points, be offered three alternative characterizations of the preceding minute. Once a consensus emerges, Crowdy identifies successive minutes of video with similar characterizations and merges their labels. Finally, another group of viewers is asked whether the resulting labels are accurate and, if not, to provide alternatives.

The researchers tested Crowdy with a group of 15 videos about three common Web programming languages, which were culled from YouTube. The videos were posted on the Crowdy website for a month, during which they attracted about 1,000 viewers. Roughly one-fifth of those viewers participated in the experiment, producing an average of eight subgoal labels per video.

In ongoing work, the researchers are expanding the range of topics covered by the videos on the Crowdy website. They're also investigating whether occasionally pausing the videos and asking viewers to reflect on recently presented content actually improves retention. There's some evidence in the educational literature that it should, and if it does, it could provide a strong incentive for viewers to contribute to the annotation process.
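The aggregation step Crowdy performs — picking the winning viewer label for each minute, then merging successive minutes whose labels agree into subgoal spans — can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function names (`winning_label`, `merge_adjacent`), the simple plurality vote, and the exact-match similarity test are all hypothetical stand-ins for whatever Crowdy actually uses.

```python
from collections import Counter

def winning_label(votes):
    # Hypothetical consensus rule: the candidate label with the most
    # viewer votes for one minute of video wins (simple plurality).
    return Counter(votes).most_common(1)[0][0]

def merge_adjacent(minute_labels):
    # Merge successive minutes whose winning labels agree into
    # (start_minute, end_minute, label) subgoal spans.
    spans = []
    for minute, label in enumerate(minute_labels):
        if spans and spans[-1][2] == label:
            start, _, _ = spans[-1]
            spans[-1] = (start, minute, label)  # extend the open span
        else:
            spans.append((minute, minute, label))
    return spans

# Hypothetical vote tallies for four successive minutes of a tutorial.
votes_per_minute = [
    ['set up canvas', 'set up canvas', 'open file'],
    ['set up canvas', 'set up canvas'],
    ['apply filter', 'apply filter', 'set up canvas'],
    ['apply filter', 'apply filter'],
]
labels = [winning_label(v) for v in votes_per_minute]
print(merge_adjacent(labels))
# [(0, 1, 'set up canvas'), (2, 3, 'apply filter')]
```

A real system would need a fuzzier similarity test than exact string equality (viewers phrase the same subgoal differently), which is presumably why Crowdy has later viewers vote among a fixed set of three candidate characterizations rather than comparing free text directly.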