{"id":274,"date":"2022-04-06T16:53:39","date_gmt":"2022-04-06T15:53:39","guid":{"rendered":"https:\/\/blog.lboro.ac.uk\/cmc\/?p=274"},"modified":"2023-05-05T11:40:36","modified_gmt":"2023-05-05T10:40:36","slug":"the-pitfalls-of-scaling-up-educational-interventions","status":"publish","type":"post","link":"https:\/\/blog.lboro.ac.uk\/cmc\/2022\/04\/06\/the-pitfalls-of-scaling-up-educational-interventions\/","title":{"rendered":"The pitfalls of scaling up educational interventions"},"content":{"rendered":"\n<p class=\"has-text-color\" style=\"color:#535a61\"><em>Written by Jacob Strauss and edited by Dr Jayne Pickering. Jacob is a PhD student at Loughborough University.&nbsp;Please see&nbsp;<\/em><a href=\"https:\/\/www.lboro.ac.uk\/departments\/mec\/staff\/jacob-strauss\/\"><em>here&nbsp;<\/em><\/a><em>for more information about Jacob and his work.<\/em><\/p>\n\n\n\n<p>How does education research transition to practice? The usual approach is something like this:<\/p>\n\n\n\n<p>Phase 1: start with a small-scale study<\/p>\n\n\n\n<p>Phase 2: repeat phase 1 using a much larger sample<\/p>\n\n\n\n<p>Phase 3: communicate research findings to schools, policymakers, and other educational professionals.<\/p>\n\n\n\n<p>Phase 1 is riddled with problems. Many interventions fail. Sometimes the theory is not strong. Sometimes the methodological design is not sound. That may feel obvious; if we already knew the best possible ways to do everything, then we wouldn\u2019t need research at all. What is perhaps less obvious, is that much promising research also collapses at phase 2.<\/p>\n\n\n\n<p>There are many examples of interventions which failed to scale up. The Parent Academy, a programme designed to equip toddler\u2019s parents with skills to support their children\u2019s learning, initially showed outstanding promise. The Educational Endowment Fund (EEF) spent nearly a million pounds on implementing Parent Academy, but the initiative failed miserably. 
The Collaborative Reading Strategy, a programme designed to improve reading comprehension, failed to reproduce at large scale the benefits observed in its initial trials. Project CRISS, a professional development programme for teachers, showed promising results in the initial research stages that were later overturned in a larger study. The benefits of smaller classes found in the famous class-size-reduction study, Project STAR, failed to replicate in the large-scale and expensive Program Challenge and Basic Education Programme.&nbsp;<\/p>\n\n\n\n<p class=\"has-text-align-center has-medium-font-size\"><em><strong>In principle, scaling up seems like an easy, almost trivial, task. Simply take an existing intervention with proven success on a small scale and apply it to a larger scale. The reality is starkly different.<\/strong><\/em><\/p>\n\n\n\n<p>Each of the above examples illustrates some manifestation of the \u201cscaling effect\u201d. <a rel=\"noreferrer noopener\" href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\">The scaling effect is the net change in a treatment effect as a result of scaling, encompassing both positive and negative changes.<\/a> Many researchers have attempted to generate models and theoretical frameworks that encapsulate the key factors contributing to the decline in efficacy of programmes at scale. 
For this post, I have combined these models into a single summary (below), which provides an overview of how a programme\u2019s scalability is under threat at each stage of the knowledge-creation process.\u00a0<\/p>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-group has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<h2 class=\"has-text-align-center wp-block-heading\">Threats to scalability<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Innovation<\/h3>\n<\/div><\/div>\n\n\n\n<p><em><a href=\"https:\/\/www.brookings.edu\/opinions\/the-challenges-of-scaling-up-findings-from-education-research\/\" target=\"_blank\" rel=\"noreferrer noopener\">The Innovation Myth<\/a>.\u00a0<\/em>Innovations are not always useful to schools. Whether a program is innovative is irrelevant; first and foremost, it must be effective.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Sampling<\/h3>\n\n\n\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Researcher Choice \/ Bias<\/a>.\u00a0<\/em>Researchers may select a sample that benefits most from the program to boost its measured effects.<\/p>\n\n\n\n<p><em><a href=\"https:\/\/www.taylorfrancis.com\/chapters\/edit\/10.4324\/9780203726556-21\/molehill-mountain-process-scaling-educational-interventions-firsthand-experience-upscaling-theory-successful-intelligence-robert-sternberg-damian-birney-linda-jarvin-alex-kirlik-steven-stemler-elena-grigorenko\">Homogeneous\u00a0Sampling<\/a>.\u00a0<\/em>Data collection from a homogeneous sample limits the study\u2019s applicability to other groups.<\/p>\n\n\n\n<p><em><a 
href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Selection Bias<\/a>.\u00a0<\/em>Those willing to participate in research may not be representative of the wider target population.<\/p>\n\n\n\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Non-Random Attrition.\u00a0<\/a><\/em>The measures of the treatment effect will not incorporate these people.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Data collection<\/h3>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Hawthorne\u00a0Effect.\u00a0<\/a><\/em>The\u00a0alteration of behaviour by participants due to their awareness of being observed.<\/p>\n\n\n\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">John Henry\u00a0Effect<\/a>.\u00a0<\/em>The alteration in behaviour by those in a control group due their awareness of being in a control group.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Analysis\u00a0<\/h3>\n\n\n\n<p><em><a href=\"https:\/\/doi.org\/10.3102\/0013189X035003015\" target=\"_blank\" rel=\"noreferrer noopener\">Confounding<\/a>.\u00a0<\/em>Both\u00a0individual- and school-level effects on learning can have a big impact on the effectiveness of a program.<\/p>\n\n\n\n<p><a href=\"https:\/\/doi.org\/10.3102\/0013189X035003015\" target=\"_blank\" rel=\"noreferrer noopener\">Low statistical power.<\/a> Underpowered studies fail to ensure an acceptable likelihood that differences in outcomes attributable to the program will be detected when they exist.\u00a0<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Policy 
implementation<\/h3>\n\n\n\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Diseconomies of Scale<\/a><\/em>. The cost per participant might increase as a program is scaled up, making it expensive to maintain.<\/p>\n\n\n\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Overgeneralising<\/a>.\u00a0<\/em>Overgeneralising\u00a0a program\u2019s applicability to a wide variety of situations and populations will distort the program\u2019s apparent effectiveness.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Practice<\/h3>\n\n\n\n<p><em><a href=\"https:\/\/www.taylorfrancis.com\/chapters\/edit\/10.4324\/9780203726556-21\/molehill-mountain-process-scaling-educational-interventions-firsthand-experience-upscaling-theory-successful-intelligence-robert-sternberg-damian-birney-linda-jarvin-alex-kirlik-steven-stemler-elena-grigorenko\">Poor\u00a0Dissemination.\u00a0<\/a><\/em>Major\u00a0breakdowns in going to scale come from failing to disseminate findings in a way that communicates effectively with educators.\u00a0<\/p>\n\n\n\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Program\u00a0Drift.\u00a0<\/a><\/em>Individuals\u00a0implementing the program may make minor changes to it to fit their context.\u00a0<\/p>\n\n\n\n<p><em><a href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\" rel=\"noreferrer noopener\">Incorrect Delivery \/\u00a0Dosage.<\/a><\/em> The program may be incorrectly applied, delivered or dosed.<\/p>\n\n\n\n<p><em><a href=\"https:\/\/www.brookings.edu\/opinions\/the-challenges-of-scaling-up-findings-from-education-research\/\" target=\"_blank\" rel=\"noreferrer noopener\">The Learn Effect\u00a0Myth<\/a>.\u00a0<\/em>It\u00a0is not the program per se that generates effects; it is the activities students perform with 
it.\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div><\/div>\n\n\n\n<p><\/p>\n\n\n\n<p><a rel=\"noreferrer noopener\" href=\"https:\/\/ideas.repec.org\/p\/feb\/artefa\/00679.html\" target=\"_blank\">Al-Ubaydli et al. (2019)<\/a> offer advice to scholars, policymakers, and practitioners on the actions they can each take to prevent things going wrong at scale. Everyone has their part to play in the transition of research to practice. I will give a brief overview of one important issue: the\u00a0<em>representativeness of the situation<\/em>.\u00a0<\/p>\n\n\n\n<p>Representativeness can refer to the sample: an intervention may work for one demographic but not another. Representativeness may also refer to the research context. Characteristics of research, such as having a high level of control and providing participants with a high level of support, vanish as a programme is scaled up. Contextual idiosyncrasies such as the efficacy of the teacher, the classroom culture, or the in-class support from teaching assistants are often overlooked or unaccounted for when scaling up interventions.<strong>\u00a0<\/strong>A potential solution is for researchers to use technology to standardise as much as possible and to conduct educational research as naturalistically as possible by setting up ecologically valid conditions.\u00a0<\/p>\n\n\n\n<p><a href=\"http:\/\/link.springer.com\/10.1007\/978-0-387-09667-4_3\" target=\"_blank\" rel=\"noreferrer noopener\">Clarke and Dede (2009)<\/a> describe 37 contextual variables that could influence the efficacy of a technology-based intervention. 
These variables were spread across five categories: (i) student variables, such as their access to technology or absentee record; (ii) teacher variables, such as their pedagogical beliefs or their prior professional development related to technology in classrooms; (iii) technology infrastructure conditions, such as the reliability of the equipment or its location in the school; (iv) school\/class variables, such as the type of class schedule or the length of lessons; and finally, (v) administrative variables, such as the level of support from the school\u2019s administrators.<\/p>\n\n\n\n<p>Clarke and Dede developed a \u2018scalability index\u2019 identifying which variables statistically interacted with the treatment and were thus conditions for success. By identifying key features of the intervention\u2019s context, Clarke and Dede were able to give policymakers a detailed depiction of the types of schools the computer game would be suitable for and what additional requirements were needed for the programme to scale up.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<p>Scaling up educational interventions is extremely complicated, and this post barely scratches the surface of everything that could cause a scaling effect. The representativeness of the situation in which a study is conducted is one of the many ways the scaling effect could manifest, but it is often overlooked by researchers, policymakers and school leaders. Counterproductively, the context in which research is conducted is sometimes the one most conducive to positive results: research programmes are carefully monitored to ensure they are implemented properly, participants may change their normal behaviour as a result of being observed, and the organisational culture of schools and classrooms may be instrumental to the programme&#8217;s success. 
Before deciding whether to adopt an evidence-based practice, it is important to ask not only whether the sample is representative of the individuals who will be affected by these practices, but also whether the context is representative of the organisation adopting these practices. And if in doubt, contact the researchers of the original study and ask them for advice on how to implement their research programme.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Written by Jacob Strauss and edited by Dr Jayne Pickering. Jacob is a PhD student at Loughborough University.&nbsp;Please see&nbsp;here&nbsp;for more information about Jacob and his work. How does education research transition to practice? The usual approach is something like this: Phase 1: start with a small-scale study Phase 2: repeat phase 1 using a much [&hellip;]<\/p>\n","protected":false},"author":676,"featured_media":292,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"lboro_blog_alternative_thumbnail_image":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[57],"tags":[58,60,62,61,59],"class_list":["post-274","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-interventions","tag-education-interventions","tag-representativeness","tag-reproducibility","tag-research-context","tag-scalability"],"_links":{"self":[{"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/posts\/274","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/users\/676"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/comments?post=274"}],"version-history":[{"count":15,"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/posts\/2
74\/revisions"}],"predecessor-version":[{"id":342,"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/posts\/274\/revisions\/342"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/media\/292"}],"wp:attachment":[{"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/media?parent=274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/categories?post=274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.lboro.ac.uk\/cmc\/wp-json\/wp\/v2\/tags?post=274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}