Measuring the impact of edtech solutions


July 24, 2019

edtech news

The impact of educational technology is a key question for both start-ups and traditional educational publishers. Teachers need to know what a solution teaches, why it is effective, and how it can be used. Traditional educational publishers have the advantage of a long history of classroom use and therefore plenty of practical experience of what works. Gathering real evidence, however, is a project that takes time and requires expertise. But when it's done properly, the work pays off. Here I'd like to present some examples of how it can be done.

Pearson is one of the leading educational publishers in the world. They regularly conduct research on the efficacy of their products and publish the results in their efficacy reports.

The studies are designed to be highly rigorous: the experiments include a large number of students, who are followed for a relatively long period of time. For example, for their product MyPedia in India, they observed 557 lessons in 74 schools. Learning outcomes were measured with standardized tests and compared to national averages.
 
Compared to Pearson, Imagine Learning is a newcomer in educational publishing. Founded in 2004, the company now employs several hundred people and offers digital math and language learning solutions. They also build trust and credibility through academic studies of their products, conducted either in-house or through a research partner.

To communicate the validity of their research, Imagine Learning refers to the Every Student Succeeds Act (ESSA), under which the U.S. Department of Education has issued non-regulatory guidance providing a framework for what counts as relevant and strong research evidence of educational impact.

ESSA defines four tiers of evidence, from Strong (Tier 1) to Demonstrates a rationale (Tier 4).

ESSA levels of evidence presented as a graph by EAF

An important point to note is that the tiers are not mutually exclusive; rather, they serve different purposes. When designing an entirely new product, or when adding new features to an existing one, the ESSA tiers can be followed step by step to validate the efficacy of the product or feature in a way that is most helpful for development.

Tier 4: When designing new features, make sure they align with learning science principles. If you were building an airplane, you would first make sure its design aligns with the laws of physics, increasing the odds it will stay in the air. Once you have a rationale for why your solution should enhance learning, it's time to move on to testing with real users.

Tier 3: The goal of the research is to identify the specific groups of users your solution works for: certain age groups, students with specific backgrounds, and so on. When you focus on specific groups, you are more likely to reach meaningful findings in a reasonable amount of time.

Tier 2: Testing with larger groups and then comparing the results carries more credibility in proving efficacy. However, a valid study requires a lot of time and is best done after you've used the Tier 4 and Tier 3 approaches to refine the product's design. You don't want to spend 12 months gathering data only to find that, for example, your lesson plan activities are not challenging enough for the target age and don't align with the curriculum.

Tier 1: This approach is otherwise the same as Tier 2, but it uses randomized control groups to find out what difference the use of your solution makes, and the results are expected to be published. The method is widely used in the medical sciences, but in education it is more challenging: it is not possible to replicate the same learning experience while keeping all parameters the same. The students' backgrounds and motivation, and the way the teacher guides the students and uses your solution, are hard to control. Minimizing the effect of these changing parameters takes a lot of work, and the study has no validity if it's not done correctly. So make sure you've done the previous steps well before you move on to this stage.
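To make the Tier 1 and Tier 2 comparisons concrete, here is a minimal sketch of how the difference between a treatment group (students using the solution) and a control group is often quantified as a standardized effect size (Cohen's d). The scores and group sizes below are invented purely for illustration; real efficacy studies use far larger samples and more careful statistical analysis.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference between two groups (Cohen's d),
    using the pooled sample standard deviation."""
    mean_t = statistics.mean(treatment)
    mean_c = statistics.mean(control)
    var_t = statistics.variance(treatment)  # sample variance (n - 1)
    var_c = statistics.variance(control)
    n_t, n_c = len(treatment), len(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c)
                 / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Hypothetical post-test scores (0-100) for students who used the
# solution vs. a randomized control group taught without it.
treatment_scores = [72, 68, 80, 75, 77, 70, 83, 74]
control_scores = [65, 70, 66, 72, 60, 68, 71, 64]

print(round(cohens_d(treatment_scores, control_scores), 2))
```

A positive d means the treatment group outperformed the control group on average; conventions such as d ≈ 0.2 (small) to 0.8 (large) are often used as rough benchmarks when reporting educational interventions.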

A similar viewpoint on educational impact is presented by Jennifer Carolan and Molly B. Zielezinski in their EdSurge article on the efficacy portfolio. They see efficacy as something to be considered from the beginning of development and to be put to the test as early as possible with suitable means. Research can be a powerful tool both for developing and for marketing your solution.