How do we know our product works?

BrainQuake
7 min read · Apr 25, 2020


Students engage with Wuzzit Trouble in math class. Photo by BrainQuake.

BrainQuake was founded by lifelong educators who shared a common vision: that digital games can provide powerful mathematics learning not readily achieved with any other educational medium. The goal was never to replace existing instruction, but to supplement it, adding to what was already available. But how do we know that our games do what we want them to?

The answer is, in a single word: research. Not only are all our designs based on years of university research into mathematics learning, we also subject the product to a variety of studies at every stage of development. So we know that what we produce should work. But, as with the pharmaceutical industry, “should work” is not good enough. Before we make any claims about our games, we subject them to independent research conducted to high scientific standards.

Our gold standard is university-run studies in school classrooms, not funded by us, with the results published in highly regarded, peer-reviewed scholarly journals. Unfortunately, universities, by and large, do not conduct studies of commercial products. Their focus is scientific research for its own sake.

But when a product embodies or exhibits features that interest researchers, they will use it in their studies. That’s what happened when we first released our launch app, Wuzzit Trouble, a few years ago. Two doctoral researchers in the Stanford University Graduate School of Education decided to use the game to study the degree to which a digital game could lead to good learning outcomes. [See the footnotes for references.]

The chart shown below summarizes the results. The two middle-school classes in the study were at the same grade level and taught by the same teacher, an essential requirement for a reliable study of a learning product used as an intervention. The game was given to the students in the intervention group to play during the last ten minutes of math class, three times a week, for one month. The comparison group had lessons as usual.

Results from the 2015 Stanford University study of Wuzzit Trouble showing the learning gains, as measured by a written math test, after the game was played for a total of two hours spread over four weeks. N = 59. Image by Dr. Holly Pope.

I admit, I was initially surprised to see such a dramatic increase in performance on a written test against the comparison group that did not play the game. To my mind, playing the game for a total of 120 minutes spread over four weeks was far too short a time to produce such a result. I expected it would require at least two months to see anything of significance, possibly longer.

Suspecting that the Stanford result might be a statistical outlier of no real significance, I contacted a university researcher in Finland (a country famous for both world-class mathematics education and the production of excellent video games) to ask him to run a similar study. He agreed, subject to including two additional study items to advance his own research. In addition to using the Stanford pre- and post-test, he used a math game he had developed (dealing with fractions) as a second pre- and post-test, and also asked each student in the study to complete a questionnaire looking into engagement and flow in a math learning game.

When the results came in, they were almost identical to the ones from the Stanford study. See chart below.

Results from the 2015 Finnish study using Wuzzit Trouble as an intervention (duration 2 months, 40 min game play each week; N=30 Finland; N=25 USA). Image by Dr. Kristian Kiili.

Though both studies were small (each one comprising two classes taught by the same teacher, one used for the intervention, the other as a comparison group), the fact that they produced such similar outcomes gave me confidence that our launch product did what we had hoped for.

Again, it was not the strong learning gains that surprised me, but rather the short time frame of the intervention. After several lengthy discussions with colleagues around the world, I have since come to understand, at least in part, what is going on to produce such dramatic outcomes, and I will address that in future posts to this blog.

It was after those two study results came in that we (BrainQuake) felt comfortable making claims about learning gains on our website and in presentations Randy and I made about Wuzzit Trouble.

What was of particular interest to me as a researcher in math education was the nature of the learning gains.

The image below shows the written post-test the students completed after the two university studies. Questions 1, 2, 3, and 5 are familiar looking math-test questions focusing on the kind of mathematics embedded in Wuzzit Trouble. The students in the intervention class improved slightly overall on those questions against the students in the comparison class, but not enough to have any real significance.

The written test used in the 2015 Stanford study of Wuzzit Trouble. Test created by Prof. Jo Boaler.

It was in the responses to the very different question 4 that the students who had played Wuzzit Trouble (for 120 minutes spread over four weeks) opened up a significant gap over the students in the comparison class. The mean improvement in score on several key measures in the Boaler-Pope study was 16.4%, with a similar improvement in the Finnish study. (The peer-reviewed, published papers give the details. See the footnotes.)

The significant factor here is the nature of question 4. It is very much a non-cookie-cutter question that the students were unlikely to have seen before, where it was not possible to identify a standard technique to solve it. To find a solution, you had to think about the problem in a creative way. That, in a nutshell, is what the game-playing class learned to do.

Question 4 was not about absorbing new factual knowledge, or mastering a new formula or technique. Rather, it came down to knowing how to make sense of and think creatively about a novel problem. Which is exactly what we at BrainQuake had designed Wuzzit Trouble to do. Presented with something they had not seen before and had not had an opportunity to practice, the students in the comparison class simply did not know how to proceed. In contrast, after just two hours total of Wuzzit Trouble play, the intervention group did just fine with it.

The two results made us feel good, and gave us confidence going forward to expand our original app to our new, far more expansive product. If you are a parent or teacher thinking of getting the BrainQuake app for your children or students, I would imagine it will make you feel confident going forward with us. (As I have mentioned in previous posts, the Gears puzzle of Wuzzit Trouble is one of three different puzzles in the new BrainQuake app.)

Those two university studies are now in our past. So too are some other research projects you will find on our website research page. But there are a number of new research projects in progress and in planning, and I will report on them in this blog as and when results come in.

At BrainQuake, we believe that, as is required by law for pharmaceuticals, an educational product should be backed up by solid research carried out by reputable scientists. We take drugs to change the way our body behaves, so we need to know the drug will do what the manufacturer claims. While the analogy is not exact, an educational product is intended to change the way our minds (or our students’/children’s minds) behave, so shouldn’t we require evidence that it too does what it claims? We think the answer is “Yes!”

– Keith

FOOTNOTES:
1. Both research publications cited are available on the BrainQuake website in the Research section, together with a summary (in presentation form) of the subsequent doctoral study carried out by the lead Stanford researcher (Holly Pope), plus an op-ed I co-wrote with two Finnish academics where we discuss why suitably designed digital learning games can have significant impact after a relatively short period of time. You will also find there links to other studies using BrainQuake products conducted by various universities and independent, non-profit, research organizations.
2. Stanford study disclaimer: I was a full-time scientist at Stanford at the time of the study, I knew the two researchers’ doctoral advisor, and she was (and is) an unpaid, volunteer advisor to BrainQuake, having no financial stake in the company. One of the two students eventually wrote her entire Ph.D. dissertation using Wuzzit Trouble as a tool to investigate the educational potential of digital games for disadvantaged or minority students, in particular, and long after the study was published, I was asked to serve on her doctoral evaluation committee. By then, the original study had been published in a peer-reviewed scientific journal. As per standard university rules, everyone involved was subject to instant dismissal from the university (and an end to their career) if strict scientific protocols were not followed or any results were falsified.
3. Finnish study disclaimer: I helped the researcher make contact with a California school to run the American part of the study and familiarize him with the thinking behind our game and with the Stanford study protocol, and assisted with study logistics, but took no part in the data collection and analysis. (My involvement was subject to the same Stanford restrictions as mentioned above.) I am listed as a co-author of the final peer-reviewed research publication.
