Posts Tagged ‘experiment design’

HarvardX research: both foundational and immediately applicable

Wednesday, October 23rd, 2013

There is a difference between how innovation happens in research and how it happens in industry. Research tends to be more foundational and forward-thinking, while innovation in industry is more agile and looks to generate value as soon as possible. Bret Victor, one of my favorite people in interaction design, summarizes it nicely in the diagram below.

Bret Victor's differences between industry and research innovation

By the classification above, HarvardX is a unique combination of industry and research. The team I am part of (HarvardX research) works both to help shape online learning now and to contribute to foundational knowledge. The course development teams, who create course content and define course structure, sit on the same floor as us. Course developers work with the research team to continuously look for ways to improve learning and to generalize findings beyond HarvardX to online and residential learning in general. Although the process still needs to be streamlined as we scale the effort, we are making progress. One example is a project that uses assignment due dates to get a handle on student learning goals and to inform course creation.

Here is how it got started.

As we were looking at the structure of past HarvardX courses, we discovered a difference in how graded components were used across courses. Graded components are assignments, problem sets, and exams that contribute to the final grade of the course, which in turn determines whether a student earns a certificate of completion. Below is public information on when graded components occurred in three HarvardX courses.

The visualization above shows the publicly available graded-component structure for three completed HarvardX courses: PH207x (Health in Numbers), ER22x (Justice), and CB22x (The Ancient Greek Hero). Hovering over elements of the plot reveals detailed information, and clicking on a course code removes that course from the display. For PH207x, each assignment had a due date preceding the release of the next assignment (except the final exam). For the other two courses, students had the flexibility to complete their graded assignments at any time up until the end of the course.

Once the due date for a graded component passes, students can no longer access and answer it for credit. The "word on the street" among course development teams has been that generous due dates are generally desirable: they promote alternative (formative) modes of learning by letting students who are not interested in obtaining a grade access the graded components. They also give students who register late an opportunity to "catch up" by completing the assignments they "missed". However, so far it has been unclear what impact such a due date structure has on academic achievement (certificate attainment rates) versus other modes of learning (the non-certificate track, i.e., leisurely browsing).

Indeed, one of the major metrics of online courses is certificate attainment: the proportion of students who register for the course and end up earning a certificate. It turns out that PH207x had an attainment rate of over 8.5%, the highest among all open HarvardX courses completed to date (the average is around 4.5%). Does this mean that setting meaningful due dates boosts academic achievement by helping students "stay on track" rather than postpone working on the assignments until the work becomes overwhelming? While the hypothesis is plausible, it is too early to draw causal conclusions. The observation may be specific to public health courses, or PH207x may simply have had more committed students to begin with, and so on.
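
To make the metric concrete, here is a tiny sketch in Python with made-up registration and certificate counts (the real HarvardX numbers are not reproduced here):

```python
# Hypothetical counts, for illustration only.
courses = {
    # course code: (registrants, certificates earned)
    "PH207x": (61000, 5200),
    "ER22x": (80000, 3500),
    "CB22x": (43000, 1900),
}

for code, (registered, certified) in courses.items():
    attainment = certified / registered  # certificate attainment rate
    print(f"{code}: {attainment:.1%} attainment")
```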

While the effect on certificate attainment is certainly important, an equally important question is what impact due dates have on alternative modes of learning. That is why, in close collaboration with course development teams, we are planning an A/B test (a randomized controlled experiment) to study the effect of due dates. Sitting on the same floor and being immersed in the same everyday context of HarvardX allows for agile planning, so we are hoping to launch the experiment as early as November 15, or even October 31. The findings have the potential to immediately inform course development for new courses as well as future iterations of current ones, with the aim of improving educational outcomes for learners around the world and on campus.
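
For the curious, here is a minimal sketch of what randomized assignment to due-date conditions could look like. The condition names, student IDs, and hashing scheme are illustrative assumptions, not the actual HarvardX/edX experiment infrastructure:

```python
import hashlib

# Hypothetical due-date conditions for the experiment.
CONDITIONS = ["per_unit_due_dates", "end_of_course_due_date"]

def assign_condition(student_id: str, experiment_name: str = "due_dates_v1") -> str:
    """Deterministically assign a student to one of the conditions.

    Hashing the (experiment, student) pair gives a stable 50/50 split,
    so a student sees the same course structure on every visit.
    """
    digest = hashlib.sha256(f"{experiment_name}:{student_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(CONDITIONS)
    return CONDITIONS[bucket]

# Example: assign a few hypothetical students.
for sid in ["student_001", "student_002", "student_003"]:
    print(sid, "->", assign_condition(sid))
```

Outcomes such as certificate attainment and assignment activity would then be compared between the two arms.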

HarvardX is a great example of a place where research is not only foundational but also immediately applicable. While the combination is certainly stimulating, I wonder to what extent this paradigm translates to other fields, and what benefits and risks it carries. With these questions in mind, I cannot wait to see what results our experimentation will bring and how we can use data to improve online learning.

Adaptive and social media in MOOCs: the data-driven and the people-driven

Thursday, May 23rd, 2013

In light of my new position as a HarvardX Research Fellow, I have been thinking about the role of data in improving online learning experiences (aka MOOCs) at edX. Can data tell us everything about the ideal learning experience of tomorrow? Can product developers at edX come up with the best version single-handedly? Or could the online students themselves tell us what the ideal MOOC is?

First, let's think about what the "ideal MOOC" could be. There is broad consensus that an ideal online learning experience would yield the best "educational outcomes" for the students. For now, let's treat the educational outcome as something that is well approximated by the amount of learning. Specifically, this means that we want students to extract and internalize as much educational content from the interactive learning experience as possible. Finally, educational content is information relevant to the substance of the class. For a probability course, this would include how to use Bayes' rule or a change of variables. For a Python programming class, it would include how to use Python modules and the language syntax. For a class on interactive visualization, it could include (of course!) how to use d3js.

This is an important point. Educational content is information relevant to the substance of the class. We want students to internalize as much of it as possible, to make it their own knowledge. How can we do that?

Let's assume that the educational materials (lectures, homework, tests, examples) have already been prepared and we believe that they are good. How do we expose the materials to the students in the best possible way so that students learn the most, stay engaged, and more students complete the class?

Clearly, the setting of a MOOC is different from that of a standard classroom. One of the significant differences is the number of students: it's massive. Depending on the course, enrollment can exceed 150,000 students; CS50x by David Malan on HarvardX is a great example. Do we want to expose every single student, no matter what country they are from, no matter what talents and aspirations they have, no matter how many peers they will study with, to the same sequence of material? Maybe yes. And maybe no.

The setting of MOOCs can be a wonderful platform for adaptive media - an algorithmic way of sequentially presenting content and interacting with the user in order to maximize the informational content that the user "internalizes".

Adaptive media. It is the defining trait of the computer as a medium: the ability to simulate responses, interact, predict, "act like a living being". We can use it to model, predict, and synthesize the best way to serve content to users, algorithmically.

Adaptive media is used actively across the Web in conjunction with social media. Often, the inputs of adaptive media are the outputs of social media (and the cycle repeats). When you share an article on Facebook, the system learns about your preferences and makes sure that the next content you see is more relevant to your interests. Much of the time, that custom-tailored content means advertisements. The same goes for LinkedIn: ever noticed the "Ads you may be interested in" section on the right of your LinkedIn profile?

Can we use adaptive media in MOOCs? The benefits are obvious: with hundreds of thousands of enrollees, it is impossible to adequately staff a course with enough qualified facilitators. Adaptive media could be used together with teachers' input and social media such as forums, social grading, and study groups. The purpose, instead of displaying personalized ads, would be to make sure each student learns as much as possible from the interactive learning experience, in his or her unique way. There could also be a multitude of positive side effects: reduced dropout rates, higher engagement, and higher enrollment for adaptive MOOCs.
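
To make "adaptive media" concrete, here is a minimal sketch of one simple approach: an epsilon-greedy bandit that chooses among alternative presentations of the same concept, favoring variants that have led to correct follow-up answers. The variant names and the reward signal are hypothetical; this illustrates the idea, not edX's implementation:

```python
import random
from collections import defaultdict

class EpsilonGreedySequencer:
    """Pick which presentation of a concept to show next, favoring variants
    that have historically led to correct follow-up answers."""

    def __init__(self, variants, epsilon=0.1):
        self.variants = list(variants)
        self.epsilon = epsilon
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the best variant so far.
        if random.random() < self.epsilon:
            return random.choice(self.variants)
        return max(self.variants,
                   key=lambda v: self.successes[v] / self.trials[v] if self.trials[v] else 0.0)

    def record(self, variant, answered_correctly):
        self.trials[variant] += 1
        if answered_correctly:
            self.successes[variant] += 1

# Hypothetical variants of the same lesson segment.
sequencer = EpsilonGreedySequencer(["video_first", "worked_example_first", "quiz_first"])
variant = sequencer.choose()
# ... show the chosen variant, observe whether the follow-up question was answered correctly ...
sequencer.record(variant, answered_correctly=True)
```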

Isn't this interesting?

The gap between academia and current industry practices in data analysis

Sunday, March 25th, 2012

The demand for specialists who can extract meaningful insights from data is increasing, which is good for statisticians, since statistics is, among other things, the science of extracting signal from data. This is discussed in articles such as this January article in Forbes, and also in the McKinsey Global Institute report published in May of last year, an excerpt from which is given below:

There will be a shortage of talent necessary for organizations to take advantage of big data. By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.

This sounds encouraging for current students in fields such as statistics who are looking to enter a hot job market after graduation. However, are they prepared for what industry jobs will need them to do?

One thing that hasn't been covered in the press yet is that the methods and data-related problems in industry are often different from those described in the body of scientific publications. By different I mean either scientifically invalid, or scientifically valid and cutting edge.

An example of this phenomenon is the so-called Technical Analysis of financial data, which is often used by algorithmic trading groups to devise computer-based trading strategies. Technical analysis is a term coined to describe a set of methods that are often useful, yet whose validity is questionable from a scientific perspective. Quantitative traders have been employing this type of analysis for a long time without knowing whether it is valid.
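
For concreteness, a classic technical-analysis rule is the moving-average crossover: go long when a short-run average of the price rises above a long-run average. The sketch below uses simulated prices purely to show the mechanics; whether such a rule carries any real signal is exactly the question that remains scientifically open:

```python
import numpy as np
import pandas as pd

# Simulated daily closing prices, for illustration only.
rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.normal(0, 1, 250).cumsum())

short_ma = prices.rolling(window=10).mean()
long_ma = prices.rolling(window=50).mean()

# "Signal": long (1) when the short average is above the long average, else flat (0).
signal = (short_ma > long_ma).astype(int)
print(signal.value_counts())
```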

Another example is a project I worked on: creating an algorithm for optimizing the annual marketing campaigns of a large consumer packaged goods company (over $6 billion in sales) to achieve a 3-5% revenue increase without increasing expenditure, described in this post. Essentially, this was an exercise in Response Surface methods with dimensionality as high as 327,600,000. There are no scientific papers in the field that consider problems of such high dimensionality. And yet companies are interested in such projects, even though the methods for solving them are not scientifically verified (we worked hard to justify the validity of our approach for this project).
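
In its simplest textbook form, a response surface method fits a low-order polynomial to the response (here, revenue) as a function of the controllable inputs (here, campaign spend levels). Below is a toy two-variable sketch with simulated data; the actual project involved vastly higher dimensionality and a carefully justified approach that this sketch does not attempt to reproduce:

```python
import numpy as np

# Simulated data: revenue as a noisy quadratic function of two campaign spend levels.
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
revenue = 50 + 4*x1 + 3*x2 - 0.3*x1**2 - 0.2*x2**2 + 0.1*x1*x2 + rng.normal(0, 1, 200)

# Second-order response surface: intercept, linear, quadratic, and interaction terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print("fitted response surface coefficients:", np.round(coef, 2))
```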

Recently I received an email inviting quantitatively oriented PhDs to apply for a summer fellowship in California to work on data science projects. Here is a quote from the email:

The Insight Data Science Fellows Program is a new post-doctoral training fellowship designed to bridge the gap between academia and a career in data science.

Further, here is what is stated on the website of the organization sponsoring the program:

INSIGHT DATA SCIENCE
FELLOWS PROGRAM
Bridging the gap between academia and data science.

As with algorithmic trading about 15 years ago, the use of sometimes scientifically questionable data analysis techniques is driven by the increased demand for insights from quantitative information. Such approaches, which in the world of quantitative finance are called Technical Analysis, are in the current data boom being called Data Science.

When using the term, one should be careful: while the methods employed by inadequately trained "data scientists" may be scientifically valid, they may well not be. There is an inherent danger in calling something that encompasses incorrect methods a kind of "science", as this instills the perception of a field that is well established and trustworthy. However, the term is only a couple of years old. In my opinion, a more accurate one would be "current data analysis practices employed in industry".

The way we name the phenomenon does not change what it is: there is a lot of data, and many problems in industry go beyond what has been seen or addressed in academia. This is an exciting time for statisticians.

Optimization, experiment design, and Sir David Cox

Wednesday, August 3rd, 2011

It has been almost a year since I became involved in a global marketing mix optimization project for a large consumer packaged goods company. Conceptually, the problem is simple: given a fitted model of the company's revenue as a function of the promotion campaigns for its products, and using the past year's campaign allocation scenario as a starting point, find a revenue-maximizing scenario subject to promotion expenditure constraints.
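
As a minimal sketch of the formulation (not the production solution), suppose the fitted revenue model were a simple concave function of per-campaign spend and the only constraint were that total spend cannot exceed last year's budget. A generic solver can then search for a better allocation starting from last year's scenario; all numbers and the revenue function below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted revenue model: diminishing returns per campaign.
def revenue(spend):
    spend = np.clip(spend, 0.0, None)  # guard against tiny negative probes
    return np.sum(np.array([5.0, 3.0, 4.0]) * np.sqrt(spend))

last_year = np.array([10.0, 10.0, 10.0])   # last year's allocation (starting point)
budget = last_year.sum()                    # total expenditure must not increase

result = minimize(
    lambda s: -revenue(s),                  # maximize revenue = minimize its negative
    x0=last_year,
    bounds=[(0, None)] * 3,
    constraints=[{"type": "ineq", "fun": lambda s: budget - s.sum()}],
)
print("optimized allocation:", np.round(result.x, 2))
print("revenue change: %.1f%%" % (100 * (revenue(result.x) / revenue(last_year) - 1)))
```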

Figure 1: A visualization of a step in a solution of an optimization problem. To see the full dynamic visualization, go to theory.info.

The problem becomes more interesting when we go into details. (more…)