Illustration: Joyce Hesselberth

For the past decade, online “smart” systems have helped teach concepts such as fractions or the laws of physics. The systems analyze students’ work and assign problems that target their weaknesses.

As in quantitative trading and predictive marketing, the proliferation of data from online and mobile systems, coupled with increased processing power and storage capacity, has fueled the measurement, collection, analysis and reporting of data about learners and their contexts, all in the service of understanding and optimizing learning and the environments in which it occurs. But leaders like TC’s Charles Lang fear that “atheoretical” algorithms, disconnected from any hypothesis about student behavior, can detect patterns in that data without being able to explain them or appropriately guide a system’s prompts.

Lang — Visiting Assistant Professor and co-coordinator of TC’s Learning Analytics program — believes three models should guide smart system development:

A narrative model: “an elevator speech to say, ‘Here’s what this system does.’”

An operational model that tells the system what to count: for example, Google’s Flu Tracker system predicted major flu outbreaks by monitoring people’s online searches for flu information and purchases of flu-related products.

A continuous validation model that keeps checking predictions against reality: Flu Tracker ultimately predicted a vast outbreak that never materialized. Lang’s guess: inquiries and purchases spiked when people heard about bird flu or a similar illness, not because more people were actually getting sick. That is exactly the kind of breakdown a validation check, like the sketch below, is meant to catch.
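To make the third idea concrete, here is a minimal sketch of continuous validation: predictions generated from whatever the operational model counts are compared against later ground truth, and the system flags itself when the two diverge. The numbers, names and tolerance are hypothetical illustrations, not Lang’s or Google’s actual code.

```python
# Minimal sketch of continuous validation (hypothetical data and names):
# each period, compare the operational model's predictions with later
# ground truth and flag drift when they diverge too far.

def validate(predicted: list[float], observed: list[float], tolerance: float = 0.25) -> bool:
    """Return True if the average relative error stays within tolerance."""
    errors = [abs(p - o) / max(o, 1e-9) for p, o in zip(predicted, observed)]
    return sum(errors) / len(errors) <= tolerance

# Predicted flu cases from search/purchase counts vs. later reported cases.
predicted_cases = [120, 150, 400, 900]   # model expects a major outbreak
observed_cases  = [110, 140, 160, 170]   # reality: searches spiked on news coverage, cases did not

if not validate(predicted_cases, observed_cases):
    print("Validation failed: revisit what the operational model is counting.")
```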

Lang himself has created Snowflake, an algorithm that decodes each student’s unique problem-solving logic based on detailed information about his or her learning over time. Now he wants to empower teachers to critique the choices smart systems make. His strategy: to create a “domain-specific” language that allows teachers to “look under the hood” and assess a system’s conclusions. “TC should teach teachers how to decide whether the software is really doing a better job than using a piece of paper,” Lang says.
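What might “looking under the hood” involve? One possibility is a teacher-facing query that reports which pieces of evidence pushed a system toward a recommendation, so the teacher can judge whether the reasoning holds up. The sketch below is a loose illustration of that idea; the names, weights and scoring are invented assumptions, not Snowflake or Lang’s actual language.

```python
# Hypothetical teacher-facing query, not Lang's actual domain-specific language:
# expose the evidence behind a recommendation so a teacher can second-guess it.

from dataclasses import dataclass

@dataclass
class Evidence:
    skill: str        # e.g., "equivalent fractions"
    observation: str  # what the system counted
    weight: float     # how strongly it influenced the recommendation

def explain(recommendation: str, evidence: list[Evidence]) -> None:
    """Print a recommendation and the evidence behind it, strongest first."""
    print(f"Recommended: {recommendation}")
    for e in sorted(evidence, key=lambda x: -x.weight):
        print(f"  {e.weight:+.2f}  {e.skill}: {e.observation}")

explain(
    "Assign more practice on equivalent fractions",
    [
        Evidence("equivalent fractions", "3 of 4 recent items missed", 0.62),
        Evidence("common denominators", "slow but correct responses", 0.21),
    ],
)
```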

Sounds like a smart system indeed.