Why We Still Need Humans Dept.
Published in TC Today - Volume 36, No. 2
States were collecting a mountain of school data that no one looked at. Priscilla Wohlstetter is using it to learn what makes charter schools tick
Over the past decade, the federal government has required states to develop vast, sophisticated computer systems for tracking school and student performance, part of the general push for greater accountability in education. The result, attests Priscilla Wohlstetter (TC’s Visiting Tisch Professor in 2010-11 and now a Visiting Professor), has been either a gold mine of data or a national case of “too much information”—and sometimes both.
Several years ago, Wohlstetter—the Diane and MacDonald Becket Professor in Educational Policy at the University of Southern California’s Rossier School of Education, and Director of USC’s Center on Educational Governance—was approached by a group of California charter school leaders with a problem. Under state law, they were required to submit reams of “compliance data” on their schools’ finances, demographics and academic results, information that, in theory, could explain why some schools’ students might be performing poorly on standardized tests. But as representatives of relatively new schools, where cultures of success were not yet fully entrenched, these school leaders knew that improvements in test scores take several years. Could Wohlstetter help them develop additional indicators of performance and growth?
As Wohlstetter and her team began sifting through the state compliance data, she realized that no one had ever looked at most of it, let alone tried to transform it into usable information. She also realized that she now had a new window onto a central question in her research: What makes good charter schools good?
“People say charters are preparing students for the 21st century, but we’re here now—it is the 21st century—and these schools are still black boxes to a large extent,” she says. “We know that many of them employ project-based learning, have smaller classes and use a lot of technology, but we don’t really know how that plays out with respect to performance. So we began looking at all these data points—the finest grains of sand, really—to see how we could turn them into meaningful indicators of performance.”
The result was the 2006 launch of the School Performance Dashboard, a comprehensive annual performance assessment of each of California’s more than 900 charter schools. Assessments are based on 12 indicators, ranging from standard measures such as Adequate Yearly Progress to metrics of the Center’s own devising, such as School Productivity and Academic Momentum. The Dashboard, which last year included a first-ever ranking of California’s top 10 charter schools, has come to be regarded as the most comprehensive and substantive effort of its kind. Indeed, one of the top 10 charters used its ranking to build a case for a high bond rating. The school eventually received the bond and used the money to construct a new building.
The Dashboard now also includes an interactive website that serves a range of purposes: comparing individual schools over time, checking on the school one’s child attends, looking for foundations that invest in charter schools, finding a job at a charter school, and providing evidence for charter school authorizers who are weighing whether to renew or revoke a school’s charter. Meanwhile, Wohlstetter is working on expanding the Dashboard to other states.
“The key is to work with stakeholders in each state in developing different indicators, because each state values different things,” she says. “In California, the performance of English Language Learners is very important. In Louisiana it is not; they are very focused on special education and college readiness.”
The Dashboard also takes pains to distinguish between value-laden indicators—clear-cut “more is better” factors, such as the number of highly qualified teachers in a school—and neutral information that’s of interest to stakeholders but depends on other factors for a positive or negative impact.
For example, Wohlstetter says, “We’ll report on how many computers a school has, but we won’t attach a value to it, because as many principals have pointed out to us, you can have a whole bunch of computers sitting in the back of a room gathering dust, and you can have a classroom with no computers where the kids are very productively engaged in learning.”