Identifying Outperforming School Districts in Ohio and Texas | Teachers College Columbia University


Identifying “Outperforming” School Districts in Ohio and Texas Over the Long-Term – A Complement to Current State Accountability Systems

Since the authorization of the No Child Left Behind Act in 2002, a growing number of states have assigned letter grades such as A, B, C, and D to evaluate schools and school districts based on students’ standardized test performance. These letter grades not only inform multiple stakeholders about how school districts perform within a state in a given year but also allow researchers and practitioners to identify “outperforming” school districts for further in-depth qualitative study aimed at generalizing successful leadership practices. However, this method has long been critiqued for not accounting for the effects of a school district’s context or demographics on students’ achievement test scores. Moreover, evaluating school district effectiveness on a single year of performance has been criticized as “snapshot” research that ignores a district’s capacity for achievement growth over time.

In a recently published chapter in the book Leading Holistically: How Schools, Districts, and States Improve Systemically, Alex J. Bowers, Xinyu Ni, and Jennifer Esswein address these critiques by applying hierarchical linear growth modeling to all school districts in the states of Ohio and Texas, identifying districts that significantly outperform or underperform their contexts, demographics, and resources based on seven years of longitudinal performance. The study builds on the previous literature on district “site-selection” models (Bowers, 2008, 2010, 2015) and applies the framework to two states with very different educational histories and challenges. It identifies 15 outperforming districts in Ohio and 32 outperforming districts in Texas, demonstrating the generalizability and practicality of this modeling framework for District Effectiveness Research (DER).
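The general logic of this kind of site-selection growth modeling can be illustrated with a simplified sketch on synthetic data: fit a multilevel model of district performance over time with a contextual covariate, then flag districts whose district-level effects sit well above what their context predicts. This is only an illustration of the approach, not the chapter’s actual models (which are richer, with growth slopes and multiple demographic and resource covariates); all variable names and the two-standard-deviation cutoff are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated panel: 100 districts, each observed over 7 academic years.
n_districts, n_years = 100, 7
district = np.repeat(np.arange(n_districts), n_years)
year = np.tile(np.arange(n_years), n_districts)
pct_disadv = np.repeat(rng.uniform(0, 1, n_districts), n_years)   # contextual covariate
dist_effect = np.repeat(rng.normal(0, 3, n_districts), n_years)   # true district effect
pi_score = (90 + 0.5 * year - 10 * pct_disadv
            + dist_effect + rng.normal(0, 1, district.size))

df = pd.DataFrame({"district": district, "year": year,
                   "pct_disadv": pct_disadv, "pi_score": pi_score})

# Level 1: linear growth over time; level 2: random intercepts by district,
# with the covariate absorbing contextual differences between districts.
model = smf.mixedlm("pi_score ~ year + pct_disadv", df, groups=df["district"])
fit = model.fit()

# Empirical Bayes district effects: districts whose effect exceeds roughly
# two standard deviations would be flagged as outperforming their context.
effects = pd.Series({g: re.iloc[0] for g, re in fit.random_effects.items()})
cut = 2 * effects.std()
outperforming = sorted(effects[effects > cut].index)
print(len(outperforming), "districts flagged as outperforming")
```

Districts flagged this way are outperforming relative to what their demographics and trajectory would predict, which is exactly why the flags can disagree with a state’s raw letter grade.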

Figure 1.1 and Figure 1.2 display each Ohio and Texas school district’s performance index (PI) score over seven academic years; each line represents the trajectory of one school district. District PI scores are calculated as an average across all subjects (math, reading, writing, social studies, and science in Ohio; reading and writing in Texas) and across tested grades (grades 3, 8, and 10 in Ohio; grades 3 through 12 in Texas). The analysis includes 608 school districts in Ohio and 1,028 in Texas. As the trajectory plots show, district performance in both states increased slightly over the seven years.
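The PI score for a district-year is described above as an average across subjects and tested grades. A minimal sketch of that aggregation is shown below; the column names and score values are illustrative, not the states’ actual PI formulas.

```python
import pandas as pd

# Illustrative subject-by-grade scores for two hypothetical districts.
scores = pd.DataFrame({
    "district": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "year":     [2010] * 8,
    "subject":  ["math", "reading", "math", "reading"] * 2,
    "grade":    [3, 3, 8, 8] * 2,
    "pi":       [95.0, 90.0, 85.0, 90.0, 70.0, 75.0, 80.0, 75.0],
})

# One PI value per district per year: the mean over every subject-grade cell.
district_pi = scores.groupby(["district", "year"])["pi"].mean().reset_index()
print(district_pi)
```

Each row of `district_pi` then supplies one point on a district’s seven-year trajectory.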

Figure 2.1 and Figure 2.2 present the model-identified “outperforming” school districts in Ohio and Texas. The fifteen school districts above the top line in Figure 2.1 were identified as “outperforming” in Ohio, while the thirty-two school districts above the top line in Figure 2.2 were identified as “outperforming” in Texas. School districts between the two lines were considered “at the norm,” and districts below the bottom line were considered “underperforming.” In the plots, each school district is marked according to its state-assigned letter grade, with “+” representing an A and “×” representing an F (the 2010-2011 rating by the Ohio Department of Education (ODE) in Figure 2.1, and the 2011-2012 district rating by the Texas Education Agency (TEA) in Figure 2.2). As the plots show, several school districts in both states were identified as “outperforming” by the model even though the state assigned them a low letter grade such as a “D” or “F”.

To compare the model results with state accountability indicators, the study provides a quadrant plot for each state in Figure 3.1 and Figure 3.2, where the x-axis represents school district performance on state accountability assessment indicators and the y-axis represents school district performance as identified by the model; larger values represent better performance in both frameworks. Dashed lines separate each plot into four quadrants: upper left (low-high), upper right (high-high), lower left (low-low), and lower right (high-low). Districts in the high-high quadrant (top right) are identified as “outperforming” by the model and rated highly by their state, so the two frameworks agree; the same holds for districts in the low-low quadrant (bottom left). For districts in the low-high quadrant (upper left), however, the model indicates that they are outperforming while the state accountability indicator rates them below average. Conversely, districts in the high-low quadrant (lower right) are above average on the state accountability indicator but identified as “underperforming” by the model.
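The quadrant labeling just described amounts to binning each district by its position relative to the average on both axes. A minimal sketch, where the function name, score scales, and cutoff values are all assumptions:

```python
def quadrant(state_score, model_score, state_mean, model_mean):
    """Label a district '<state side>-<model side>' relative to the averages,
    e.g. 'low-high' = below-average state rating but above-average
    model-estimated performance."""
    x = "high" if state_score >= state_mean else "low"
    y = "high" if model_score >= model_mean else "low"
    return f"{x}-{y}"

# Agreement cases are high-high and low-low; the interesting disagreement
# cases are low-high and high-low.
print(quadrant(80, 1.2, 75, 0.0))   # both frameworks agree
print(quadrant(60, 1.5, 75, 0.0))   # model sees outperformance, state does not
```

Districts landing in the two disagreement quadrants are the ones the authors single out for further investigation.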

The authors do not argue for replacing state accountability systems; rather, this set of models and visualizations provides a new means for practitioners, researchers, and policymakers to identify long-term outperforming or underperforming district performance trends in their states. Because the HLM approach takes advantage of long-term growth while most state systems rely on single-year performance for AYP status, the authors encourage further investigation of discrepancies between state-determined ratings and HLM results, to better understand why a district received different ratings from the two frameworks. Such investigations can help a state identify schools and districts that better reflect the original intentions of its system. Changes to state calculations could include creating conditions that honor the growth revealed by the HLM, or excluding indicators that unintentionally cause disagreement between the two methods.

Finally, the authors encourage next-step studies that gather leaders and central office personnel from the 15 districts identified as outperforming in Ohio and the 32 districts in Texas for a multi-day workshop to identify potentially generalizable improvement approaches for further investigation and qualitative deep dives. Additionally, as the authors argue, districts at or below the norm could pair with outperforming districts serving similar student populations to understand their processes and foci, learning what is transferable so as to improve school district effectiveness systematically.


Bowers, A. J., Ni, X., & Esswein, J. (2018). Using Hierarchical Growth Modeling to Promote District Systematic Improvement in Ohio and Texas. In H. Shaked, C. Schechter, & A. Daly (Eds.), Leading Holistically: How Schools, Districts, and States Improve Systemically (pp. 77-100). Routledge.
