Two new articles caught my eye over the weekend. The first was Alan Borsuk’s latest column on the state of testing in Wisconsin, which he aptly describes as being in a “state of limbo.” Borsuk notes that in 2015-2016 Wisconsin students will take their third different test in three years (and which test will be used is still undecided). In addition, the school report card will be taking a year off. I guess that means my son’s school will continue to “meet few expectations” for a while.
The other article is an academic piece entitled Value-Added Measures: Undermined Intentions and Exacerbated Inequities by Kimberly Kappler Hewitt. In it, Dr. Hewitt reports the results of a survey on value-added testing that she administered to teachers in a decent-sized North Carolina school district. Generally, she found that the use of value-added measures in teacher effectiveness models increased perceived inequities and undermined teacher morale and teaching methods.
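For readers who have never seen one, a value-added estimate boils down to comparing students’ actual scores against a statistical expectation built from their prior scores. The sketch below is a deliberately bare-bones illustration with invented numbers; real value-added models are far more elaborate (pooling years, schools, and covariates), and this is not the specific model used in the district Hewitt studied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical district-wide scores (invented, purely illustrative):
# current-year scores drift up from prior-year scores, with noise.
district_prior = rng.normal(450, 40, size=500)
district_current = 0.9 * district_prior + 60 + rng.normal(0, 15, size=500)

# Fit the district-wide expectation: a line predicting this year's
# score from last year's. Real models adjust for many more factors.
slope, intercept = np.polyfit(district_prior, district_current, deg=1)

# One teacher's students (again, invented numbers).
teacher_prior = np.array([410, 455, 390, 500, 470, 430, 445, 480])
teacher_current = np.array([430, 470, 400, 525, 480, 445, 455, 505])

# "Value added" is the average amount by which the teacher's students
# beat (or missed) the district expectation -- one fragile number
# standing in for a very complex concept.
expected = slope * teacher_prior + intercept
value_added = np.mean(teacher_current - expected)
print(f"Estimated value added: {value_added:+.2f} score points")
```

Even in this toy version, the fragility is visible: the final number depends entirely on how well the expectation line captures everything else that shapes a score, which is exactly the measurement problem at issue below.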
Reading these two pieces on the same day got me thinking about what I call the analytics paradox: it is impossible to perfectly and objectively measure public performance, yet the public sector nonetheless needs analytics to improve public performance. As I tell my students in my research methods course, analytics are nothing more than attempts to measure a concept. So, in K-12 education, a standardized test score is an attempt to translate student performance (or teacher performance, or school performance) into a number (or a grade, or a percentage). Obviously, it is impossible to perfectly translate a concept as complex as student performance into such a simple variable. Yet there is huge pressure to try. And I think that is good.
Why? As I wrote a few years back in a report I co-authored for the Wisconsin Policy Research Institute, data, in the hands of public employees, can be actionable intelligence used to improve the performance of public organizations. But if that data is viewed as an attempt to undermine public employees, as the Hewitt piece illustrates, the chances of it having a positive impact on public performance go way down.
Going back to my first paragraph, I mentioned that my son’s school is deemed to be meeting few expectations under the state report card system. What exactly does that mean? A closer look shows the low score is driven mostly by the school’s failure to close reading and math achievement gaps. On its face, that is good to know. But the story is more complex. The school is only K-5, which leaves very little tested time (grades 3-5) in which to close the gaps. It also has a fairly large ESL population, in addition to being majority low-income. My read, as a parent and taxpayer, is that it is important to know where the school can improve, but also that the gaps are unsurprising given the population served and the limited years of attendance.
On the heels of the push to turn over control of low-performing schools in Milwaukee to outside providers, I can see how the school district might view this report card as a threat. As a researcher, I would rather view it as a baseline against which to determine whether the school is making progress. But, as Borsuk articulates, that is impossible because Wisconsin keeps changing the tests.
The lessons for policy makers and public managers here are actually fairly simple. First, policy makers need to make performance data widely available to public managers in a format they can use on the ground. Simplicity, like letter grades for schools, sounds nice, but it doesn’t mean all that much to the practitioner. Second, policy makers need to be consistent in how they measure public performance. Three years of different tests means three years of data with very limited use. There is no perfect analytic, and the search for one undermines the usefulness of what is available. Last, practitioners need to continue to embrace analytics (the reality is that public employees constantly make good use of data) and demonstrate (celebrate?) exactly how they are using data to make evidence-based decisions.
It is disheartening to watch how politicized and divisive the boring world of K-12 performance data has become. Creating consistent, useful systems with high face validity need not be this complicated.