I am sitting here enjoying my last day at the American Society for Public Administration's (ASPA) annual research conference and reflecting a bit on the various research presentations I attended. What sticks out to me most is the diversity of our field. We are diverse in subject area: local government, state government, budgeting, theory, non-profit management, networking, and human resources were all well represented. We are diverse in scope, studying broad national questions as well as small organizational ones. We are continuing to become more diverse in gender, race, and nationality.
So why is it that the field of public administration (PA) is experiencing methodological isomorphism? I am referring to the elevation of experimental methodology above all others. We are not there yet, and certainly there were many interesting research approaches represented at ASPA, but I also know some scholars avoid ASPA because of a perceived lack of rigor. And I witnessed a disturbing trend of scholars apologizing that their studies are not experiments. As I stress to my methods students, the right methodology is the one that enables you to answer your research question or solve your problem. Sometimes it is an experiment, sometimes it is a case study, sometimes it is qualitative, and sometimes it is that poor old whipping boy, OLS regression. And sometimes a non-experimental design that cannot support causal inference nonetheless moves a field forward.
I try to be methodologically agnostic as both a researcher and a reviewer. My own articles are methodologically diverse (ranging from simplistic to complex) as a result of the specific questions I ask, the data available, and the state of the literature to which I am contributing. I know some journals/job committees/conferences favor methodological consistency, but yeah, that is not me. Now, there is nothing wrong with experimental design. It is moving PA in interesting directions in areas like behavioral PA. That is great. It becomes a problem when it becomes the only acceptable or most-favored approach. Why? Some of the most important PA questions cannot be answered via an experiment. Some of the realities of governance cannot be simulated in a lab. Some of the narrow questions answered in an experimental setting do not translate into actionable knowledge for practitioners.
An example from the school choice research world, where I also keep a firm foot planted, is illustrative. A body of school choice researchers argue that randomized controlled trials are the gold-standard method of measuring the performance of school choice programs. The phrase "gold-standard study" has even worked its way into mainstream policy debates. The problem is that dozens of gold-standard research studies have failed to answer the larger and more complex question of whether school choice programs are a good idea. They have shown, with some exceptions, that voucher programs lead to small test score gains for voucher users. The studies answered a narrow question conducive to a randomized controlled trial, but missed the larger, more complex question that is as important, or even more so, to policymakers and the public. The gold-standard studies have value, but I wonder how much quality research that could have answered the larger and more complex questions did not occur because of the elevation of randomized controlled trials.
So I hope those working on important PA questions via experimental designs continue to do so. But I also hope those applying other appropriate methodologies to pressing PA questions do not get shut out of opportunities, or feel pressure to unnecessarily change their approaches. Ok. I am now off to moderate a panel and perhaps listen to "Louie Louie," "Hang on Sloopy," "Knocking on Heaven's Door," and maybe a couple of the other classic songs out there built on just three or four of the easiest guitar chords that exist.