Despite the warnings of Donald Trump and Ben Carson, I went ahead and got a flu shot yesterday. Thus far the only side effect is a reduced likelihood of contracting the flu. Though, when I looked in the mirror this morning I noticed I am balding, so maybe it was the flu shot. I might Google it and find out. After all, research in the Internet age seems to consist of searching the Internet for anecdotes that support your conclusion. I’ve had many discussions with otherwise intelligent, rational people that include the phrase, “you just haven’t seen the research I’ve seen—I’ll send you the links.” Yeah…why don’t you not do that.
I tend to be pretty harsh in my thinking about the position of the anti-vaccine folks, and the positions of others who make claims about food safety and the like that really are not supported by science. However, if I am honest, I must admit that policy-makers and interest groups participate in pseudo-science all the time. Education, for example, is an area where anecdotes are constantly used as evidence to support reform efforts shown to have little impact on academic achievement. Perhaps it is just human nature to be moved more by a personal story than by a set of regression results.
I also think it is a side effect of researchers too often being obsessed with the wrong questions or methods. This struck me earlier this week while giving a lecture on theories of public policy making. Many of the theories, which are tested over and over, just do not reflect the actual policy-making process experienced by the practitioner. That does not mean the theories are useless or bad, just that there is a missing step where we, as social scientists, need to connect our questions to reality.
How can we do this? One method is the oft-maligned case study. Though often judged harshly during the peer-review process for not being generalizable, case studies are invaluable in illustrating how a theory applies to reality. Another method is survey research. There is a move in public administration away from certain types of survey research due to the real problem of common-source bias, and the measurement issues created when humans are asked to give their perceptions. But we need these perceptions. Lest we forget, public administration is a practical science where human nature, with all of its volatility and inconsistency, is as important a variable as anything that can be measured directly. Yes, understanding phenomena in public administration would be easier without having to worry about the human element, but that human element is, in my opinion, the most important factor in the field. Sometimes a messy, imperfect, but relevant piece of research is superior to a methodologically perfect piece that is completely detached from, and useless to, the practitioner in the field.
My point here is that the problem of policy-making by anecdote rather than evidence can be addressed by refocusing social science research (in public administration and other relevant fields) to ensure it is applicable and accessible to practitioners. We need to give them something they can use, and include them in the process. It is not fair to complain about the lack of evidence-based policy making and implementation without providing accessible evidence.