
Stop bashing methods. Help create a better world. #BMJnoQual

Randall F. Clemens

Originally posted at www.21stcenturyscholar.org

1.

Last year, the British Medical Journal rejected an article. Such an action does not ordinarily generate attention; editors reject articles every day. The author, however, tweeted the rejection: “Thank you for sending us your paper. We read it with interest but I am sorry to say that qualitative studies are an extremely low priority for the BMJ. Our research shows that they are not as widely accessed, downloaded or cited as other research.” The tweet produced a lively thread of responses, including “shocking and shameful,” “epistemological oppression,” and “I guess nothing qualitative ever happens in a clinical setting.”

As scholars, we learn where to submit (and where not to submit) our work. I conduct qualitative research with policy implications. I know, however, that if I submit an ethnographic article to Educational Evaluation and Policy Analysis, the editor is likely to drop-kick it swiftly back to me. Similarly, I know my quantitatively oriented friends will not be submitting to Qualitative Inquiry any time soon.

Without getting into nuances like impact factors, tenure decisions, and research standards, most qualitative researchers understand the landscape. To be honest, when I read the BMJ rejection note, I thought it was quite civil. I have received a few rejections that would make that rejection blush. I once submitted a life history on a late Friday evening. Within an hour, I received a rambling three-paragraph response from the editor, perhaps assisted by a nightcap or two, stating that anything with an N of one is neither research nor policy-relevant. Okie dokie. Thanks for the quality feedback. On to the next journal.

2.

Last week, Stephen Porter, a professor of higher education at North Carolina State, published a strongly worded blog post denouncing the #BMJnoQual incident in particular and qualitative research in general.

I do not know Porter. He is a senior scholar with a fine record and numerous accomplishments. He is far more accomplished than I am, so take everything I write here with a grain of salt. I am sure he is a reasoned, thoughtful person. However, even under a generous interpretation, the blog post demonstrates a provincial understanding of qualitative research and a paternalistic, mean-spirited tone toward qualitative scholars.

Consider a few points:

(1) The title of the blog—“Speaking Truth to Power about Qualitative Research”—is ironic. Whether intentional or not, the blogger alludes to Aaron Wildavsky’s classic Speaking Truth to Power: The Art and Craft of Policy Analysis. Wildavsky, a hugely influential policy scholar, argued that policy analysts need to account for the interpretive nature of policy-making—something that qualitative work is particularly well-suited to accomplish.

(2) Porter argues that qualitative research has little impact and will have less. I agree that qualitative researchers need to redouble their efforts to collaborate with scholars and practitioners across disciplines and methodologies to produce and advocate for rigorous, policy-relevant research. NSF isn’t funding many six- and seven-figure qual studies. But many examples exist—as models for early-career faculty to follow—of scholars who have achieved wide impact with well-designed studies that include qual methods. I’m fortunate to have two of them as mentors: Bill and Yvonna. What’s more, look at the list of AERA presidents over the last few decades. That’s an awful lot of impactful and innovative “dinosaurs,” a term Porter uses to describe qualitative researchers.

The blogger also introduces the technology argument: tech will enable scholars to create, gather, and analyze larger and larger datasets. The argument works both ways. Technology and social media will provide new opportunities for qualitative researchers to collect, synthesize, analyze, present, and share data. Big data will magnify, not lessen, the need for interpretive and site-based inquiry. As we’ve learned from previous examples, policies based on one-sided approaches are often ineffective. Some even reinscribe the same inequities they seek to remedy.

(3) The idea that qualitative research does not appear in well-read publications is fiction. I have read numerous articles and blog posts grounded in qualitative research in the NY Times and Washington Post. I am a sociologist of education who examines neighborhood-level issues. For a recent example of impact, search Google for MacArthur Fellow Matthew Desmond’s newest book Evicted, based on ethnographic research. And, while federally funded qualitative studies in education are rare, numerous foundations do fund multiple methodologies. I’m thinking of the Russell Sage Foundation, which has funded, published, and promoted significant projects that have reached beyond academia and into public and policy discourses.

(4) Porter presents a straw-man argument about generalization and qualitative research. Of course, qualitative research can’t (and shouldn’t) generalize—while a conversation for another time, most quantitative work shouldn’t either. And yet, from rigorous, well-designed qualitative studies, scholars can and have provided actionable findings and policy implications. At school and community levels, researchers and participants creatively and meaningfully employ strategies like action research and participatory action research to improve practice, research, and policy.

(5) The blogger writes, “Some qual researchers insist there are multiple realities. What do you think the average person, who lives in a single reality like most of us, thinks of this idea?” The answer is that they would agree, though probably not in the pedantic words of academics. As accomplished scholars like Gloria Ladson-Billings, Rich Milner, Luis Moll, Michelle Fine, and numerous others have demonstrated, education research and policies often propagate white, privileged perspectives. Deficit-based policies represent dominant beliefs and assumptions about whose knowledge matters and how it is defined and measured. The “average” person who has experienced persistent poverty and endured racist policies would probably argue that their voices are not heard and their experiences are not represented in policy discussions and designs.

Critical (and necessary) exchanges about epistemologies and ontologies and axiologies and other fancy words and concepts are easy targets. From a policy perspective, overly theoretical arguments unmoored from practice—Latour refers to these stances as “fairy positions”—become counterproductive. I agree that navel-gazing rarely influences policy design. But, skilled academics have the ability to connect theory and practice, something the aforementioned scholars (who have employed qualitative methods) have done to great and consequential success.

I could go on, but I won’t. Again, I do not know Porter. I’m sure he has reasons for his fervent and seemingly unyielding opinions about qualitative research. He certainly has years of experiences to inform his perspective. But, a narrow approach to research—and, by extension, knowledge, beliefs, values, assumptions, etc.—is misguided, at best, and harmful, at worst. When has a one-size-fits-all approach to education ever worked, particularly for underrepresented populations? Social inquiry and policy design require a plurality of approaches. Each has strengths and limitations. We have a large toolbox of methods to examine complex, intractable issues. Why would we limit ourselves to just one?