
kme : publication   12

sometimes i'm wrong: life after bem

let's start with this: bem recommends writing 'the article that makes the most sense now that you have seen the results' (rather than 'the article you planned to write when you designed your study'). it is pretty clear from the rest of this section that bem is basically telling us to invent whatever a priori hypotheses turn out to be supported by our results. indeed, he says this pretty explicitly on the next page:

'contrary to the conventional wisdom, science does not care how clever or clairvoyant you were at guessing your results ahead of time. scientific integrity does not require you to lead your readers through all your wrong-headed hunches only to show - voila! - they were wrongheaded.'

actually, science does care. and scientific integrity does require something along those lines. not that you tell us about all your wrong ideas, but that you not claim that you had the right idea all along if you didn't. if science didn't care, pre-registration and bayesian statistics would not be enjoying the popularity they are today.

the tension here is between sticking to the (messy) facts, and doing some polishing and interpreting for the reader so she doesn't have to wade through a giant pile of unstructured findings. i will give bem credit: he seems aware of this tension. later in the chapter, he writes:

'[...] be willing to accept negative or unexpected results without a tortured attempt to explain them away. do not make up long, involved, pretzel-shaped theories to account for every hiccup in the data.'

so he acknowledges that the results don't need to be perfect (though he seems more worried that the story you would need to tell to explain the imperfections would itself be ugly and tortured, rather than that it's important to disclose the imperfections for the sake of scientific rigor).

another (related) step is to stop claiming to have had a priori hypotheses that we didn't actually have. however, this step is trickier because editors and reviewers still want to see the 'correct' hypothesis up front. there are some good reasons for this - it is kind of a waste of time to read an entire intro building up to a hypothesis that turns out to be false. one handy solution for this is to select research questions that are interesting whichever way the results come out - then the intro can present the two competing hypotheses and why each result would be interesting/informative.*** i am told that in some areas of research this is just not possible. all the more reason for everyone to become a personality psychologist. come join us, we never know which way our results will turn out!

From the comments:
When I first encountered Bem's paper I was (quite frankly) appalled. But others had directed me to read it, and I saw no criticisms of it in the literature, so I thought "well, that just must be the way things work."

Then (a few months later) I read a wonderful paper by Arina K. Bones (a great psychologist in her own right) and Navin R. Johnson. On p. 409 the paper reads: "We did not know whether to predict a rapid or slow change in animate associations because of conflicting existing evidence about the malleability of implicit cognition. The fragility of the hypothesis did not pose a difficulty because we proposed a priori to write the article as a good story and as if the ultimate results were anticipated all along (Bem, 2003). After all, this is psychology--only actual scientists would want to read how the process of scientific discovery actually occurred."

Exactly. And now, 7 years later, your post is the second time I have seen Bem's advice criticized. I have almost surely missed another criticism of it here or there, but I am still amazed that this is considered mandatory reading, nay, advice, for graduate students in psychology.

I found this quote from Albert Einstein on the internet one day (i.e., I cannot vouch for the accuracy of its attribution; nonetheless I find it useful): "Anyone who doesn't take truth seriously in small matters cannot be trusted in large ones either."

Sometimes science is full of boring details. So be it.
writing  science  research  publication 
november 2017 by kme
Sokal affair - Wikipedia
In an interview on the U.S. radio program All Things Considered, Sokal said he was inspired to submit the bogus article after reading Higher Superstition (1994), in which authors Paul R. Gross and Norman Levitt claim that some humanities journals would publish anything as long as it had "the proper leftist thought" and quoted (or was written by) well-known leftist thinkers.[6][7]
science  publication  ethics  fakescience 
february 2017 by kme
publications - Can I add a baby as a co-author of a scientific paper, to protest against co-authors who haven't made any contribution? - Academia Stack Exchange

IMO using a baby is worse than using a cat or a made-up name. Someday that baby will grow up, and then you have a person who really exists falsely attributed on the paper.
research  publication  coauthors 
november 2015 by kme
