- Flashing lights might cure Alzheimer’s, according to a paper in Nature.
- There is no paradox: being obese is definitively bad for you.
- Class attendance predicts success in college.
- Barbra Streisand had her dog cloned, more than once.
- Contrary to what we have been told, stopping or reducing dietary fiber intake can reduce constipation.
- Fasting can help prevent and treat cancer. Sadly, my wife would not let me fast if I wanted to (I’m quite thin as it is).
- If you cannot fast, maybe you can take aspirin: Aspirin mimics some of the effects of caloric restriction.
- Mitochondria, the power plants of your cells, run at a temperature of 50°C.
- Apparently, nobody knows how airplanes fly. (Credit: Leonid Boytsov)
- Another well-established psychology result bites the dust:
Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (professor) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (soccer hooligans). (…) The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the professor category and those primed with the hooligan category (0.14%) and no moderation by gender.
Psychology as a field is in big trouble.
- An old reference that should serve as a good reminder not to trust what you read:
Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data. Here, we used resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses. Using this null data with different experimental designs, we estimate the incidence of significant results. In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. (…) Our results suggest that the principal cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape.
I like this example because it is very common for statisticians to build into their models the assumption that the data follow some prescribed distribution. Real life is complicated and rarely behaves like our models.
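To illustrate the general point, here is a minimal simulation sketch (plain NumPy; the sample size, autocorrelation strength, and number of simulated experiments are illustrative choices, not taken from the fMRI paper). A standard t-test assumes independent Gaussian observations; if the "null" data is in fact autocorrelated, much like the spatial autocorrelation at issue in the fMRI study, the false-positive rate climbs far above the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(42)

def false_positive_rate(n_experiments, n, rho, t_crit=2.093):
    """Fraction of null experiments where a one-sample t-test rejects.

    Each experiment draws n observations from a zero-mean AR(1)
    process with autocorrelation rho (rho=0 gives i.i.d. data).
    t_crit=2.093 is the two-sided 5% critical value for df=19.
    """
    hits = 0
    for _ in range(n_experiments):
        noise = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = noise[0]
        for i in range(1, n):
            # Stationary AR(1): marginal variance stays 1.
            x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * noise[i]
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        if abs(t) > t_crit:
            hits += 1
    return hits / n_experiments

fpr_iid = false_positive_rate(5000, 20, rho=0.0)   # assumptions hold
fpr_corr = false_positive_rate(5000, 20, rho=0.9)  # assumptions violated
print(f"i.i.d. data: {fpr_iid:.3f}, autocorrelated data: {fpr_corr:.3f}")
```

With independent data the rejection rate sits near the advertised 5%, but with strongly autocorrelated data the same test rejects a large fraction of the time, even though there is no effect at all: the model's independence assumption, not the data, produced the "significant" results.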
- We are setting up a mobile phone network on the Moon.
- The state of California will authorize fully automated cars on its roads.
- The cells in your heart do not regenerate. Scientists have found that by turning four genes “on”, they can get the cells to divide and maybe regenerate your heart.
- PhD students face significant mental health problems.
Concerning “Psychology as a field is in big trouble.”
Have you read “The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice” by Chris Chambers? Very interesting… and not only for psychologists!
I haven’t read this manifesto. I’ll hunt it down.
While it is undeniably true that people who are obese have higher risks of disease and, on average, shorter life spans, has this been separated from lifestyle factors such as exercise or poor eating habits (processed vs. whole foods, etc.)? I suspect that the risks differ for the obese poor and the obese rich as well.
There are also the increased dangers associated with yo-yo dieting (that have been documented since 1988 if not earlier). Are the subjects in question subjecting themselves to this factor and thus placing themselves at increased risk?
Anywhere in science that people draw a line between an observation and an outcome, I become suspicious.
Hi Daniel,
What do you mean precisely by saying that psychology as a field is in big trouble? What’s the problem with the article you are quoting?
Highly cited and widely reported results in psychology do not hold up under reproduction.
Psychology comprises several disciplinary sub-fields, among which is social psychology, the field concerned by the article you quote. In each field there are competing theories, giving rise to experimental research aimed at empirically validating the theory in question. “Priming,” the method used in the cited article, is widely used in cognitive psychology, social psychology, psycholinguistics, cultural psychology, etc.
In the field of social psychology, there is a significant body of findings on social priming, suggesting that social knowledge (e.g., stereotypes) is spontaneously and immediately activated in memory. This automatically activated knowledge forms and influences people’s impressions, judgments, and feelings (see Ferguson and Bargh, 2004).
The research of Dijksterhuis and van Knippenberg (1998), which failed to be replicated in the large-scale replication project that you report, tested a bold assumption: that automatic and unconscious knowledge activation can influence complex reasoning processes (hence higher psychological processes). Their results would have added a “brick” to the broader theory according to which complex behavior can be automatically induced or driven by knowledge incidentally activated during perception. Simply stated, these researchers hypothesized that the priming effect (activation of complex social constructs) might extend beyond judgment to behavior.
But, contrary to what you seem to suggest (that this is a widely validated result), even within this theoretical approach their hypothesis was “daring,” and it conflicted with other theories of complex cognitive processes. That is why the Dijksterhuis and van Knippenberg (1998) results were surprising and quickly “tested” by other researchers, who either succeeded in reproducing them (e.g., Bry et al., 2008), refined the understanding of the underlying mechanisms (e.g., LeBoeuf and Estes, 2004), or failed to replicate them (e.g., Shanks et al., 2013). Indeed, even within the framework of their underlying theory, it is accepted that the mechanisms behind social priming phenomena are multiple and that experiments must better disentangle the different factors so that the results can enrich, refine, and even reconstruct the underlying theory.
These considerations lead me to stress the difference between studies that aim at “direct” replication of an empirical result and studies aimed at the replication of a mechanism.
Here too, there are different schools of thought about methodologies and replication goals. This issue has been widely discussed in leading psychology journals, and various replication initiatives have been put in place (an excellent presentation is that of Stroebe and Strack (2014), which also takes Dijksterhuis and van Knippenberg (1998) as an example).
The study you are quoting is part of such an initiative, and it testifies, in my opinion, not to the weakness of the scientific approach in psychology but to its strength, since it proves that psychology can self-correct by putting in place proper mechanisms to improve.
I have not been able to access the full text of this study, but based on the abstract, the reported studies aimed at direct replication, and they invalidated the results of Dijksterhuis and van Knippenberg (1998). Their results thus allow social priming research to advance, and they call for greater focus on the mechanisms underlying the apparent independence of conscious intention and actual behavior, as argued by Ferguson and Bargh (2004) and Wheeler and DeMarree (2009). Moreover, it is a significant contribution because it provides arguments to the proponents of theories that value the intentional and conscious control of complex behavior. To conclude, the results of this large-scale replication are “problematic” for one theory but not for another (although the question of direct vs. conceptual replication would undoubtedly be raised again). Until then, nothing could be more normal in Popperian science 😉
(…) it proves that psychology can self-correct by putting in place proper mechanisms to improve.
The study I quote is pre-registered and multi-lab. This means that before you run the experiments, you publish your methodology, your hypothesis, and your statistical tests. Then several independent laboratories run the experiment, as per the registered methodology, and then the results are published.
If this were a common practice in psychology, then I would agree with you that one could be hopeful about the field.
Setting aside the multi-lab part, how common is pre-registration in psychology? Can you point me to a database of pre-registered experiments?
There is an interesting “research digest” that I quote:
See also:
Registered Replication Reports
Multi-lab, high-quality replications of important experiments in psychological science along with comments by the authors of the original studies.
https://www.psychologicalscience.org/publications/replication
There is a Faustian reading of this sentence.
Consider an analogy…
“It may seem like a disappointing day for politics when politicians keep being caught in corruption charges, but the opposite is true.”
I understand where they are coming from. The fact remains that if you pick a highly cited psychology article at random, even one that got lots of publicity and a TED talk, chances are good that the work is not reproducible. That is, it is wrong.
You may choose to feel good about the fact that, in a handful of cases, two decades later, people will try to reproduce the work and be able to publish a failure-to-replicate study. That it makes people feel good is an indication of how low they set the bar.
As in my corrupt-politician analogy, you can choose to feel good about the fact that some of the politicians get caught, even if it takes decades. That’s certainly better than some alternatives. But you still don’t have honesty!
There are thousands upon thousands of new psychology studies published each year. What fraction of them will be tested during a reproduction study? Probably much less than 1%.
What are these researchers doing? The obvious priority is to focus on reproduction. That is not happening. Why not?
The link https://www.psychologicalscience.org/publications/replication/ongoing-projects is interesting.
You have, at best, four ongoing replication studies right now. I say at best because I cannot find complete documentation for any of these registered studies.
Then you have 5 published replication reports. Five.
If this site is anything close to a good match for reality, it is deeply depressing.