Social science findings in fields from psychology to economics may be plagued by a set of harmful methodological mistakes, according to a paper published by an interdisciplinary group of academics.
The article, titled “Promoting Transparency in Social Science Research,” examined some of the most common errors social science researchers make when executing and analyzing their experiments, errors that in turn lead to incorrect conclusions. The authors — including four UC Berkeley professors — urged scholars to adopt a more rigorous set of guidelines for their work in order to improve the accuracy of their findings.
“There is such a pressure in academia to consistently publish papers that people feel — whether they’re aware of it or not — that producing publishable research is more important than producing credible research,” said Kevin Esterling, co-author of the paper and a professor of political science at UC Riverside.
In the paper, published in Science earlier this month, the authors identified three major problems with social science research methods: failure to disclose information about all aspects of a study, including how many times an experiment was run; failure to release all data after publication; and failure to specify in advance which hypothesis will be tested.
If, for example, scientists do not commit in advance to investigating one specific correlation, they may overestimate the significance of correlations that arise purely by chance, explained Leif Nelson, co-author of the paper and an associate professor at the Haas School of Business.
“It’s not a problem of being deceptive or shifty but really the realities of the circumstance,” Nelson said. “When there are so many different ways of looking at a problem, it’s difficult to choose which is correct. And if scientific literature contains findings that aren’t true, it makes it hard for scientific research to be cumulative.”
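To illustrate the statistical point Nelson describes (this example is illustrative and not drawn from the paper): if a researcher tests 20 unrelated hypotheses at the conventional 5 percent significance level, the chance of at least one false positive is 1 - 0.95^20, or roughly 64 percent. The minimal Python sketch below makes the effect concrete; the choice of 20 tests, 100 subjects, and all variable names are assumptions for the sake of the example.

```python
# Illustrative sketch (not from the paper): testing many hypotheses on
# pure noise still produces "significant" correlations at the 5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_tests, alpha = 100, 20, 0.05  # assumed values for illustration

false_positives = 0
for _ in range(n_tests):
    # Two unrelated random variables: any correlation between them is chance.
    x = rng.normal(size=n_subjects)
    y = rng.normal(size=n_subjects)
    r, p = stats.pearsonr(x, y)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null tests came out 'significant'")
# Expected count: alpha * n_tests = 1 false positive per run;
# P(at least one) = 1 - (1 - alpha)**n_tests, about 0.64 here.
```

This is why pre-registering a single hypothesis, one of the practices the authors advocate, matters: it removes the freedom to test many correlations and report only the ones that happen to clear the significance bar.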
The urgency of promoting research accuracy stems from several recent instances in which scientists attempted to replicate lauded past studies but failed to reach the same conclusions.
One example is a famous study conducted by psychologist John Bargh in 1996 that examined the effects of unconscious mental priming. When researchers attempted to replicate the study 15 years later, they were unable to reproduce the original results. According to Nelson, this suggests that much widely accepted research may not be robust or even accurate.
“Across all these fields, the output of research feeds into very important decision-making,” said Edward Miguel, lead author and campus professor of economics. “Firms, investors and households make decisions based on the body of knowledge that’s out there, and we have to know that the studies we’re using to back up our claims are valid.”
Several of the authors are members of the Berkeley Initiative for Transparency in the Social Sciences, a network dedicated to improving the transparency and credibility of social science research. The authors hope that by sparking a dialogue among academics, they can help future researchers avoid the pitfalls of their predecessors.