A hidden “universe of uncertainty” may underlie most scientific discovery, especially in the social sciences, a new study has found.
When dozens of research teams used the same data set to test the same hypothesis — that immigration reduces support for social policy — they produced completely different results, according to a new study published Oct. 28 in the journal Proceedings of the National Academy of Sciences.
The finding suggests that it can be very difficult to have confidence in results in some of these fields, because even small changes in a researcher's initial analytic choices can lead to drastically different conclusions.
In the new study, Nate Breznau, a postdoctoral researcher at the University of Bremen in Germany, and his colleagues asked 161 researchers from about six dozen research teams to test a common hypothesis: immigration reduces support for government social policy. This question has been asked hundreds of times in the social science literature, and the results have been all over the map, Breznau told Live Science.
As a baseline, they provided the research teams with data from six government policy-related questions in the International Social Survey Program, a large dataset that tracks attitudes toward government policy in 44 countries.
Next, they asked the teams to use logic and prior knowledge to develop models to explain the relationship between immigration and support for government social services.
For example, a group might predict that an increased flow of immigrants to a country increases competition for scarce resources, which, in turn, decreases support for social services. The research teams then had to decide what kinds of data to use to answer the question (for example, net inflow of immigrants to a country, gross domestic product, or average or median income in different regions), as well as the types of statistical analyses they would run.
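To see how such seemingly minor specification choices can flip a conclusion, here is a minimal sketch on simulated data (not the ISSP data, and not any team's actual model): two ordinary least-squares regressions of policy support on immigration, differing only in whether a confounding variable (here, GDP) is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Fully simulated, illustrative data: GDP is a confounder that drives both
# immigration and support, while immigration's direct effect is negative.
gdp = rng.normal(size=n)
immigration = 0.8 * gdp + rng.normal(scale=0.6, size=n)
support = 1.0 * gdp - 0.3 * immigration + rng.normal(scale=0.5, size=n)

def ols_coefs(X, y):
    """Return OLS slope coefficients for y ~ X, with an intercept."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

# Specification A: support ~ immigration (GDP omitted)
coef_a = ols_coefs(immigration.reshape(-1, 1), support)[0]
# Specification B: support ~ immigration + GDP
coef_b = ols_coefs(np.column_stack([immigration, gdp]), support)[0]

print(f"Spec A (no GDP control): {coef_a:+.2f}")  # positive: immigration "raises" support
print(f"Spec B (GDP controlled): {coef_b:+.2f}")  # negative: immigration lowers support
```

The same data set supports opposite conclusions depending on one covariate choice — and the teams in the study faced hundreds of such choices.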
The conclusions of the research teams mirrored the spread of the entire literature: 13.5% said that it was not possible to draw a conclusion, 60.7% said the hypothesis should be rejected, and 28.5% said the hypothesis was supported.
Breznau’s team then used their own statistical analysis to try to understand why different groups came to such different conclusions.
They found that neither bias nor inexperience could explain the variance. Instead, hundreds of different, seemingly minor analytic decisions may have nudged the conclusions one way or the other. Even more surprisingly, no particular set of choices appeared to systematically skew the results, perhaps because there simply wasn't enough data to compare the different models. (The study has a limitation: the authors' own analysis is itself a statistical model and is therefore also subject to this uncertainty.)
It is not clear to what extent this universe of uncertainty afflicts the other sciences; astrophysics, for example, may be easier to model than large-scale human interactions, Breznau said.
For example, there are 86 billion neurons in the human brain and 8 billion people on the planet, and these people all interact in complex social networks.
“There may be fundamental laws that would govern human social and behavioral organization, but we certainly don’t have the tools to identify them,” Breznau told Live Science.
One takeaway from the study is that researchers should spend time refining their hypothesis before moving on to data collection and analysis, Breznau said, and the new study's hypothesis is a perfect example.
“Does immigration undermine support for social policy? That’s a very typical social science hypothesis, but it’s probably too vague to elicit a concrete answer,” he said.
A more specific or targeted question could potentially yield better results, Breznau said.
If you want to see how different variables and modeling choices affected each model's results, you can explore them through the researchers' Shiny app.