
Meta-analysis study indicates we publish more positive results


So meta: meta-analyses will only produce more reliable results if the studies are good.

      


A meta-analysis is a way to formalize the process of checking results against each other. It takes the results of multiple studies and combines them, increasing the statistical power of the analysis. This may cause exciting results seen in a few small studies to vanish into statistical noise, or it can tease out a weak effect that’s completely lost in more limited studies.
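At its core, the combining step is just inverse-variance weighting: each study’s effect estimate is weighted by how precise it is, and the pooled estimate comes out more precise than any single input. The sketch below shows the arithmetic with invented effect sizes and standard errors; it’s a minimal fixed-effect illustration, not the method of any particular meta-analysis discussed here.

    # Minimal fixed-effect meta-analysis via inverse-variance weighting.
    # All numbers are invented for illustration.
    import numpy as np
    from scipy import stats

    # Effect estimates (e.g., standardized mean differences) and their
    # standard errors from five hypothetical small studies.
    effects = np.array([0.42, 0.15, 0.38, 0.05, 0.27])
    std_errs = np.array([0.20, 0.18, 0.25, 0.15, 0.22])

    # Weight each study by the inverse of its variance, so that more
    # precise studies count for more.
    weights = 1.0 / std_errs**2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    # The pooled standard error is smaller than any single study's,
    # which is where the extra statistical power comes from.
    z = pooled / pooled_se
    p = 2 * stats.norm.sf(abs(z))
    print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}, p = {p:.4f}")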

But a meta-analysis only works its magic if the underlying data is solid. And a new study that looks at multiple meta-analyses (a meta-meta-analysis?) suggests that one of those factors, our tendency to publish results that support hypotheses, is making the underlying data less solid than we’d like.

Publication bias

It’s possible for publication bias to be a form of research misconduct. If a researcher is convinced of their hypothesis, they might actively avoid publishing any results that would undercut their own ideas. But there are plenty of other ways for publication bias to set in. Researchers who find a weaker effect might hold off on publishing in the hope that further research will be more convincing. Journals also have a tendency to favor publishing positive results, ones where a hypothesis is confirmed, and to avoid publishing studies that don’t see any effect at all. Researchers, being aware of this, might adjust the publications they submit accordingly.

As a result, we might expect to see a bias toward the publication of positive results and stronger effects. And if a meta-analysis is done using results with these biases, it will end up with a similar bias, despite its larger statistical power.
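A toy simulation makes the mechanism concrete. Suppose a true effect of 0.1 is studied in hundreds of small experiments, but only the ones that hit statistical significance get published; a meta-analysis of the survivors then dutifully pools a badly inflated estimate. This is an invented illustration of the bias, not the new study’s actual method.

    # Toy simulation of publication bias: a true effect of 0.1, but only
    # studies reaching significance get "published." Invented illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    true_effect, n_per_study, n_studies = 0.1, 30, 500

    pub_effects, pub_ses = [], []
    for _ in range(n_studies):
        sample = rng.normal(true_effect, 1.0, n_per_study)
        est = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(n_per_study)
        # Crude publication filter: only significant positive results survive.
        if est / se > 1.96:
            pub_effects.append(est)
            pub_ses.append(se)

    # Inverse-variance pooling of the published subset.
    w = 1.0 / np.array(pub_ses) ** 2
    pooled = np.sum(w * np.array(pub_effects)) / np.sum(w)
    print(f"true effect: {true_effect}, pooled published estimate: {pooled:.2f}")
    # The pooled estimate comes out several times the true effect,
    # the same flavor of inflation described below.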

While this issue has been recognized by researchers, it’s not clear how to prevent it from being a problem with meta-analyses. It’s not even clear how to tell whether it’s a problem with meta-analyses. But a small team of Scandinavian researchers (Amanda Kvarven, Eirik Strømland, and Magnus Johannesson) has figured out a way.

Their work relies on the fact that several groups have organized direct replications of studies in the behavioral sciences. Collectively, these provide a substantial number of additional test subjects (over 53,000 of them in the replications used), but they aren’t subject to the potential biases that influence regular scientific publications. Taken together, they should provide a reliable measure of what the underlying reality is.

The three researchers searched the literature to identify meta-analyses on the same research questions and came up with 15 of them. From there, it was a simple matter of comparing the effects seen in the meta-analyses to the ones obtained in the replication efforts. If publication bias isn’t having an effect, the two should be substantially similar.

They were not substantially similar.

Almost half the replications saw a statistically significant effect of the same sort seen by the meta-analysis. An equal number saw an effect of the same sort, but the effect was small enough that it didn’t rise to significance. Finally, the one remaining study saw a statistically significant effect that wasn’t present in the meta-analysis.

Further problems appeared when the researchers looked at the size of the effect the different studies identified. The effects seen in the meta-analyses were, on average, three times larger than those seen in the replication studies. This wasn’t caused by a few outliers; instead, a dozen of the 15 topics showed larger effect sizes in the meta-analyses.

All of this is consistent with what you might expect from a publication bias favoring strong positive results. The field had recognized that this might be a problem and developed some statistical tools intended to correct for it. So the researchers reran the meta-analyses using three of these tools. Two of them did not work. The third was effective, but it came at the cost of reducing the statistical power of the meta-analysis; in other words, it eliminated one of the primary reasons for doing a meta-analysis in the first place.
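For a sense of how such a correction can work, here is a sketch of one commonly used tool, the PET (precision-effect test) regression; whether it’s among the three the researchers actually tried is an assumption here, and the study numbers are invented. PET exploits the fact that, under publication bias, less precise studies tend to report bigger effects: regressing effect size on standard error and reading off the intercept predicts what an infinitely precise study would find.

    # PET (precision-effect test) sketch: regress effect sizes on their
    # standard errors; the intercept estimates the effect at SE = 0.
    # Study numbers are invented for illustration.
    import numpy as np

    # Published studies where smaller, noisier studies show bigger effects,
    # the classic signature of publication bias.
    effects = np.array([0.55, 0.48, 0.30, 0.22, 0.15, 0.12])
    ses = np.array([0.25, 0.22, 0.15, 0.12, 0.08, 0.06])

    w = 1.0 / ses**2  # inverse-variance weights, as in a meta-analysis
    X = np.column_stack([np.ones_like(ses), ses])

    # Weighted least squares: beta = (X'WX)^-1 X'Wy.
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * effects))
    print(f"raw pooled effect:    {np.sum(w * effects) / np.sum(w):.2f}")
    print(f"PET-corrected effect: {beta[0]:.2f}")
    # The corrected estimate is much smaller, but it rests on an
    # extrapolation to SE = 0, which is where the loss of power comes in.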

This does not mean that meta-analyses are a failure, or that all research results are unreliable. The work was done in a field, behavioral science, where enough problems had already been recognized to motivate extensive replication studies in the first place. The researchers cite a separate study from the medical literature that compared meta-analyses of a collection of small trials to the outcomes of larger clinical trials that followed. While there was a slight bias toward positive effects there, too, it was quite small, especially in comparison to the differences identified here.

But the study does indicate that the problem of publication bias is a real one. Fortunately, it’s one that could be tackled if journals were more willing to publish papers with negative results. If journals did more to encourage these sorts of studies, researchers would likely be able to provide them with no shortage of negative results.

Aside from the main message of this paper, Kvarven, Strømland, and Johannesson use an additional measure to ensure the robustness of their work. Rather than simply counting anything with a p value of less than 0.05 as significant, they limit that to things with a p value of less than 0.005. They term things in between these two values “suggestive evidence.”
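In code form, that decision rule is simple; here is a sketch using the thresholds as described:

    # Evidence categories as described above: p < 0.005 is significant,
    # p >= 0.05 is not, and anything in between is "suggestive evidence."
    def classify(p: float) -> str:
        if p < 0.005:
            return "significant"
        if p < 0.05:
            return "suggestive evidence"
        return "not significant"

    for p in (0.001, 0.02, 0.2):
        print(p, "->", classify(p))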

Nature Human Behaviour, 2019. DOI: 10.1038/s41562-019-0787-z

                                                    
