
Researchers have already tested YouTube’s algorithms for political bias, Ars Technica

Bias check –

More moderation associated with more hate speech and misinformation, not politics.

      

Zain Humayun

Google logo seen during Google Developer Days (GDD) in Shanghai, China.

In August, President Donald Trump claimed that social media was “totally discriminating against Republican/Conservative voices.” Not much was new about this: for years, conservatives have accused tech companies of political bias. Just last July, Senator Ted Cruz (R-Texas) asked the FTC to investigate the content moderation policies of tech companies like Google. A day after Google’s vice president insisted that YouTube was apolitical, Cruz said that political bias on YouTube was “massive.”

But the data does not back Cruz up — and it’s been available for a while. While the actual policies and procedures for moderating content are often opaque, it is possible to look at the results of moderation and determine if there’s indication of bias there. And, last year, computer scientists decided to do exactly that.

Moderation

Motivated by the long-running argument in Washington, DC, computer scientists at Northeastern University decided to investigate political bias in YouTube’s comment moderation. The team analyzed thousands of comments on YouTube videos. At first glance, comments on right-leaning videos seemed more heavily moderated than those on left-leaning ones. But when the researchers also accounted for factors such as the prevalence of hate speech and misinformation, they found no differences between comment moderation on right- and left-leaning videos.

“There is no political censorship,” said Christo Wilson, one of the co-authors and an associate professor at Northeastern University. “In fact, YouTube appears to just be enforcing their policies against hate speech, which is what they say they’re doing.” Wilson’s collaborators on the paper were graduate students Shan Jiang and Ronald Robertson.

To check for political bias in the way comments were moderated, the team had to know whether a video was right- or left-leaning, whether it contained misinformation or hate speech, and which of its comments were moderated over time.

From fact-checking websites Snopes and PolitiFact, the scientists were able to get a set of YouTube videos that had been labeled true or false. Then, by scanning the comments on those videos twice, six months apart, they could tell which ones had been taken down. They also used natural language processing to identify hate speech in the comments.
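In outline, that second step amounts to a set difference between two snapshots of each video’s comment section. The sketch below is only an illustration of that idea, not the researchers’ actual pipeline; the snapshot format and the function name are assumptions.

```python
# Illustrative sketch: flag comments that were present in the first crawl of a
# video but missing six months later. Snapshots map video IDs to sets of
# comment IDs; both the data format and the function name are hypothetical.

def find_removed_comments(first_crawl: dict[str, set[str]],
                          second_crawl: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per video, the comment IDs that disappeared between crawls."""
    removed = {}
    for video_id, first_ids in first_crawl.items():
        second_ids = second_crawl.get(video_id, set())
        removed[video_id] = first_ids - second_ids
    return removed

# Toy example with made-up IDs:
print(find_removed_comments({"video_a": {"c1", "c2", "c3"}},
                            {"video_a": {"c1", "c3"}}))
# {'video_a': {'c2'}}
```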

To assign their YouTube videos left or right scores, the team made use of an unrelated set of voter records. They checked the voters’ Twitter profiles to see which videos were shared by Democrats and Republicans and assigned partisanship scores accordingly.
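The article does not give the exact scoring rule, but one common way to turn share counts into a left/right score is a normalized difference between the two groups. The sketch below is a hypothetical illustration of that idea only, not the formula the researchers used.

```python
# Hypothetical partisanship score based on how often Democrats vs. Republicans
# shared a video; this is an illustration, not the study's actual scoring rule.

def partisanship_score(dem_shares: int, rep_shares: int) -> float:
    """Return a score in [-1, 1]: -1 = shared only by Democrats (left-leaning),
    +1 = shared only by Republicans (right-leaning), 0 = shared equally."""
    total = dem_shares + rep_shares
    if total == 0:
        return 0.0  # no shares from either group: treat as neutral
    return (rep_shares - dem_shares) / total

print(partisanship_score(dem_shares=90, rep_shares=10))  # -0.8, left-leaning
print(partisanship_score(dem_shares=15, rep_shares=85))  #  0.7, right-leaning
```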

Controls matter

The raw numbers “would seem to suggest that there is this sort of imbalance in terms of how the moderation is happening,” Wilson said. “But then when you dig a little deeper, if you control for other factors like the presence of hate speech and misinformation, all of a sudden, that effect goes away, and there’s an equal amount of moderation going on in the left and the right.”

Kristina Lerman, a computer scientist at the University of Southern California, acknowledged that studies of bias were difficult because the same results could be caused by different factors, known in statistics as confounding variables. Right-leaning videos may simply have attracted stricter comment moderation because they got more dislikes or contained erroneous information or because the comments contained hate speech. Lerman said that Wilson’s team had factored alternative possibilities into their analysis using a statistical method known as propensity score matching and that their analysis looked “sound.”
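In rough terms, propensity score matching pairs each right-leaning item with a left-leaning item that looks similar on the confounders (hate speech, misinformation, and so on), so that any remaining difference in moderation rates is harder to attribute to those factors. The sketch below is a generic, textbook-style version under assumed column names (right_leaning, moderated, and a list of confounders), not the code behind the paper’s analysis.

```python
# Generic propensity-score-matching sketch (illustrative only). Assumes a
# pandas DataFrame with a binary "right_leaning" column (the "treatment"),
# a binary "moderated" outcome, and confounder columns such as hate-speech
# and misinformation scores.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def moderation_gap_after_matching(df: pd.DataFrame, confounders: list[str]) -> float:
    # 1. Estimate each item's propensity score: P(right_leaning | confounders).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[confounders], df["right_leaning"])
    df = df.assign(pscore=model.predict_proba(df[confounders])[:, 1])

    treated = df[df["right_leaning"] == 1]
    control = df[df["right_leaning"] == 0]

    # 2. Match each right-leaning item to the left-leaning item with the
    #    closest propensity score (1-nearest-neighbor, with replacement).
    nearest = np.abs(control["pscore"].to_numpy()[None, :] -
                     treated["pscore"].to_numpy()[:, None]).argmin(axis=1)
    matched_control = control.iloc[nearest]

    # 3. Difference in moderation rates within the matched sample; a value
    #    near zero means the apparent left/right gap shrinks to nothing once
    #    the confounders are controlled for.
    return treated["moderated"].mean() - matched_control["moderated"].mean()
```

In this toy version, a result close to zero corresponds to the “effect goes away” outcome Wilson describes above.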

Kevin Munger, a political scientist at Penn State University, said that, although such a study was important, it only represented a “snapshot.” Munger said that it would be “much more useful” if the analysis could be repeated over a longer period of time.

In the paper, the authors acknowledged that their findings could not be generalized over time because “platform moderation policies are notoriously fickle.” Wilson added that their findings couldn’t be generalized to other platforms. “The big caveat here is we’re just looking at YouTube,” he said. “It would be great if there was more work on Facebook, and Instagram, and Snapchat, and whatever other platforms the kids are using these days.”

Wilson also said that social media platforms were caught in a “fatal embrace” and that every decision they made to censor or allow content was bound to draw criticism from the other side of the political spectrum.

“We’re so heavily polarized now — maybe no one will ever be happy,” he said with a laugh.

