Overall, only 26 percent of respondents said they were very or somewhat confident in tech platforms’ ability to prevent that kind of misuse, Pew found. Meanwhile, roughly three-quarters reported being not too confident or not at all confident that services would be able to do so. The responses were extremely similar across both Republican-leaning and Democratic-leaning respondents.

A similar share said technology companies have a responsibility to prevent their platforms from being misused. Here, Pew did find fairly significant differences in response, driven not by political affiliation or belief but by age. While fewer than three-quarters of the youngest respondents felt the platforms needed to step up and take on that responsibility, an even larger share of seniors replied that social media services have a duty to prevent abuse.
Younger respondents were also the most likely to think that platforms could or would do something about it: 50 percent of those in the youngest age bracket said they were confident in tech firms to prevent election-influencing misuse. That confidence dropped steadily with each successively older age group, falling lowest among the oldest respondents.
In an attempt to mitigate the harm social media can do during election season, Twitter updated its election integrity policy in April and moved to ban all political advertising from candidates starting last November. A short time later, Google tightened its rules on false claims and microtargeting in political advertising.
Facebook, however, is taking a different approach. The globe-spanning social network has repeatedly said its standards do not apply to politicians, and political ads can be full of lies without running afoul of Facebook’s rules. There are nominally some limits: attempting to suppress voter turnout or census participation, for example, will get your ad kicked off the service. But attempts to enforce that twisting, dotted line consistently are not going well. In lieu of prohibiting deliberately misleading content, Facebook has said the onus is on users to simply try to see less of it.

The company does act against what it calls coordinated inauthentic behavior. When the platform detects a group of fake accounts trying to manipulate users, it kicks them off, posting updates several times per year about removing batches of bad accounts based in Russia, Iran, or dozens of other nations.
Facebook doesn’t “have visibility into financial relationships taking place off our platforms, which is why we’ve asked campaigns and creators to use our disclosure tools,” a spokesperson for the company told The New York Times. The company also apparently has not yet decided what to do about campaigns that simply ignore its process.