Platform Politics and Accountability

With midterm elections just around the corner, campaigning has ramped up to full speed – as have concerns about Facebook, Twitter, and other social networks facilitating and fostering election misinformation. Although Facebook usually bears the brunt of the criticism, it has recently taken some very public measures to minimize ‘fake news’ and to surface information about different sources so users can judge their content for themselves. And fake news doesn’t come only in the form of questionable journalism – it can also arrive as digital ads and marketing campaigns, since platforms offer brands and political campaigns highly nuanced targeting opportunities. Meanwhile, other platforms, including YouTube – which touts the strongest recommendation engine in the world – have come under scrutiny for how easily they can be exploited by election interferers.

While YouTube has taken measures to moderate its content through terms of use and community guidelines, fake news and political ads can be extremely difficult to spot. As we saw in the 2016 election, many people could not tell real stories from fake ones designed specifically to sow mistrust of the U.S. government and other culturally significant institutions. For an algorithm, the difference can be even harder to catch. The spread of misinformation is then amplified by machine learning, which is informed by user decisions, leaving the program itself unaccountable. The user, the platform, and the content provider all play a role in the spread of misinformation, so the real challenge lies in assigning accountability for that misinformation and for how it is propagated on the platform. Signs of election interference have already cropped up, with Russia seen as the primary instigator. Meanwhile, social networks and their users must stay wary of the election information and ads they consume. Here are three areas exacerbating the problem and how platforms and their users can respond.

Section 230
One of the key areas of concern here is curation and liability for websites. As consumption habits move from content sources with built-in human curation (such as limited and carefully picked television programming or print news outlets with journalistic standards) to platforms with no barrier to entry and AI-driven distribution, misinformation becomes a greater danger without a clear solution for removing it.

Further, these platforms rely on the protections of Section 230 of the Communications Decency Act, which shield them from responsibility for content on their sites so long as they act as platforms rather than publishers. This makes it tricky for social platforms such as Facebook and YouTube to manually curate content: curation can serve as an editorial voice, making them directly responsible for the content, labeling them as “publishers,” and forfeiting the protections of Section 230. That would expose these companies to significant legal liability for the content on their platforms – clearly a risk without much incentive. And given the massive scale of that content, the risk is huge.

Need for Greater Transparency
Following the Cambridge Analytica scandal, data breaches, and a slew of other controversies, Facebook has received the bulk of the negative press. But the role of YouTube in the spread of misinformation, particularly in the past few years, should not be underestimated. Despite YouTube’s impressive ability to surface highly relevant content, there needs to be a greater degree of transparency about where it pulls information from and why things are being recommended – all while identifying the bad actors who click-bait controversial searches and topics to drive traffic to videos they are monetizing (and from which YouTube, in turn, profits).

As award-nominated data journalist Jonathan Albright points out, “In the disinformation space, all roads seem to eventually lead to YouTube. This exacerbates all of the other problems, because it allows content creators to monetize potentially harmful material while benefitting from the visibility provided by what’s arguably the best recommendation system in the world.” Some of those very content creators might even be legitimate political campaigns, which, through questionable but legal digital advertisements, also help sow misinformation and misperceptions.

All of this makes it difficult to assign accountability for misinformation. Since it is proliferated by machine learning, which is itself informed by user decisions, the program behind the platform cannot be held accountable. With people growing more disillusioned with institutions like the free press and government every day, it’s imperative for tech providers to be transparent about where their content comes from. Otherwise, people will retreat further into echo chambers that reinforce their views.

Reconsidering Fake News and Advertisements
To combat the underlying problems related to Section 230, machine learning, and a lack of transparency, more action is needed from both government and social networks/tech providers to identify and label fake news, disclose the sources of political content and advertisements, explain why users are being targeted, and help people spot the difference. Programmatic is still the dominant form of advertising on YouTube, and high-performing “fake news” videos are likely to serve ads. Social networks have a social responsibility to their users and need to seriously reconsider the issues surrounding fake news, particularly ahead of elections. While Google and YouTube’s recently announced plan to stop its spread by highlighting and promoting legitimate news sources is a helpful start, more needs to be done. Here are a few potential solutions:

1) Create “news category” qualifiers and identify political sources: If someone wants to make a substantive claim, they need to be able to back it up, and videos containing substantiated claims should look visually different from ones that don’t. Platforms should be able to differentiate these without risking liability for exhibiting an “editorial voice.” Political advertisements should be explicitly called out. Ads served against high-performing fake news videos also create major brand safety issues, so advertisers should consider advertising only on verified news outlets or avoiding news topics altogether.
2) Manual review processes: For brands and publishers concerned about brand safety, social networks could implement a system similar to the current manual review process for advertiser-friendly content – something akin to the subreddit moderators on Reddit, who manually curate what is and is not appropriate.
3) Weave fact checking into all news sources: YouTube took a big step this summer when it announced that it would add fact-check links to videos on topics that inspire conspiracy theories. This is a great first step, but the site needs to make the sourcing consistent and verifiable across the board so that the facts themselves cannot be dismissed. Other social networks could follow suit by providing similar resources on highly controversial topics, and the same could apply specifically to political advertising. For instance, they could add the ability to link sources for anyone making substantive claims, with the links reviewed for quality. Videos without reviewed links would display a Wikipedia-like “Needs citation” label, flagging them for users (a rough sketch of this flagging logic follows below).
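
To make that “Needs citation” rule concrete, here is a minimal, purely hypothetical sketch of how a platform might compute the flag. The Video, SourceLink, and needs_citation names, and the “at least one quality-reviewed source link” threshold, are illustrative assumptions rather than any platform’s actual data model or moderation API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch only: these types and the flagging rule are illustrative
# assumptions, not any platform's actual data model or moderation API.

@dataclass
class SourceLink:
    url: str
    reviewed: bool = False  # has a quality review verified this source?

@dataclass
class Video:
    title: str
    makes_substantive_claims: bool  # e.g. news or political content asserting facts
    source_links: List[SourceLink] = field(default_factory=list)

def needs_citation(video: Video) -> bool:
    """Return True if the video should carry a 'Needs citation' label.

    Rule from point 3 above: videos making substantive claims must provide
    at least one source link that has passed a quality review.
    """
    if not video.makes_substantive_claims:
        return False
    return not any(link.reviewed for link in video.source_links)

# Example: a claim-making video with no reviewed sources gets flagged.
unsourced = Video("Shocking election claim!", makes_substantive_claims=True)
sourced = Video(
    "County certifies vote totals",
    makes_substantive_claims=True,
    source_links=[SourceLink("https://example.gov/results", reviewed=True)],
)

print(needs_citation(unsourced))  # True  -> show the 'Needs citation' label
print(needs_citation(sourced))    # False -> no label needed
```

The point of the sketch is that the flag is a simple, auditable rule layered on top of human review of the links, rather than another opaque recommendation signal – which is in keeping with the transparency the rest of this piece argues for.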

While social networks did not set out to spread misinformation, their technology was weaponized by bad actors for their own ends, creating echo chambers in which American citizens were pitted against one another along ideological lines. At one point, if a news story was baseless or without fact, it could safely be called ‘fake news,’ but in today’s climate that term has come to mean nearly anything a speaker doesn’t like – rendering it almost meaningless. The 2016 election may have been a turning point for the proliferation of misinformation, but that doesn’t mean social networks can’t take concrete steps toward stopping its spread.

Keith Johnson
Keith Johnson is Chief Operating Officer of Made In Network, a video-first media company based in Nashville. Made In Network builds and operates YouTube channels that generate over 250 million views per month, helping brands connect with fans across the YouTube community, and is one of only a few companies in the world with YouTube certifications in both Audience Growth and Advanced Digital Rights Management.
