The Danger of Digital Platform Policies

Last year, more people started to ask the right questions of platforms. Beyond the questions of who should set the terms of online speech, and how, lies another important question: when should those terms be set? The deceptively simple issue of timing has laid bare the inconsistencies in platforms’ policies and the harm that those inconsistencies can wreak. We have seen this play out in three dimensions over the past year: political violence, voting and vaccines.

It had been clear since 2016 that Donald Trump would undermine election results if he did not win; his victory in November 2016 merely delayed the problem. In summer 2020, a high-level bipartisan group gamed out how Trump might try to undermine the peaceful transition of power if he lost; in many of the group’s scenarios, social media played a crucial role in spreading lies. Despite mounting evidence and predictions of these likely consequences, platforms took little to no action, not even to enforce some of their own policies against incitement to violence. (Disturbingly, the police, too, seem to have overlooked or fundamentally misjudged online material.)

After the violent storming of the US Capitol on January 6, platforms and infrastructure providers suddenly acted with astonishing speed. Facebook blocked Trump from posting at least until after the inauguration of President-elect Biden on January 20. After an initial 12-hour suspension, Twitter permanently banned Trump from the platform and deleted the accounts of many major figures in QAnon, the discredited far-right conspiracy movement. Google removed Parler, a social media platform favored by the far right, from its app store, and Amazon booted Parler from its hosting service, Amazon Web Services (AWS), on January 10 for failing to remove violent threats. So many platforms, from Twitch to Snapchat, have banned or removed Trump that First Draft now maintains a 10-page document listing all of these actions.

But many such threats were amply documented beforehand. Based solely on easily accessible information, researchers, journalists and civil society observers such as Arieh Kovler had been warning of potential violence on January 6 since at least December 21. This online organizing was not hard to find. Had platforms acted even a few weeks earlier to remove or reduce the spread of such material, would online threats have manifested in real life?

The question of timing was not new. Even within the United States in 2020, it arose in April, when tweets from Trump such as “Liberate Michigan” seemed to support armed men storming the Michigan state house. It arose again in August, when Facebook failed to remove a webpage from a self-proclaimed militia group before 17-year-old Kyle Rittenhouse drove across state lines to Kenosha, Wisconsin, and shot three Black Lives Matter protestors, killing two of them. (After January 6, Twitter also suspended the account of Lin Wood, one of Rittenhouse’s defense lawyers, who has also worked for Trump and promoted QAnon conspiracy theories.) As Evelyn Douek has put it, platform actions after January 6 showed that “the posts and tweets of platform executives and spokespeople can be seen as fig leaves, trying to hide that these were, at bottom, arbitrary and suddenly convenient decisions made possible by a changed political landscape and new business imperatives.”

But when is also a question about where platforms are looking and what they have the capacity to contextualize. Twitter explained its permanent suspension of Trump in a note exploring the context of his tweets that carried “the risk of further incitement of violence.” Such contextual readings of posts are often impossible elsewhere. In some cases, barely anyone at the company may even speak the language of a place where the platform operates. For example, Facebook’s lack of Amharic speakers may be exacerbating violence in Ethiopia.

In other cases, content moderation is outsourced, a process that comes with its own problems. An English-language post from Canada might be moderated in the same queue as an English-language post from Australia or South Africa. A seemingly innocent post might actually contain a violent threat, one apparent only to someone with deep contextual knowledge of a particular place at a particular time. In effect, Twitter’s note admitted a basic point that scholars have long argued: context and timing matter. And platforms can pay attention to those factors only if they invest the resources to do so.

As with political violence, platforms’ approach to voting has evolved mainly around the timelines of US political contests. During the US election campaign, platforms repeatedly clarified how they would deal with electoral integrity. Some platforms even changed the rules on political advertising in the interval between the presidential election and the Georgia Senate runoff. As I wrote for CIGI in early October, such changes have “created a fundamental issue of inconsistency. More than one million Americans have already voted. Many millions will now vote under potentially different rules on social media platforms for advertising and discussions of voting. Some voters have made their choices under one regime of political speech; others will make their decisions under other policies.”

Here, too, the when is a question of where platforms are looking. Other democracies had already created laws that platforms could have applied, at least in part, to the United States. Canada’s federal election in 2019 was relatively free of disinformation. There were many reasons for this, but one was Canada’s Elections Modernization Act, which required ad transparency from social media platforms. Although Google ultimately decided not to serve political ads in Canada, Facebook participated. What would the result be if platforms, rather than seeking to globalize American speech norms and laws, started implementing policies from other democracies around the world?

Platforms might follow the advice that scholars such as Stephen Wertheim and David Adler, and James Goldgeier and Bruce Jentleson, have given the incoming Biden administration: stop trying to host an international “summit for democracy” that portrays the United States as the preeminent democratic country with globally applicable norms, and instead turn toward cooperation and humility, combined with domestic reforms. What if platform executives internalized the idea that US norms are only one possible democratic alternative and that US democracy is in deep need of repair?

Health information has displayed a similar pattern of crisis and inconsistency. Platforms acted with greater speed than they had in the past in addressing disinformation, this time about COVID-19. But that action mainly came in March, when the pandemic took hold in Europe and North America. Why did platforms not plan for and pay attention to the pandemic in January, as Taiwanese and South Korean media and public officials did? These countries successfully addressed the pandemic in different ways, but both embedded effective communications into their pandemic strategies. In comparison, global platforms dragged their heels. Research has shown that the more swiftly governments put out guidelines on COVID-19, the fewer quack cures their citizens purchased. Might swifter action by platforms to highlight legitimate public health information have helped mitigate the pandemic?

As vaccines now roll out, the question of timing for platforms has once again reared its head. Anti-vaccination content has flourished online for years. Back in 2018, a group of researchers led by David Broniatowski demonstrated that health communications around vaccines had been “weaponized” by bots and Russian trolls. Anti-vaccine content and networks were obviously going to play a massive role in COVID-19 vaccine hesitancy. So why did it take until mid-December for Twitter to implement a policy on vaccine misinformation?

Platform policy cannot remedy all of the issues besetting a democracy; no one seriously asserts that it can. Donald Trump was as much a product of a media ecosystem of cable TV and talk radio, of Republican Party politics, American celebrity culture, inequality and racism as he was a result of social media’s ills. Vaccine hesitancy, too, has myriad causes, including legitimate, deep-seated concerns about systemic racism in medical institutions. Poor-quality information on platforms is one factor among many.

But platforms affect billions of people around the world beyond the United States. Policies cannot be made simply on the basis of when horrific events occur at the US Capitol. For all the problems of platforms not reacting until after January 6, they barely react at all to similar threats in other countries. As Dia Kayyali tweeted on January 10, “Dear (most white, “western”) people exclaiming over the de-platforming of Trump: the rest of the world is watching and shaking their heads, knowing unless something massive changes they’ll continue to be ignored as states use social media to incite atrocities. When platforms weigh priorities, are 5 dead people in Washington DC heavier than all the bodies in India or Myanmar or the many other places states use social media to incite violence?”

In 2020, platforms did things that executives had often said could not be done. It is crucial to debate whether these actions addressed the right problems in the right places or the right ways. But another important question is why platforms acted when they did. The when tells the rest of the world everything it needs to know about who really counts for platforms. Changing these dynamics will be a crucial challenge in 2021.
