Wednesday, June 3, 2020

On Zeynep Tufekci: Twitter and Tear Gas: The Power and Fragility of Networked Protest (Platforms & Algorithms)

When social media began emerging in 2005, it altered and shaped how we - users, producers, consumers, audiences - behaved, not just on these platforms - Facebook, Twitter, YouTube, et al. - but off them as well. Over the past fifteen years, social media has shaped, among other things, how we define ourselves, how we consume media, and how we respond to calls for protest and reaction. Online civic spaces have moved from individual blogs to the behemoth social networks that dominate our screen time (p. 34). Controlled by algorithms set by the corporations that own the platforms, these privately owned spaces have become our public space to share, to learn, to consume, to market, and to protest. Spaces that on the surface seem 'public' are in fact controlled, manipulated, and adapted to meet corporate requirements.

Concerns raised by scholars in the early years of social media - whether these platforms would be restrictive, would enforce censorship, would sell user information - have all been realised. Facebook, Twitter, and YouTube regularly censor content. Images of same-gender couples kissing are banned for violating community guidelines, posts judged to spread 'fake news' are removed, and posts considered 'hot under the collar' are at best screened and at worst removed. At the same time, platforms permit male-gaze-driven porn, allow hate speech from groups that have bought advertising, and Zuckerberg recently went on record saying social media shouldn't fact-check politicians (while still removing 'ordinary' user posts the algorithm deems unacceptable - even when they are not hateful, violent, or untrue).

Let us be clear - social media exists not to provide a platform for users but to make money for the parent corporation. Its success depends on attracting users in mass numbers, retaining them, and utilising them in ways that can be monetised. These are networked public spaces that are privately owned, with corporate owners making the rules.

Facebook has used - and continues to use - its real-name policy to shut down groups and pages like "We Are All Khaled Said" without having to be accountable for censorship: violating a business rule is just cause for closure. This policy, while lucrative for the corporation, is dangerous for individuals. People of colour, LGBTQI+ groups, and people of diverse faiths are required to put themselves in danger to satisfy the terms and conditions of a platform that will then censor their activism should it become uncomfortable. Reliant on community policing - on one user reporting another - these policies are underpinned by US laws that require only that the platform remove content it is told violates the law (p. 143). This puts at risk any user commenting on or advocating for anything socially or politically sensitive - I have had a photo removed for a community-standards violation because it showed two men, in tuxedos, exchanging wedding vows. Activists especially are at risk of being reported, harassed (online and off), and physically harmed.

Real-name policies, 'think of the children' justifications for censorship, spam policies, and verification practices that require sending legal documents through a system that is at best fragile all align a platform with commercial and legal models that prioritise the bottom line. These policies are rarely consistent or even... comprehensible. They disadvantage minority groups that are socially and politically vocal, while the platforms hide behind legal protections rarely afforded to those using them as a space to convene, dialogue, and protest.
