Flipping the Line: The trouble with doing something about 'Big Tech'
Léonid Sirota sounds the alarm about sounding the alarm on Big Tech censorship.
The Line welcomes angry rebuttals and responses to our work. The best will be featured in our ongoing series, Flipping the Line. Today, writer Léonid Sirota takes on Jen Gerson and Jamil Jivani, both of whom were alarmed by the way Twitter cut off distribution of a New York Post story prior to the election.
Two of Canada’s most thoughtful journalistic voices, Jen Gerson and Jamil Jivani, are sounding the alarm over “Big Tech censorship.” Why? Because, as they put it, Facebook and Twitter “banned” or “censored” a (dodgy) New York Post story making allegations of doubtful damning-ness against the son of the Democratic presidential candidate, and now president-elect, Joe Biden. Just what Gerson and Jivani propose to do about the censors is not quite clear, but something, they suggest, must be done.
These expressions of alarm are misguided in two ways. First, they exaggerate the power of “Big Tech.” Social media companies are not an overbearing monopoly. And second, the danger of demanding that something be done is that what will be done ― if only because no one has yet proposed a more compelling alternative ― is to make bureaucrats the arbiters of social media speech.
Though frequently described as “giants,” online companies are less mighty than they might seem. To begin with, they cannot and do not stop anyone from viewing anything. Indeed, as Gerson notes, ham-fisted attempts at blocking content might only enhance said content’s notoriety. All the tech companies can do is make content less conveniently accessible: to force people to deliberately look for it instead of stumbling upon it, and thus make the process of discovery more like … what it was before social media existed.
Besides, the “web giants” are not a unified cohort. While Twitter prevented its users from sharing the Post’s article at all, Facebook did not, although it slowed down its distribution. Same difference? Not quite, and in any case the point is that these are companies with independent decision-making processes. They compete against one another, as well as against the other sources of information and distraction in our lives. The claim that any one of them is anything approaching a monopoly is, at best, hyperbolic.
Most importantly, the power of these companies, such as it is, exists only for so long as we want to use their services. It will vanish the day we are no longer interested. The standard response to this is to point to network effects: once enough people are on a platform, everyone else wants to be there rather than at some other place where people aren’t. And yet this hasn’t stopped Twitter coming along to compete with Facebook, just as Facebook previously competed with ― and usurped ― Myspace. (Remember them?)
To be sure, more recent attempts to create substitute social networks have flopped. But is that because people found it impossible to leave existing networks behind or because they found the alternatives not to their liking? One would be hard-pressed to offer conclusive evidence to settle this question. But it is telling that one conservative website describes Parler, the most prominent recent challenger to Twitter by right-wingers whom it supposedly excluded, as beaten down by “monsoons of bigotry, conspiracy-mongering, and — of course — spam.” Another finds (sickening screenshots included) that it is a “hellscape of racist incels who have the emotional maturity of 13-year-old brats at a segregated boys’ prep school.” The simplest explanation for the failure of such outfits is that mercifully few people are interested in unmoderated, anything-goes online spaces.
This brings me to the other problem with critiques of “Big Tech” as biased and dangerous for democracy: the consequences. Even if the critics do not positively endorse regulation of social media intended to rectify its real or supposed biases, they seem to accept and even reinforce its inevitability. Yet the only regulatory proposals on the table involve government dictation of what may and what must be tolerated on social media.
Conscripting these platforms into providing a service they do not wish to provide to people or organizations they do not wish to serve is not a good idea. I have argued elsewhere that such regulation is what Frédéric Bastiat described as “plunder,” the unjust taking of the fruits of some persons’ labour and giving them to others. But even if such considerations do not move you, consider whether you want to entrust to the government the task of deciding which content must be made available on social media, and which need not be.
Simple rules such as “all content anyone wishes to share” or “all content that is not illegal” will not work. As Parler’s struggles show, virtually everyone wants more moderation than that. At best, a complex and fine-grained set of rules would need to be devised and frequently updated. But complexity and the need for constant revision make it unlikely that such a list could be set out in legislation capable of being debated in Parliament. It would have to be implemented through government regulation of dubious democratic legitimacy. More realistically, no fixed rules would do at all: only general principles would be set out in law, and officials ― perhaps at the CRTC, perhaps within some other bureaucratic structure ― would be assigned the task of implementing them on a case-by-case basis.
This would be no improvement over the status quo. The social media bureaucracy would be no less arbitrary and no more democratically accountable than the social media companies themselves. Unlike these companies, it would not be subject to any market discipline at all. And the costs of regulation, which large established players will always be better positioned to bear than new entrants into the social media market, would only help maintain the incumbents’ advantageous position vis-à-vis any potential competitors.
Saying “do nothing, and hope that the market sorts things out” is not very helpful advice in the face of what looks like a thoroughly unpleasant situation. I agree that the behaviour of Facebook and especially Twitter has been thoroughly unpleasant of late. And yet dismal advice is, in this case, the best advice. Hard as it is to accept, it still beats “do something” and its inevitable consequences. And if you really are sure that you know just how much moderation on social media people really want, why not create your own platform? Facebook started in a dorm room. Go for it!
Léonid Sirota teaches public law and legal philosophy at the Auckland University of Technology Law School and writes for the Double Aspect blog.
The Line is Canada’s last, best hope for irreverent commentary. We reject bullshit. We love lively writing. Please consider supporting us by subscribing. Follow us on Twitter @the_lineca. Fight with us on Facebook. Pitch us something: lineeditor@protonmail.com