Facebook, Twitter: ending the hypocrisy

January 20, 2021

Facebook's and Twitter's decision to censor Donald Trump after the January 6th insurrection has definitively shattered the fiction on which these platforms built their prosperity. From the beginning, they asserted forcefully that they were merely technical tools for sharing content and could not be held responsible for the texts or images they made available to their users. The reason was simple: there is no business model for a medium that edits - that is, moderates, selects, or even rewrites - so much content. If, on the other hand, you do not invest in editing, you make a lot of money, since the revenues from advertising and from the collection and sale of personal data far exceed the costs of running the platform and its recommendation algorithms. Can Twitter and Facebook still claim, after censoring the President of the United States, that they bear no editorial responsibility for the content they publish? To ask the question is to answer it.

At first, however, the argument seemed strong. The digital world was brand new, and social networks were not media publishing content to an audience according to the transmitter-receiver model; they presented themselves as tools enabling a conversation among several people, where each participant is alternately transmitter and receiver, according to an egalitarian principle. This fable did not last long in the face of the harsh reality of social hierarchies. It is not technology that assigns an individual the position of transmitter or receiver; it is reputation and the ability to capture attention. The 88 million Twitter followers of @realDonaldTrump are there to testify to it. The "few to many" principle is thus found on the platforms too, where a small number of profiles address millions of followers. This principle is what characterizes a medium, and it justifies the laws and regulations that govern editorial responsibility. Yet the governments of the great democracies took this fable at face value and agreed to exempt social networks from any responsibility or regulation over the content they publish, guided by a subtle mixture of ignorance, seduction, and illusion.

Ignorance first. In many countries, and this is particularly true of France, the digital illiteracy of the ruling classes, especially the political classes, is abysmal. They neglected this new cognitive technology. When writing emerged in oral societies, kings and princes quickly realized that it was also an instrument of power, and they learned to write. Instead of doing the same, our leaders saw digital technology as a commodity, not as strategic knowledge. They kept their distance and sometimes even boasted of not understanding it. They are paying the price. In setting new rules of the game, the regulator always lags behind the innovator, but this lag is usually small. In the digital realm, it amounts to decades, if we take the adoption of the TCP/IP protocol in 1983 as a starting point. We may fear that the digital giants have become "too big to regulate", much as the big banks were "too big to fail" during the 2008 crisis.

Then the seduction. The only thing that many politicians saw in the web and social networks, and that truly interested them, was the ability to bypass and weaken the mainstream media. Producing and editing your own content, controlling your image, filming and broadcasting your own events, freeing yourself from journalists' questions, communicating directly with your supporters, driving the media agenda by posting messages on social networks: Trump did all this with brutality, but before him Obama had led the way with subtlety, to general enthusiasm.[1] The short-term benefits of this transformation of the political leader into a brand and a medium long concealed the devastating effects it could have on the quality and integrity of political debate. Independent media play an essential role not only in setting the political agenda of a democracy but also in building a public space that encourages the confrontation of opinions and offers credible information accessible to all citizens. Their weakening has paved the way for an atomization of public space that undermines democracy.

The illusion, finally. At first, many believed that social networks, these conversational media, were instruments of emancipation and even liberation. The role they played in the Arab Spring uprisings and in the revolts against authoritarian governments in Ukraine or Uzbekistan inspired optimistic analyses[2] that exalted the power they gave citizens to organize, unite, and fight oppression. In democratic countries, the connection was made between these tools and the new forms of protest that emerged: Occupy Wall Street in the United States, Los Indignados in Spain, Nuit Debout in France. The ability of social networks to revitalize democracy and generate new forms of political engagement seemed obvious. Then came the election of Donald Trump, the Brexit referendum, Cambridge Analytica's role in both votes, and the first revelations about Russian strategies and intrusions. Because we were fascinated by the fact that certain uses of conversational media could weaken or even bring down dictatorships, we overlooked the fact that other uses could weaken or even bring down democracies. Because we had grown accustomed to a media consensus on the definition of facts, current events, and the agenda, we had forgotten that a journalistic fact is a cognitive pattern. And like any cognitive pattern, it can be challenged by another cognitive pattern, such as an "alternative fact" or "fake news". And we stupidly began to blame social networks, without questioning the inconsistency of public policies.

Google, Facebook, Twitter, Instagram, and YouTube can hardly be blamed for their inertia: they are private companies that seek to maximize their profits, not benefactors of humanity. The casualness with which they have handled the excesses of the last five years is easily explained. The cost of hiring tens of thousands of employees to moderate content was a deterrent. And automatic tools for detecting violent or offensive content, based on artificial intelligence, conflict directly with their recommendation algorithms, which are programmed around the statistical fact that hate and outrage are more powerful drivers than benevolence and reasoned argument when it comes to capturing users' attention and generating clicks.

There is only one solution: regulation. And it is truly urgent. Google, Facebook, Twitter, and the other digital giants have become genuine private political institutions, as I noted in 2006 in my book The End of Television[3]. They negotiate with states, censor, authorize or prohibit public expression, store and market personal data, set up a "Supreme Court", and plan to create a new currency. That is why a purely economic approach, framed in terms of breakup, is not enough. Dominant positions must be examined and, where necessary, dismantled, but the political function of these platforms, the way they disarticulate public space and reorganize deliberation, must be the subject of specific regulation. Such regulation will have to avoid the many pitfalls on which all past regulatory projects have failed. The first of these pitfalls is, of course, the impossibility of defining truth and objectivity in the political field, let alone building some automatic, algorithmic corset for freedom of expression. Since artificial intelligence cannot yet even reliably detect the irony of a message[4], it is unthinkable that it will soon be able to draw the often subtle boundary between humor and offense.

The second pitfall is geographical scale. Such regulation cannot be imposed on global platforms by a single country. It can only be developed at the European level, since the differences between Europe and the United States in their conceptions of freedom of expression are irreconcilable.

The third pitfall would be to model the regulation of these actors on traditional media regulation mechanisms. We must start from the idea that these platforms are not media but meta-media. They host media called "accounts" on Twitter, "profiles" on Facebook, or "channels" on YouTube. But they do not merely host: they moderate, recommend, highlight, connect, and arrange. In doing so, they are not a neutral technical hosting tool; they play an editorial role that must be regulated quite differently from traditional media. It is not the content that needs to be controlled, but the mechanisms of moderation, recommendation, reporting, and intervention on reported content and accounts.

On moderation, the rules need to be flexible enough for projects such as Wikipedia to continue practicing volunteer moderation without risk, and firm enough to define ratios between the volume of content to be moderated and the number of moderators, not globally but country by country. Moderation also needs to be reinforced during election periods. Independent organizations will have to audit moderation software and other automated tools regularly and report publicly on their evaluation.
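To make the ratio idea concrete, here is a minimal sketch in Python of what a per-country moderation-capacity audit could look like. The ratio values, the stricter election-period figure, and the function names are all hypothetical assumptions for illustration, not figures from any existing or proposed regulation.

```python
# Hypothetical per-country moderation-capacity check. The ratios and data
# below are illustrative assumptions, not actual regulatory figures.

BASELINE_RATIO = 20   # assumed: moderators per million flagged items per month
ELECTION_RATIO = 50   # assumed: stricter ratio during election periods


def required_moderators(flagged_per_month: int, election_period: bool) -> int:
    """Return the minimum moderator headcount implied by the assumed ratio."""
    ratio = ELECTION_RATIO if election_period else BASELINE_RATIO
    return max(1, round(flagged_per_month / 1_000_000 * ratio))


def audit_country(country: str, flagged: int, moderators: int, election: bool) -> str:
    """Compare actual headcount with the required one for a single country."""
    needed = required_moderators(flagged, election)
    status = "OK" if moderators >= needed else "NON-COMPLIANT"
    return f"{country}: {moderators}/{needed} moderators -> {status}"


if __name__ == "__main__":
    # Hypothetical numbers, for illustration only.
    print(audit_country("FR", flagged=3_200_000, moderators=40, election=True))
    print(audit_country("DE", flagged=5_000_000, moderators=120, election=False))
```

The point of auditing country by country rather than globally is visible in the sketch: a platform could satisfy a worldwide ratio while leaving a single country, in the middle of an election, badly under-staffed.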

The issue of recommendation is perhaps the most complex. Platforms regard their algorithms as strategic assets that cannot be disclosed. There is also another, less avowed reason. It is rumored that at Google, even the most seasoned engineers no longer know how or why their algorithms recommend one thing rather than another. The tools have been so modified, reworked, reassembled, and complexified since their creation that no one can document them properly. Yet the transparency and reliability of recommendation algorithms is a crucial issue. They must be audited and subjected to publicly reported evaluation. The European Commission should open a dialogue with the platforms on this issue as soon as possible. At a minimum, it could require that content from authors repeatedly reported by users be excluded from all recommendations while the reports are processed. Since recommendation is entirely discretionary, this measure would leave content awaiting judgment accessible to those who actively seek it, but would keep it from being surfaced by algorithmic chance, thus limiting the filter bubble effect.
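As a concrete illustration of this minimal measure, here is a sketch in Python of a recommendation filter that suspends repeatedly reported authors. The threshold, the data structures, and the scoring are assumptions; this does not describe any platform's actual pipeline.

```python
from dataclasses import dataclass

# Illustrative sketch only: threshold and structures are assumptions,
# not a description of any platform's real recommender.
REPORT_THRESHOLD = 3  # assumed: open user reports after which an author is
                      # suspended from recommendations until reports are processed


@dataclass
class Item:
    item_id: str
    author_id: str
    score: float  # relevance score produced upstream by the (opaque) recommender


def recommend(candidates: list[Item],
              pending_reports: dict[str, int],
              k: int = 10) -> list[Item]:
    """Rank candidates, dropping items whose authors have too many open reports.

    Excluded content stays reachable through search or a direct link; it simply
    stops being surfaced by algorithmic chance while the reports are reviewed.
    """
    eligible = [it for it in candidates
                if pending_reports.get(it.author_id, 0) < REPORT_THRESHOLD]
    return sorted(eligible, key=lambda it: it.score, reverse=True)[:k]
```

Note that the filter touches only the discretionary recommendation step, not hosting or search, which is precisely what makes the measure compatible with freedom of expression.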

The last pitfall, and not the least, would be to believe that regulators can simply enact rules and have the platforms enforce them. Regulators have a responsibility to stand reliably alongside the platforms in applying and interpreting the rules. It must be understood that only real-time cooperation between platforms, police, and justice will be effective. Meta-media are "digital commons": for their regulation to work, it must be the subject of shared governance. What training does a student receive whose part-time job is to moderate content on a platform? Whom can they turn to in order to ask whether a given statement by an MP or a journalist is contentious? Why do there seem to be no interfaces between police tools and platform tools? Platforms must be able to report clearly criminal content to the authorities in real time or, conversely, to request in real time, through an interconnection of processing systems (an API), the opinion of a sworn officer who has the legitimacy to state the law. Do governments not let the platforms do the "dirty work" for them? Of course, one may judge the 7,500 moderators employed by Facebook in 2018 to be grossly inadequate, but that is still 250 times more than the thirty or so gendarmes and police officers assigned to Pharos, who handle all illegal content on the Internet in France. Yet it is they, backed when necessary by a judge, who should be the point of reference for any content falling under French law.
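To fix ideas, here is a minimal sketch in Python of what such a platform-to-authority interconnection could look like. Everything in it is hypothetical: neither Pharos nor any platform exposes such an endpoint today, and the URL, payload fields, and response categories are invented for illustration.

```python
import json
import urllib.request

# Hypothetical interconnection sketch: no such public endpoint exists today.
# The URL, payload fields, and response format are all assumptions.
AUTHORITY_ENDPOINT = "https://api.example-authority.fr/v1/reports"  # hypothetical


def escalate_to_authority(content_id: str, country: str,
                          category: str, excerpt: str) -> dict:
    """Submit a flagged item to a sworn officer and return their ruling.

    The imagined response is one of: "manifestly_illegal", "lawful",
    or "needs_judicial_review".
    """
    payload = json.dumps({
        "content_id": content_id,
        "country": country,
        "category": category,   # e.g. "death_threat", "terrorism_apology"
        "excerpt": excerpt,
    }).encode("utf-8")
    req = urllib.request.Request(
        AUTHORITY_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The design choice matters more than the code: the ruling comes from a sworn officer on the authority's side of the API, so the platform executes the law rather than interpreting it.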

On reporting, the ball is clearly in the court of the police and judicial authorities of European countries, and the failure is on the governments' side. To be convinced of this, one need only compare the ease of reporting illegal content on Twitter with the time it takes to file a similar report on the French Interior Ministry's platform. Moreover, the interconnection between the two is in all likelihood done at best by email, and more probably by fax.

The decision, taken sovereignly by Twitter and Facebook, to censor the President of the United States had the effect of an electroshock. But tens of thousands of accounts had been deleted, just as sovereignly, before that one. The only effective solution is to organize the monitoring of reports and cooperation in interventions on reported content and accounts, in accordance with the rule of law. Governments will be in a position to demand more responsibility from platforms once they have done their own job, by bringing their police forces, systems, and procedures into the 21st century.

Finally, launching a digital "Supreme Court" is certainly not Facebook's job but the European Union's. Such a body will have to arbitrate all disputes and must be open to referral by any European citizen. We are running out of time. The German federal elections will be held in September 2021 and the French presidential election in the spring of 2022. Many ill-intentioned actors are lying in wait for these two events to destabilize two major European democracies, and they know that Facebook and Twitter will be far less attentive and responsive to these European elections than they were to the US presidential election. If Europe does not act, we will all pay the price.

[1] https://abcnews.go.com/Politics/president-obama-white-house-media-operation-state-run/story?id=12913319

[2] Larry Diamond and Marc F. Plattner (eds.), Liberation Technology: Social Media and the Struggle for Democracy, Baltimore, Johns Hopkins University Press, 2012.

[3] Jean-Louis Missika, The End of Television, Paris, Le Seuil, La République des idées, 2006.

[4] https://link.springer.com/article/10.1007/s00521-020-05102-3