
Prof. Dr. Frank Pasquale on Media Pluralism and Intermediaries

The keynote by American researcher Prof. Dr. Frank Pasquale was certainly a highlight of re:publica and Media Convention Berlin 2017. In his talk, he spoke about the lost hopes for a digital public sphere and the problems that arise with algorithmic decision-making.

 

At our expert workshop, held from 5 to 7 May 2017 near Berlin, we asked Prof. Dr. Frank Pasquale about information intermediaries, their impact on media pluralism, and the related challenges, developments, and problems.

Prof. Dr. Frank Pasquale is a lawyer and professor of law at the University of Maryland in the US. Among other things, he is the author of the book The Black Box Society: The Secret Algorithms That Control Money and Information, which is regarded as a milestone in this debate.

He clarifies his point of view in the following statement:

Statement for the Blankensee Media Pluralism Workshop, 2017, by Frank Pasquale

1. What do you see as the most relevant developments and challenges regarding the impact of information intermediaries on media pluralism?

Major information intermediaries are driven by profit, and their methods of selecting and arranging newsfeeds and search engine results pages are secret.[1] In a world of stable and dominant media firms, large social networks and search engines were in a rough equilibrium of power relative to the owners and creators of the content they selected and arranged.[2] However, the trend toward media revenue decline makes a new endgame apparent: online intermediaries as digital bottlenecks or choke-points, with ever more power over the type and quality of news and non-news media that reaches individuals.[3]

If Google and Facebook had clear and publicly acknowledged ideological agendas, users could grasp them and “inoculate” themselves accordingly, with skepticism toward self-serving views.[4] However, the platforms are better understood as tools rapidly manipulated to the advantage of search engine optimizers, well-organized extremists, and others at the fringes of political respectability or scientific validity. Thus a search for “Hillary’s Health” in October 2016 would have led to multiple misleading videos and articles groundlessly proclaiming that the US Democratic presidential candidate had Parkinson’s Disease. Google search results reportedly helped shape the racism of Dylann Roof, who murdered nine people in a historically black South Carolina church in the US in 2015. Roof said that when he Googled “black on white crime, the first website I came to was the Council of Conservative Citizens,” which is a white supremacist organization. “I have never been the same since that day,” he said. So too are sources of support for climate denialists, misogynists, ethnonationalists, and terrorists easily developed and cultivated in what has become an automated public sphere.[5]

2. What influence does intermediaries’ use of algorithmic decision-making have on media pluralism, especially with regard to news offerings?

Large online intermediaries reduce a good type of media pluralism, and tend to promote a very destructive type of diversity. They make the metric of success online “virality,” promoting material that has received a good deal of attention or seems to match a sub-public’s personalization profile, regardless of whether it is true or minimally decent.[6] That reduces pluralism by elevating profit considerations over the democratizing functions of public discourse. However, the same intermediaries also promote a very troubling diversity by advancing the agenda of the most baseless and dangerous propagandists. Such political forces are particularly gifted at creating media capable of influencing and persuading low-information, floating voters—exactly the persons most likely to swing the results of elections.

3. What are the main problems, risks and challenges associated with the influence of intermediaries’ use of algorithmic decision making?

We should expect any company aspiring to order vast amounts of information to try to keep its methods secret, if only to reduce controversy and foil copycat competitors.  However wise this secrecy may be as a business strategy, it devastates our ability to truly understand the social world Silicon Valley is creating.  Moreover, like a modern-day Ring of Gyges, opacity creates ample opportunities to hide anti-competitive, discriminatory, or simply careless conduct behind a veil of technical inscrutability.

Massive search operations are so complex, and so protected by real and legal secrecy, that it is almost always impossible to identify all the signals that are driving a given set of results.  A recurring pattern has developed: some entity complains about a major internet company’s practices, the company claims that its critics don’t understand how its algorithms sort and rank content, and befuddled onlookers are left to sift through rival stories in the press. Silicon Valley journalists tend to give their advertisers the benefit of the doubt; national media outlets find the mysteries of online content ordering perfectly fit into their own templates of balanced reporting.  No one knows exactly what’s going on when a dispute arises, so rival accounts balance into an “objective” equipoise.

Moreover, personalization is leading advertisers to abandon traditional, and even not-so-traditional, publishers in favor of the huge Internet platforms. Why? Because nobody else can approach either the granularity or the comprehensiveness of their data. The result is a revolution-in-process about who can afford to keep publishing, and concomitant alarm about the concentration of media clout into fewer and fewer hands.

4. What potential responses to these problems, risks & challenges do you see as most promising?

a. Label, monitor and explain hate-driven search results.

In 2004, anti-Semites boosted a Holocaust-denial site called “Jewwatch” into the top 10 results for the query “Jew.” Ironically, some of those horrified by the site may have helped by linking to it in order to criticize it. The more a site is linked to, the more prominence Google’s algorithm gives it in search results.

Google responded to complaints by adding a headline at the top of the page entitled “An explanation of our search results.” A web page linked to the headline explained why the offensive site appeared so high in the relevant rankings, thereby distancing Google from the results. The label, however, no longer appears. In Europe and many other countries, lawmakers should consider requiring such labeling in the case of obvious hate speech. To avoid mainstreaming extremism, labels may link to accounts of the history and purpose of groups with innocuous names like “Council of Conservative Citizens.”[7]

Are there free expression concerns here? Not really. Better labeling practices for food and drugs have escaped First Amendment scrutiny in the U.S., and why should information itself be different? As law professor Mark Patterson has demonstrated, many of our most important sites of commerce are markets for information: search engines are not offering products and services themselves but information about products and services, which may well be decisive in determining which firms and groups fail and which succeed.[8] If these markets go unregulated, easily manipulated by whoever can afford the best search engine optimization, people may be left at the mercy of unreliable and biased sources.

b. Audit logs of the data fed into algorithmic systems.

We also need to get to the bottom of how some racist or anti-Semitic groups and individuals are manipulating search. We should require immutable audit logs of the data fed into algorithmic systems. Machine-learning, predictive analytics or algorithms may be too complex for a person to understand, but the data records are not.
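
As an aside for technically minded readers, the following is a minimal sketch, in Python, of how a tamper-evident audit log of this kind can work: each entry is chained to the hash of the one before it, so any later alteration of a stored record is detectable by an outside auditor. The field names and records are purely hypothetical; this illustrates the general technique, not any platform's actual logging system.

```python
import hashlib
import json
import time


class AuditLog:
    """Minimal append-only audit log: each entry is hash-chained to the
    previous one, so later alteration of any record breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> dict:
        # Link this entry to the hash of the previous one (or a zero hash).
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        entry = {**payload, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was tampered with."""
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = {k: entry[k] for k in ("timestamp", "record", "prev_hash")}
            if entry["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


if __name__ == "__main__":
    log = AuditLog()
    log.append({"source": "example.org", "item": "article-123", "action": "ingested"})
    log.append({"source": "example.net", "item": "video-456", "action": "ranked"})
    print("chain intact:", log.verify())            # True
    log.entries[0]["record"]["item"] = "tampered"   # simulate manipulation
    print("chain intact:", log.verify())            # False
```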

A relatively simple set of reforms could vastly increase the ability of entities outside Google and Facebook to determine whether and how the firms’ results and news feeds are being manipulated. There is rarely adequate profit motive for firms themselves to do this — but motivated non-governmental organizations can help them be better guardians of the public sphere.

c. Ban certain content.

In cases where computational reasoning behind search results really is too complex to be understood in conventional narratives or equations intelligible to humans, there is another regulatory approach available: to limit the types of information that can be provided.

Though such an approach would raise constitutional objections in the U.S., nations like France and Germany have outright banned certain Nazi sites and memorabilia. Policymakers should also closely study laws regarding “incitement to genocide” to develop guidelines for censoring hate speech with a clear and present danger of causing systematic slaughter or violence against vulnerable groups.

It’s a small price to pay for a public sphere less warped by hatred. And unless something like it is done, expect social-media-driven panics about despised minorities to lead to violence.

d. Permit limited outside annotations to defamatory posts and hire more humans to judge complaints.

In the U.S. and elsewhere, limited annotations ― “rights of reply” ― could be permitted in certain instances of defamation of individuals or groups. Google continues to maintain that it doesn’t want human judgment blurring the autonomy of its algorithms. But even spelling suggestions depend on human judgment, and in fact, Google developed that feature not only by means of algorithms but also through a painstaking, iterative interplay between computer science experts and human beta testers who report on their satisfaction with various results configurations.

e. Limit the predation possible by online intermediaries.

Given all the negative externalities generated by online intermediaries, policymakers should limit the profits they make relative to revenues of the content owners whose work they depend on. In the health care context in the US, private insurers can only keep a certain percentage of premiums (usually 15 to 20%)—the rest must go to health care providers, like hospitals, doctors, and pharmaceutical firms. Such a rule keeps the intermediary from taking too much of the spending in a sector—a clear and present danger in monopolistic internet contexts. Governments could limit the amount of profits that search engines and social networks make as intermediaries, requiring them to pay some share of their revenues to content generators like newspapers and media firms.[9]

f. Obscure content that’s damaging and not of public interest.

When it comes to search results about an individual person’s name, many countries have aggressively forced Google to be more careful in how it provides information. Thanks to the Court of Justice of the European Union, Europeans can now request the removal of certain search results revealing information that is “inadequate, irrelevant, no longer relevant or excessive,” unless there is a greater public interest in being able to find the information via a search on the name of the data subject.[10]

Such removals are a middle ground between information anarchy and censorship. They neither remove information from the internet (it can still be found at the original source) nor allow it to dominate the impression of the aggrieved individual. They are a kind of obscurity that lets ordinary individuals avoid having a single incident indefinitely dominate search results on their names. For example, a woman in Spain whose husband was murdered 20 years ago successfully forced Google Spain to take news of the murder off search results on her name. This type of public responsibility is a first step toward making search results and social network newsfeeds reflect public values and privacy rights.

-------------------------------------------------------------------------------------------------------------------

[1] Frank Pasquale, THE BLACK BOX SOCIETY: THE SECRET ALGORITHMS THAT CONTROL MONEY AND INFORMATION (Harvard University Press, 2015). Chapter 3 discusses search engines and social networks in detail.
[2] Frank Pasquale, Beyond Competition and Innovation: The Need for Qualified Transparency in Internet Intermediaries, 104 Nw. U. L. REV. 105 (2010).
[3] Oren Bracha and Frank Pasquale, Federal Search Commission? Access, Fairness, and Accountability in the Law of Search, 93 CORNELL L. REV. 1149 (2008); Frank Pasquale, Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines, 2008 U. CHI. LEGAL F. 263 (invited piece for the symposium Law in a Networked World). Note, too, that the filter bubble problem is not one of left voters needing to be exposed to right voters’ worldview, and vice versa; it is one of a lack of autonomy and understanding of how one’s media environment is shaped.
[4] Frank Pasquale, Restoring Transparency to Automated Authority, 9 J. TELECOMM. & HIGH TECH. L. (2010). 
[5] Frank Pasquale, Duped by the Automated Public Sphere, at http://discoversociety.org/2017/01/03/duped-by-the-automated-public-sphere/
[6] Frank Pasquale, Rankings, Reductionism, and Responsibility, 54 CLEV. ST. L. REV. 115 (2006).
[7] Frank Pasquale, Platform Neutrality, 17 THEORETICAL INQUIRIES IN LAW 487 (2016); Frank Pasquale, Asterisk Revisited: Debating a Right of Reply on Search Results, 3 J. BUS. & TECH. L. 61 (2008).
[8] Mark Patterson, ANTITRUST LAW IN THE NEW ECONOMY (2016).
[9] I applied this model to cable firms here: http://madisonian.net/2010/01/04/a-content-loss-ratiofor-cable-companies/; see also Jaron Lanier, Who Owns the Future?; Vili Lehdonvirta, https://www.oii.ox.ac.uk/blog/could-data-pay-for-global-development-introducing-datafinancing-for-global-good/
[10] Frank Pasquale, Reforming the Law of Reputation, 47 LOYOLA L. REV. 515 (2016).

Published: 03.01.2018
