Below is an article I wrote with Maarten de Rijke, which was published in nrc.next and NRC Handelsblad under a somewhat misleading title (which wasn't ours). I cleaned up a Google Translate translation of the article. The translation is far from perfect, but I believe it gets the main point across. You can read the original article in Blendle (for €0.29) or on NRC.nl (for free).

[Image: the article as it appeared in NRC]

A Google image search for "three black teens" returned mugshot photos, while a search for "three white teens" yielded stock photos of happy, smiling youths. Commotion everywhere, and not for the first time. The alleged lack of neutrality of algorithms is a controversial topic, and in this controversy the voice of computer scientists is hardly ever heard. Yet to have a meaningful discussion on the topic, it is important to understand the underlying technologies.

Our contention, as computer scientists: the lack of neutrality is both necessary and desirable. It is what enables search and recommendation systems to give us access to huge amounts of information and to let us discover new music or movies. With objective, neutral algorithms, we wouldn't be able to find anything anymore.

There’s two reasons for this. First, the “usefulness” of information is personal and context-dependent. The quality of a movie recommendation from Netflix, the interestingness of a Facebook post, even the usefulness of a Google search result, varies per person and context. Without contextual information, such as user location, time, or the task performed by the user, even experts do not reach agreement on the usefulness of a search result.

Second, search and recommendation systems have to give us access to enormous quantities of information. Deciding what (not) to display, i.e., filtering information, is a necessity. The alternative would be a Facebook that shows thousands of new messages every single day, greeting every visit with a completely new deluge of posts, or a Netflix that recommends only random movies, so that you can no longer find the movies you really care about.

In short, search and recommendation systems have to be subjective, context-dependent, and adapted to us. They learn this subjectivity, this lack of neutrality, from us, their users. The results of these systems are therefore a reflection of ourselves: our preferences, attitudes, opinions, and behavior. Never an absolute truth.

The idea of an algorithm as a static set of instructions carried out by a machine is misleading. In the context of, for example, Facebook's news feed, Google's search results, or Netflix's recommendations, a machine is not told what to do; it is told to learn what to do. The systems learn from subjective sources: ourselves, our preferences, our interaction behavior. Learning from subjective sources naturally yields subjective outcomes.

To choose which results to show, a search and recommendation system learns to predict the user's preferences or taste. To do this, it does what computers do best: counting. By keeping track of the likes a post receives, or the post's reading time, the system can measure various characteristics of a post. Likes and reading time are just two examples; in reality, hundreds of attributes are included.
To then learn what is useful for an individual user, the system must determine which of these characteristics the user considers important. The essential step is measuring how effective the displayed information is. For this, the system is given a goal, such as making the user spend more time on the site.
By showing messages with different characteristics (more or fewer likes, longer or shorter reading times), and by keeping track of how long or how often the user visits the site, the system can learn which message characteristics make people spend more time on the website. Things that are simple to measure (clicks, likes, or reading time) are used to bring about more profound changes in user behavior (long-term engagement). Furthermore, research has shown that following personalized recommendations eventually leads to a wider range of choices and a higher appreciation of the consumed content.
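To make this mechanism concrete, here is a minimal, purely illustrative Python sketch (my own toy example; the features, the "longer session" label, and the data are all made up, and no real feed works this simply). It counts two post characteristics, observes a behavioral outcome, and learns weights that predict it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: one row per shown post,
# features = [number of likes, reading time in seconds].
X = rng.uniform(low=[0.0, 5.0], high=[500.0, 300.0], size=(1000, 2))
# Made-up outcome: did the user stay longer on the site after this post?
# In this fake "truth", high-like posts keep people around.
y = (X[:, 0] + rng.normal(0.0, 50.0, size=1000) > 250.0).astype(float)

# Standardize the features, then fit a logistic regression by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(longer session)
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step on the log loss
    b -= 0.1 * (p - y).mean()

print("learned weights for (likes, reading time):", w)
# To build a feed, the system would score candidate posts with these weights
# and rank them, so whatever correlates with "time on site" gets surfaced.
```

In reality hundreds of features and far richer models are involved, but the principle is the one described above: easily measured signals (likes, reading time) stand in for the behavior the platform actually wants to influence (time on site).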

The success of modern search and recommendation systems largely results from their lack of neutrality. We should consider these systems as “personalized information intermediaries.” Just like traditional intermediaries (journalists, doctors, opinion leaders), they provide a point of view by filtering and ranking information. And just like traditional intermediaries, it would be wise to seek a second or third opinion when it really matters.

Comments

3 responses to “Algorithms aren’t neutral. And that’s a good thing.”

  1. Jeremy Pickens

    Our contention, as computer scientists: the lack of neutrality is both necessary and desirable. It is what enables search and recommendation systems to give us access to huge amounts of information and to let us discover new music or movies. With objective, neutral algorithms, we wouldn't be able to find anything anymore.

    I think there is a slight misunderstanding here about what each side of this discussion means by "bias". To a machine learning person, yes, "bias" essentially means "learnability": the ability to assign non-equal probability to every item in the collection. And of course we want to be able to do that. We do indeed want a lack of neutrality.

    But when the layperson talks about bias, they mean it in a different way. If I could attempt a translation from the lay meaning to the machine learning meaning: when folks say that they want unbiased results, what they mean is that they want the bias to be consistently applied. Or rather, they want the bias to be "normalized" in some manner, such that it achieves a socially desirable outcome rather than just the highest data-probability outcome.

    The words I would use for this are "descriptive bias" versus "prescriptive bias". In descriptive bias, you have the situation where the search for "three black teens" yields mugshots, whereas the search for "three white teens" yields happy smiling faces. In this example, there is bias in both searches, but the bias is descriptive of what has happened in the world up to this point in time: everyone who has searched and what they have clicked, what the PageRank scores of each of the documents are, and so on.

    And what people actually want is bias, but “prescriptive bias”, instead. Prescriptive bias means that the bias itself is further biased, so as to “normalize” for differences in outcomes that shouldn’t be there, no matter what the data historically has said. Think of prescriptive bias as an advanced form of machine learning generalization.

    For example, prescriptively biased results would show either mugshots for both "three black teens" AND "three white teens", or happy smiling faces for both. There is more than one way to prescriptively bias these search results: either all mugshots or all happy smiling faces. Further work is needed to figure out which prescriptive bias is the best prescriptive bias. But either way that bias goes, it would be an "equal" bias.

    So that’s what the layperson means when they say that they want no bias. What they mean is that they want equal (prescriptive) bias. Rather than what they’re currently getting, which is unequal (descriptive) bias.

    Furthermore, research has shown that following personalized recommendations eventually leads to a wider range of choices and a higher appreciation of the consumed content.

    Yes and no. The work I’m most familiar with is the Celma 2007 dissertation, and what he found is that personalized recommendation increased variety for individual users, but decreased variety as a whole. That is, any one person had their horizons enlarged, but the union of all users of a system had its horizon shrunk.

    One would think that if there were prescriptive bias, rather than just descriptive bias, both the individual variety AND the holistic, systemic variety would increase. Rather than the former increasing and the latter decreasing. My feeling is that we should strive for the right kind of bias, and I don’t know if we’re currently doing that.

    1. dvdgrs

      You're right about the bias distinction; it's a meaningful one, but too in-depth for the newspaper. In the article we talk about the layperson's notion of bias.

      And whether we would want to achieve these (biased but) socially desirable outcomes is an interesting question, because it is also potentially risky. In the end, we can "unbias" in multiple directions (as you show), and consequently nudge people in multiple directions, too. And I'm not sure whether we want tech companies dictating/deciding what is socially desirable.

      Blindly "listening" to the user, as systems do now, is safer, particularly where accountability is concerned, I would think, provided that users understand how these results come to be, and that (and why) they may exhibit these (human-encoded) biases. That's what this article aims to do: treat your SERPs and FB newsfeeds as just another source with another bias, and keep using your own brain in the meantime. That's not to say that we shouldn't do anything about the bad cases (like the black teens/white teens example, however that sorted itself out eventually).

      The sentence you cite came from a WWW 2014 paper [1] (and the sentence used to be a slightly more nuanced paragraph, but it got cut down to its current state).

      [1] http://dl.acm.org/citation.cfm?id=2568012

  2. […] has come to my attention that the nrc.next article I wrote with Maarten de Rijke on the lack of neutrality of algorithms is mentioned in Maurits Martijn & Dimitri Tokemetzis’ “Je hebt wél iets te […]
