Surveillance Capitalism

A love story about slavery

Kolbeinn Gauti Friðriksson

Nútímafræði

Hug- og félagsvísindasvið

Háskólinn á Akureyri

2021


Surveillance Capitalism

A love story about slavery

Kolbeinn Gauti Friðriksson

12 eininga lokaverkefni

sem er hluti af

Bachelor of Arts-prófi í nútímafræði

Leiðbeinandi

Giorgio Baruchello

Nútímafræði

Hug- og félagsvísindasvið

Háskólinn á Akureyri

Reykjavík, 10. janúar 2021


Titill: Surveillance Capitalism: A love story about slavery

Stuttur titill: Surveillance Capitalism

12 eininga lokaverkefni sem er hluti af Bachelor of Arts-prófi í Nútímafræði

Höfundarréttur © 2021 Kolbeinn Gauti Friðriksson

Öll réttindi áskilin.

Nútímafræði

Hug- og félagsvísindasvið

Háskólinn á Akureyri

Sólborg, Norðurslóð 2

600 Akureyri

Sími: 460 8000

Skráningarupplýsingar:

Kolbeinn Gauti Friðriksson, 2021, B.A. verkefni, nútímafræðideild, hug- og

félagsvísindasvið, Háskólinn á Akureyri


Abstract

Digital surveillance has become an integral part of the daily lives of billions of people across the globe. What we search for, what our interests are, what we eat, whom we meet, where we meet them, and for how long: all of it is documented in one way or another. For everyday Internet users, this system of constant surveillance is becoming ever more obvious, as when we meet someone at a bar and, the following day, Facebook suggests that we "befriend" that person. As our paths cross, so do our algorithms. In this essay, I aim to explain how this system of constant surveillance came about and keeps expanding, and the effects it could have on our lives.

Ágrip

Stafrænt eftirlit hefur á síðastliðnum áratugum ofist inn í daglegt líf milljarða manna út um allan heim. Vefsíður sem við notum, leitarorð sem við sláum inn í leitarvélar, áhugamál okkar, matarvenjur okkar, hvern við hittum, hvar við hittum viðkomandi og hversu lengi, er allt skrásett. Þetta landslag af stöðugu eftirliti og skrásetningu á gjörðum okkar verður æ augljósara fyrir þá sem nýta sér þessa nútímatækni. Við rekumst á gamlan kunningja á bar og morguninn eftir stingur Facebook upp á því að við sendum vinabeiðni á viðkomandi. Algrím metur hvað muni kveikja áhuga okkar og gaukar að okkur tillögum og hugmyndum, sem allt byggist á upplýsingum um okkur sem stöðugt er verið að safna. Í þessari ritgerð hyggst ég útskýra hvernig þessi heimur af stafrænu eftirliti varð til, hvernig og af hverju hann stækkar og dafnar, og mögulegar afleiðingar þess að láta hann óáreittan.


Table of Contents

The beginning of surveillance capitalism
    The blueprint
    A customer or a laborer
    A bursting bubble
    An electron microscope into people's behavior
Behavioral surplus: Breadth and scope
    SC levels up
    Omnipresent
    Emotional surplus
    The net catches the shoal
Predictions and nudges
    Omnipotent
    The second trimester: A retailer's goldmine
    The co-pilot
    Predicting the future
    Predictions in the political sphere
    Hyper-relevance
    Nudging
Legal issues concerning surveillance capitalism
    The Andy Grove formula
    Not an accident
The ultimate revolution: Will surveillance capitalism get us to love our servitude?
    Brute force versus love
    Standardization
    Comparing dystopias
    People as instruments
Conclusion
References


Surveillance capitalism: A love story about slavery

The gathering and storing of personal information by tech companies is becoming part of our everyday lives in the new landscape of digital surveillance. The complexity of the matter has made it hard for people to put a finger on the problem, or even to decide whether there is a problem to put a finger on.

What kind of information are companies like Google and Facebook extracting from us? Who wants this information, and for what purposes? What is the raw material in this virtual realm? Who are the laborers? And what is the end product?

These are some of the questions I intend to answer in this essay, which aims to provide a detailed picture of this new landscape. Surveillance capitalism, hereafter simply SC, is an umbrella term coined by Harvard Business School professor Shoshana Zuboff to put a name on the problem. The term was put forth in Zuboff's 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Zuboff, 2019). I intend to use some key terminology from Zuboff's book to showcase what SC entails and what living in a world where this new form of capitalism thrives could mean for us and future generations.

The first chapter of this essay, "The beginning of surveillance capitalism", focuses on the socio-political and economic environment from which SC was born. The second chapter, "Behavioral surplus: Breadth and scope", explains what behavioral surplus is, where it comes from, and the significance it carries in the SC marketplace. Chapter three, "Predictions and nudges", looks at SC's final product, algorithmically constructed predictions about consumer behavior, and at the concept of the hypernudge, put forth in 2017 by law professor Karen Yeung (Yeung, 2017). Chapter four, "Legal issues concerning surveillance capitalism", sheds light on some of the issues concerning the legality of data collection and on concerns raised about monopolization in the digital sphere. In the last chapter, "The ultimate revolution: Will surveillance capitalism get us to love our servitude?", I introduce theories and predictions put forth by the English writer Aldous Huxley, both in his 1932 novel Brave New World and in a 1962 speech at the University of California, Berkeley, titled The Ultimate Revolution (Huxley, 1962/2019). There I aim to showcase significant links between Huxley's thoughts and predictions and today's landscape of SC, specifically the role SC plays in shaping today's world and possibly the future of democratic society.

The beginning of surveillance capitalism

In this chapter, I aim to explain succinctly the cultural, social, and economic environment that gave birth to contemporary surveillance capitalism: how and why pre-existing technology, used for some years to improve Google's search engine, was repurposed to gain unwarranted insight into the everyday lives of Google users around the world, and what effects this had on technological advancement across all sectors. I will also explain some key concepts from Zuboff's book on SC, such as behavioral surplus and behavioral byproducts (Zuboff, 2019, p. 97).

The blueprint

As Zuboff points out in her book, to fully understand SC one must begin at the very beginning, where the blueprint for SC was laid out, i.e., at Google. Google came on the market in 1998 and quickly became a popular search engine. The PageRank algorithm, invented by Google co-founder Larry Page, identified the most popular results for searches and is credited with giving Google an advantage over other search engines (Zuboff, 2019). Google is also credited with being the first search engine to make use of behavioral byproducts (Zuboff, 2019).

Behavioral byproducts are the facts about searches that go beyond keywords: spelling errors, how long an individual types, phrasing, and the location where the search takes place. In Google's early years this data was used strictly to improve the search engine, a process Zuboff titles the behavioral value reinvestment cycle (Zuboff, 2019, p. 70). Google also stored behavioral data that was not used up in improving the search engine; this Zuboff titles the behavioral surplus (Zuboff, 2019, p. 74). At first, this data simply sat on hard discs and was of no particular interest to anyone, but it would later become of vital importance to SC (Zuboff, 2019). As more users turned to Google to learn about all sorts of things, Google was learning more from the behavioral data left by its users, and using that information to improve the search engine. As the search engine came up with more relevant results for each user, more people started to use Google, which led to more behavioral surplus (Zuboff, 2019).
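The core idea behind PageRank, that a page is important when other important pages link to it, can be sketched in a few lines of code. The snippet below is a minimal, illustrative version only, not Google's actual implementation; it assumes every page has at least one outgoing link, and the three-page "web" in it is hypothetical.

```python
def pagerank(links, d=0.85, iterations=50):
    """Minimal PageRank sketch.

    links maps each page to the list of pages it links to.
    A page's score is the chance that a "random surfer" lands on it,
    following links with probability d and jumping to a random page
    otherwise.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal importance
    for _ in range(iterations):
        # every page gets the "random jump" share ...
        new = {p: (1.0 - d) / n for p in pages}
        # ... plus an equal slice of each page that links to it
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += d * share
        rank = new
    return rank

# A tiny hypothetical web: A links to B and C, B links to C, C links to A.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
```

Running this, "C" ends up with the highest score, since every other page links to it directly or indirectly; with more iterations the scores converge to the stationary distribution of the random-surfer process.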

A customer or a laborer

Most tech companies that came before Google had offered something for sale: Microsoft sold software and computers; Apple sold computers and would later go on to sell MP3 players and phones, among other things. Google, however, offered nothing for sale to the general population. How, then, was it to capitalize on its revolutionary technology? Today Google has expanded into multiple areas, but most of the company's flagship products, such as Google Maps, the search engine, and Gmail, are free (Zuboff, 2019). Nevertheless, Google often refers to the people who use its products as customers, even though in most cases no fees are exchanged for goods or services (Zuboff, 2019).

This dilemma raises the question: who are the company's actual customers? Another widespread rule of thumb that Zuboff addresses in her book is to think of the users as the product. This, she claims, is also not accurate: the users of Google should instead be thought of as the producers of the raw material (i.e., behavioral data) that the company transforms into economic goods (Zuboff, 2019).


In its first years, Google made profits from licensing deals to provide web services for other web portals, such as Yahoo and the Japanese company BIGLOBE. It also made some revenue from selling advertisement space, but Google's founders had always been wary of introducing advertisements to their search engine (Zuboff, 2019). This shines through in a 1998 paper written by Larry Page and Sergey Brin while they were still students at Stanford University, in which they state:

Search engines have migrated from the academic domain to the commercial. Up until now most search engine development has gone on at companies with little publication of technical details. This causes search engine technology to remain largely a black art and to be advertising oriented. With Google, we have a strong goal to push more development and understanding into the academic realm (Brin and Page, 1998, p. 109).

A bursting bubble

In the late 1990s, the business world and Silicon Valley went through what was later dubbed the dot-com bubble. During this time, the number of people using the Internet was growing steadily every day, and many investors were pouring money into internet-related companies, including Google (Zuboff, 2019). By 2000 the bubble had burst, as many of the investors who had put their money into internet startups began to doubt that the companies in which they had invested would ever turn a profit. These events caused a minor international stock-market crash and considerably bigger turmoil in Silicon Valley. Although Google was well on its way to becoming the most widely used search engine in the world, and its offices were filling up with CVs from people who wanted to join the team, this was not reflected in its profits (Zuboff, 2019).

Two articles published in The Wall Street Journal in the early 2000s capture the atmosphere of Silicon Valley during those turbulent times. An article titled "The new dot-com mantra: 'Just pay me in cash, please'", written by Susan Pulliam, describes the change in attitudes regarding compensation for dot-com employees:

With the collapse of many Internet stocks, both executives and rank-and-file are pressuring more and more Internet companies to increase the amount of compensation they pay out in cash rather than stock options, many of which have become worthless as the shares have plummeted (Pulliam, 2000, para. 2).

Another article in The Wall Street Journal, "Dot-com bubble has burst; will things worsen in 2001?" (Swisher, 2000), reflects the changed attitude towards dot-com businesses among the venture capitalists funding these companies, i.e., private investors providing capital in exchange for an equity stake. A venture capitalist interviewed for the article claimed that the climate had changed: to remain a major player in the years ahead, it was no longer enough to display an ability to make money; companies would have to show sustained and exponential profits from a variety of sources (Swisher, 2000). In the year 2000 Google was selling ads through its advertising arm, AdWords (now known as Google Ads). Using click-rate metrics, advertisers would target specific ads at specific groups of people based on their search history, but this was all about to change (Zuboff, 2019).

An electron microscope into people’s behavior

Intending to make the ads shown to users more relevant, Google decided to change how it targeted advertisements. Larry Page decided that Google would choose the keywords to which advertisements would be linked, and behavioral data that had until then been used only to improve Google's search engine would now be used to target advertisements at specific users. Some behavioral data would still go towards search-engine improvement, but the behavioral surplus was to be used for advertisement targeting, making the ads more relevant to each user based on his or her online presence. This would later prove extremely profitable for Google and a watershed moment for SC (Zuboff, 2019).

In 2002 The New York Times published an article titled "Postcards from planet Google" (Lee, 2002), in which the reader gets a glimpse of the inner workings of Google during the first years of surveillance capitalism. The author, Jennifer 8. Lee, interviews several Google employees whose job is to analyze the stream of queries that Google processes each second. The article covers the significance of the "Carol Brady moment". One morning, Google's log team came into work and realized that the question "What was Carol Brady's maiden name?" had topped the search queries the night before. Specifically, there were five spikes in searches for the question, each occurring 48 minutes after the hour. Carol Brady was a character in the popular '70s sitcom The Brady Bunch, her maiden name was the final question on the quiz show Who Wants to Be a Millionaire that night, and the five spikes were spread over states in different time zones (Lee, 2002). Google co-founder Sergey Brin described studying the Carol Brady search data as "trying an electron microscope for the first time" (Lee, 2002).

In the article, Lee describes the significance of the data Google is obtaining:

Google's query data responds to television, movies, and radio. But the mass media also feed off the demands of their audiences. One of Google's strengths is its predictive power, flagging trends before they hit the radar of other media. As such it could be of tremendous value to entertainment companies or retailers. Google is quiet about what if any plans it has for commercializing its vast store of query information (Lee, 2002, para. 24).

When asked about the future use of this data, Google employee Craig Silverstein responded: "There is tremendous opportunity with this data, the challenge is defining what we want to do" (Lee, 2002).

In the following chapters, I intend to demonstrate what these opportunities were and what Google decided to do with the harvested data. Google's move away from its original business model towards a more advertisement-oriented strategy created a blueprint for SC. Google had also shown its competitors how they could save their companies and start making exponential profits for their investors. Unsurprisingly, the other major companies soon followed suit.

Behavioral surplus: Breadth and scope

In this chapter, I aim to demonstrate the further development of SC and show how many recent technological advancements have widened its scope. Furthermore, I will showcase how many of the technologies that have become part of our daily lives in recent years harvest information from their users for monetization purposes. In Zuboff's theory, this is titled the extraction imperative (Zuboff, 2019, p. 87).

Zuboff explains that the difference between industrial capitalism and surveillance capitalism lies in the fact that industrial capitalism demands economies of scale in production, to achieve high throughput and low unit cost, whereas surveillance capitalism demands economies of scale in the extraction of behavioral data (Zuboff, 2019). The technological marketplace can therefore be looked at as twofold. On the one hand, companies are creating products for the modern consumer: products that we view as practical, useful, neat, or beautiful. On the other hand, a different kind of development is taking place. In industrial capitalism, the manufacturer needs to get the product to the consumer, but in SC it does not end there. The gadget, software, app, or website must also be able to extract information from the consumer, for that is the raw material from which many of these companies build their fortunes (Zuboff, 2019). Zuboff titles this built-in ability to collect information the extraction architecture (Zuboff, 2019, p. 129).

SC levels up

In 2015 the Web Privacy Census revealed that a person who visited the 100 most popular websites on the internet would end up with over 6,000 tracking cookies on his or her computer; 83% of these cookies were left by third parties and were completely unrelated to the content of the visited websites. The census furthermore found that 92 of the 100 most visited websites, and 923 of the 1,000 most visited websites, carried Google tracking infrastructure (Altaweel et al., 2015). The invention of the smartphone has played a vital part in broadening the scope of SC, opening up a whole new stream of behavioral surplus: users now carry devices with built-in extraction architecture everywhere they go (Zuboff, 2019).

In 2017 the Yale Privacy Lab and the French non-profit organization Exodus Privacy identified 44 trackers in over 300 apps available through Google Play, the app store pre-installed on phones running the Android mobile operating system (Grauer, 2017). The research showed that dating apps, weather apps, and a flashlight app were collecting significant amounts of information from their users that had nothing to do with the functionality of the applications. The researchers furthermore noted that many of the apps were designed in such a way that, although they might not be collecting information right now, nothing prevented their controllers from changing the apps' functions at any given moment so that they would start obtaining personal information (Grauer, 2017).

The marketplace for behavioral surplus is ever-growing, and the companies with the most-used gadgets, websites, or apps obtain the greatest amounts of it. With recent technological advancements, SC is no longer confined to the digital sphere (Zuboff, 2019).


In 2007 Google launched Google Street View (O'Brien, 2010). In the beginning, the service was available only for San Francisco, New York, Miami, Denver, and Las Vegas, but the project soon expanded and Google began the ambitious task of mapping the whole world. In many European countries, the initiative was met with resistance, most notably in Germany, where in 2010 the federal data protection authority proclaimed that the Google Street View cars mapping Germany were covertly obtaining and storing personal data from private Wi-Fi networks (O'Brien, 2010). At first Google refuted the allegations, but in a post published on the official Google blog on May 17th, 2010, Google confessed to having collected private data, while claiming it had not done so deliberately: the collection was due to a mistake made by a single engineer (O'Brien, 2010). Germany's Federal Commissioner for Data Protection and Freedom of Information, Peter Schaar, expressed his doubts that the incident had been a mistake on Google's behalf in an entry posted to his government blog, later quoted in The New York Times:

So everything was a simple oversight, a software error? The data was collected and stored against the will of the project's managers and other managers at Google. If we follow this logic further, this means: The software was installed and used without being properly tested beforehand. Billions of bits of data were mistakenly collected, without anyone in Google noticing it, including Google's own internal data protection managers, who two weeks ago were defending to us the company's internal data protection practices (O'Brien, 2010, para. 12).

Omnipresent

The last decades have seen tremendous technological advancement, most of it connected in one way or another to the internet. The internet is no longer something accessed only through a computer or phone; it is becoming omnipresent. The fastest-growing consumer technology today is that of intelligent personal assistants (Pridmore and Mols, 2020). In a panel discussion at the World Economic Forum held in Davos, Switzerland, in 2015, Google chairman Eric Schmidt was asked about the future of the internet and replied:

I will answer very simply that the Internet will disappear. There will be so many IP addresses, so many devices, sensors, things that you are wearing, things that you are interacting with that you won't even sense it. It will be part of your presence all the time. Imagine you walk into a room, and the room is dynamic. And with your permission and all of that, you are interacting with the things going on in the room (Smith, 2015, para. 3).

The reality Schmidt describes is commonly referred to as the "internet of things", where everything from household items to clothing is connected to the Internet (Smith, 2015). These advancements, according to Zuboff, hold tremendous opportunity for surveillance capitalists, as they mean the extraction architecture is no longer confined to the Internet in terms of online applications, websites, emails, and online activity. We now wear, talk to, sleep on, and vacuum with internet-connected objects that have built-in extraction architecture. With the internet of things, our online presence is no longer bound to the time we spend on our phones or computers; our lives turn into our online presence, and every aspect of them therefore becomes suitable for extraction: all our behavior can be monitored, stored, sold, and monetized (Zuboff, 2019). In 2017 the company iRobot publicized a new version of its robotic vacuum cleaner, the Roomba, which, armed with a laser sensor, camera, and GPS capability, maps your home while it vacuums. iRobot CEO Colin Angle raised the option of selling these maps to companies such as Google and Amazon in an interview he gave to the news organization Reuters that same year. When asked about the privacy issues connected to mapping people's private homes, he replied that he believed most customers would give their consent in order to access smart-home functions (Wolfe, 2017).

The extraction imperative does not end when we fall asleep. The company Sleep Number has put a smart bed on the market that can heat or cool the body and record your sleeping habits. The bed also records your heart rate, breathing, and movement, as well as all audio signals from the bedroom (Zuboff, 2019). Sleep Number warns customers that it may share their personal information even after they deactivate or cancel their services (Zuboff, 2019). The brochure that comes with the bed states that giving up the information is of course up to the customer, but that if you opt out, the company cannot guarantee certain services, and some features of the bed will not work (Zuboff, 2019).

According to Zuboff, these examples showcase the eagerness of surveillance capitalists to obtain our data. If one buys a product and tries to hinder the inner workings of its extraction architecture, the product simply will not work, even though allowing the manufacturer to sell the information it obtains from you to third parties has nothing to do with the product's functionality. Consumers are therefore, as Zuboff points out, paying twice for every product: first they pay for the product itself, and then they pay with an unlimited amount of raw material in the form of behavioral byproducts, extracted from them via the product's extraction architecture, which the manufacturer can sell to third parties. If they refuse to pay this second time and do not accept all the terms of agreement, the product simply will not work (Zuboff, 2019).

Emotional surplus

How a person expresses himself or herself on digital platforms does not necessarily reflect that person's emotional state, according to Zuboff, and the way people talk, carry themselves, and interact with their loved ones does not necessarily, she continues, reflect the individual's deepest self. People may be hiding things, even from themselves, but certain signals can perhaps tell the true story of a person: a heart rate, a sudden rush of adrenaline, the flow of dopamine, a sudden twitch. If someone obtained this kind of information, it could be used to build a very accurate map of an individual's inner life, especially when put together with location, voice, video footage, etc. As technological advancement proceeds to enter new aspects of our everyday lives, the extraction architecture is placed ever deeper. With the extraction architecture entering new territories, the behavioral surplus becomes more accurate and varied, and the map that can be built of an individual becomes more precise (Zuboff, 2019).

In 2017 the research and analyst firm Ovum released a report titled The Future of E-Commerce: The Road to 2026. In this report, Ovum stated that the sharp increase in wearables (electronic devices worn on clothing or skin) would be a "source of very granular data insights and also new types of behavioral and usage data. Wearables of the future will have the ability to capture a wide array of data related to a user's contextual activity, health, and emotional state. This information can be used to enhance and tailor products and marketing messages to a very high degree" (Ovum report, 2017).

In 2014 Facebook (hereafter FB) applied for a patent on emotion detection, as Zuboff points out in her book. The patent application described software modules capable of detecting emotions, expressions, and other characteristics of a user from image information (Zuboff, 2019). The detectable emotions FB listed included joy, humor, amazement, excitement, surprise, a frown, sadness, disappointment, confusion, jealousy, indifference, boredom, anger, depression, and pain. The hope was that such a module could read a user's interest in displayed content in order to create emotion-based customization (Zuboff, 2019).

In 2014 the results of an experiment carried out by FB together with Cornell University and the University of California were published. The experiment dealt with massive-scale emotional contagion through social networks: researchers filtered users' news feeds, their flow of content and exposure to FB friends, in order to study the emotional reactions this produced. The conclusion was that emotions expressed by others through social networks influence users' moods, constituting the first known experimental evidence of massive-scale emotional contagion (Kramer et al., 2014). The results showcase the kind of power over people's emotional lives that a company like FB wields.

The music we listen to offers another window into our hearts and states of mind in the digital era. The Swedish audio-streaming company Spotify has utilized the information flowing to it through its customers' listening habits to help third parties make ads more relevant, through a playlist-targeting ad platform announced on Spotify's website on May 1, 2015. The platform allows advertisers to target listeners who have been categorized on the basis of Spotify's 1.5 billion playlists, and advertisers can furthermore choose which of these categories they want to advertise to (Gesenhues, 2015).

In 2017 a confidential internal FB document, meant to showcase to advertisers FB's ability to determine the mood of teenagers using the platform, was leaked and published in the Australian newspaper The Australian. The document details how FB can detect the emotional states of teenagers and young people in real time. By monitoring posts, pictures, and internet activity, FB can detect when teenagers feel "stressed", "defeated", "overwhelmed", "stupid", or "silly" (Levin, 2017).

The net catches the shoal

As we have seen, the amount of information it is possible to extract from us is seemingly endless; taken all together, it is quite plausible that the holders of this information can know a person more fully than that person knows him- or herself. It is no longer enough to have the best or most reasonably priced product on the market to be a top competitor in the data marketplace. The top tiers are the ones that can extract the best, most varied, and most precise information from the users (Zuboff, 2019). It also becomes increasingly difficult, if not impossible, to use technology without allowing access to private information (Zuboff, 2019). There are regulations that aim to protect everyday internet users from having personal information extracted from them without their consent. One of these is the GDPR (General Data Protection Regulation). The GDPR, adopted by the European Union, states that: "Users must freely give a clear and affirmative action to indicate their consent in order for your website to activate cookies and process personal data" (What is the GDPR, n.d.).

However, the GDPR's website also recognizes that it is extremely difficult for website operators to know whether their websites use cookies (small pieces of code that allow information to flow between a server and a client computer) (Zuboff, 2019). The GDPR's definition of biometric data (i.e., emotions, voice tones, wording, etc.) focuses on data that can be used to confirm the unique identification of a person, so as long as it cannot be proven that the companies know whom they are monitoring, they are not breaking European law (McStay, 2020).

Ethics professor Luciano Floridi has proposed a new way to approach the subject of privacy. He proposes that the right to privacy is not only an individual right but also the right of a group. The picture that surveillance capitalists are after is not that of an individual; it is a mosaic put together from many smaller pictures (Floridi, 2014). Based on this, Floridi concludes that it is not enough to protect the identity of the individual; the group must be protected as well. To strengthen this point, Floridi uses the metaphor of the famous whale Moby Dick from the Herman Melville novel of the same name. The net of the extraction architecture is not put out to capture the big whale; it is put out to catch the everyday people, all of us as a group, a shoal of sardines. For as Floridi puts it: "There are very few Moby-Dicks. Most of us are sardines. The individual sardine may believe that the encircling net is trying to catch it. It is not. It is trying to catch the whole shoal. It is therefore the shoal that needs to be protected, if the sardine is to be saved" (Floridi, 2014).


Predictions and nudges

To understand the inner workings of SC, one can look at it as a traditional factory. The chutes that bring in the raw material would then be the extraction architecture, which I wrote about in chapter two, where I detailed its evolution and outlined how far, wide, and deep it has spread into every aspect of our daily lives. The extraction architecture sucks in the raw material, the behavioral surplus; the behavioral surplus is then processed by computers and data analysts, and the end product, predictions, is thereby created. Much like the extraction of behavioral surplus, predictions have also evolved and changed over time. The more advanced the extraction architecture becomes and the further it spreads, the more detailed the behavioral surplus it can extract, and so the individual mapping, as well as the predictions, becomes more detailed. In this way, the extraction imperative is closely knit together with what Zuboff titles the prediction imperative (Zuboff, 2019, p. 198). What kind of behavioral surplus is extracted is decided by asking: "What forms of surplus enable the fabrication of prediction products that most reliably foretell the future" (Zuboff, 2019, p. 198).

Omnipotent

Behavioral surplus gives the surveillance capitalists information about the general

population. To be exact, after the surplus is analyzed, it is possible to build predictions. If you

build a profile of an individual, i.e., the person's past and current online state of affairs; where

they go; the person’s friends and families; where they shop; and what kind of culture they are

into, you can more easily predict what this person will do next; the person’s weaknesses; their

desires; what they fear and how the person perceives him- or herself (Zuboff, 2019). The

interactive nature of modern-day technology also allows for suggestions, nudges, and pushes

from tech companies. The predictions play the part of mapping out when and where these

nudges and pushes will be the most likely to succeed (Zuboff, 2019).


The second trimester: A retailer's goldmine

In a New York Times Magazine article entitled “How companies learn your secrets”,

journalist Charles Duhigg wrote about data collection at the US retail chain Target. According to Duhigg, every Target customer is assigned a unique code, a Guest ID, without the customer's knowledge or consent (Duhigg, 2012). In the article, Target statistician Andrew Pole explains how these Guest IDs are created. He says: "if you use a credit card, or a

coupon, or fill out a survey, or mail in a refund, or call the customer helpline, or visit our

Website, we’ll record it and link it to your Guest ID” (Duhigg, 2012). Duhigg stated that the

Guest ID also contains information such as:

Demographic information like your age, whether you are married and have kids, which

part of town you live in, how long it takes you to drive to the store, your estimated

salary, whether you’ve moved recently, what credit cards you carry in your wallet and

what websites you visit (Duhigg, 2012, para. 7).

Duhigg furthermore stated that the information that Target did not obtain through their

customer’s Guest ID, they could purchase through third parties. The information available for

sale to Target includes information about their customer's ethnicity, job history, the magazines

they read, if they’ve ever declared bankruptcy or got divorced, the year they bought (or lost)

their house, where they went to college, what kinds of topics they talk about online, whether

they prefer certain brands of coffee, paper towels, cereal or applesauce, their political leanings,

reading habits, charitable giving and the number of cars they own (Duhigg, 2012).

People are creatures of habit, which makes it a challenge for retailers like Target to convince their customers of new ways of consumption (Duhigg, 2012). There are, however, certain times in life when people are more susceptible to changes in their consumer habits than usual; if a retailer manages to plant a new habit during such a window, the new habit can stay for years (Duhigg, 2012). Pole's task was to figure out which of Target's customers were likely to be going through such a period (Duhigg, 2012). One example of a major life event that can cause a shift in a person's consumer habits is pregnancy (Duhigg, 2012). After the baby is born there is of course also a change in consumption: diapers, pacifiers, etc. Birth records are, however, available to everyone, so by the time the baby is born, pamphlets and coupons from all sorts of retailers flood the parents' inboxes and letterboxes. This makes it extremely lucrative for retailers to find out which of their customers could be pregnant before the baby is born, so they can be ahead of the curve (Duhigg, 2012). It is with

this incentive in mind that Pole set out to create the Target Pregnancy Score. By identifying 25 products that he considered related to pregnancy in one way or another and analyzing the purchase data of the people who bought these products, he was able to estimate which Target customers were pregnant, along with their estimated due dates. Target would then send pregnant customers targeted advertisements, each designed to fit the customer's trimester (Duhigg, 2012). One Target employee interviewed for Duhigg's piece gave a hypothetical example that paints a clear picture:

Take a fictional Target shopper named Jenny Ward, who is 23, lives in Atlanta and in

March bought cocoa-butter lotion, a purse large enough to double as a diaper bag, zinc and

magnesium supplements, and a bright blue rug. There’s, say, an 87 percent chance that she’s

pregnant and that her delivery date is sometime in late August. What’s more, because of the

data attached to her Guest ID number, Target knows how to trigger Jenny’s habits. They know

that if she receives a coupon via e-mail, it will most likely cue her to buy online. They know

that if she receives an ad in the mail on Friday, she frequently uses it on a weekend trip to the

store. And they know that if they reward her with a printed receipt that entitles her to a free cup

of Starbucks coffee, she’ll use it when she comes back again (Duhigg, 2012, para. 45).

Women in their second trimester are thought to be especially flexible in their consumer habits, and for this reason, figuring out which of their customers are in the second trimester is of vital importance to retailers. As Pole states: "We knew that if we could identify them in

their second trimester, there's a good chance we could capture them for years” (Duhigg, 2012).
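Duhigg does not disclose the model behind the pregnancy score, but the general idea he describes, scoring a basket of purchases against a weighted list of pregnancy-related products, can be sketched in a few lines of Python (the product names, weights, and logistic link below are invented purely for illustration):

```python
import math

# Hypothetical weights for pregnancy-related products. Duhigg reports
# that Pole identified roughly 25 such products, but not which ones or
# how they were weighted; these values are invented for illustration.
PRODUCT_WEIGHTS = {
    "cocoa-butter lotion": 1.2,
    "zinc supplement": 0.8,
    "magnesium supplement": 0.8,
    "oversized purse": 0.5,
    "unscented soap": 1.0,
}

def pregnancy_score(purchases):
    """Turn a basket of purchases into a 0-1 score via a logistic link."""
    evidence = sum(PRODUCT_WEIGHTS.get(item, 0.0) for item in purchases)
    bias = -2.0  # baseline assumption: most shoppers are not pregnant
    return 1.0 / (1.0 + math.exp(-(bias + evidence)))

# The basket from Duhigg's Jenny Ward example:
basket = ["cocoa-butter lotion", "oversized purse",
          "zinc supplement", "magnesium supplement"]
print(f"{pregnancy_score(basket):.2f}")  # prints 0.79
```

A real system would fit such weights from historical purchase data rather than set them by hand; the point is only that a handful of mundane purchases, taken together, can yield a confident prediction about something as intimate as a pregnancy.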

The co-pilot

According to Zuboff, the personalization of technological products and the prediction imperative go hand in hand; the more personalized the gadgets become, the more they know, and the more precise the predictions become. As more and more aspects of our daily lives become interactive, the more passive we become about our choices: what we should eat, where we should go, what we should do, etc. In short, why not ask Google for suggestions? It knows us better than anyone else and can therefore predict what would suit us (Zuboff, 2019).

In 2012 Google launched a new function titled the Knowledge Graph, which is

described on the official Google blog as a “system that understands facts and information about

entities from materials shared across the web, as well as from open source and licensed

databases. It has amassed over 500 billion facts about five billion entities” (Sullivan, 2020).

The same year also saw the launch of Google Now, a web feature and mobile

application which, with the information stored in Google’s Knowledge Graph, suggests

relevant information for users throughout the day, e.g., when a train is scheduled, where to eat,

etc. (Only rich can afford PAs, but anyone can have Google now, 2013).

In an interview with The Economic Times Google chief economist Hal Varian was

asked about privacy issues concerning Google to which he replied: “Well, you trust your

lawyer, your doctor, your accountant with sensitive information, don’t you? You have to trust

them. Similarly, you have to trust Google.” (Only rich can afford PAs, but anyone can have

Google now, 2013). Zuboff, however, argues that unlike Google, "Doctors, accountants, and attorneys are held to account by mutual dependencies and reciprocities dictated by the extensive institutionalization of professional education, codes of conduct, and procedures for

evaluation and review” (Zuboff, 2019).


In 2014 Google introduced some new features to the Google Maps site/application. On the Google blog, they proclaim that: "With Google Maps by your side, you have a co-pilot for

everything from turn-by-turn directions, to discovering new restaurants to deciding which

hiking trails to climb next” (Lin, 2014).

As our gadgets become more intimate and we depend more and more on them, we start

to perceive them as neutral objects with no horse in the race. Yet, when we go to a lawyer, we

expect him/her to make his/her living practicing law, not from auctioning information. When

we ask our uncle for a restaurant recommendation we trust that he is not accepting money from

local pizzerias for suggesting them to his close relatives. Companies such as FB and Google

make the majority of their money through the monopolization of the information that we render

to them and the information that they suggest to us, so when Google gives you suggestions it is not neutral: your co-pilot is working for another airline.

Predicting the future

Information gathered from the Internet can also be used to make predictions in the job

market. For instance, the digital recruiting company HireVue assists companies that are in the

process of hiring staff members to make predictions about their applicants by obtaining

information about applicants such as their relationship status, interests, and spending habits

(Adler-Bell and Miller, 2018).

Predictions based on information extracted from internet devices are also made about

current employees. In a report titled The Datafication of Employment, writers Sam Adler-Bell and Michelle Miller chronicle the history of workplace monitoring. In the report, they write about cyber-security firms that companies hire to collect information about their employees in order to better predict their future behavior. One of the cyber-security firms listed in the report is the software company Red Owl, which gathers information such as:


Searches, tone, and expression in email, as well as physical movements, to assess the

potential risk that employees may pose to an organization, including whether

employees may be likely to engage in workplace organizing, talk to reporters, or share

sensitive workplace information. Such software can also be used to identify workers

who are unlikely to protest wage stagnation or a decline in conditions, due to a

combination of personal circumstances, economic liabilities, or emotional disposition

that may surface in a firm’s analysis of behavioral data (Adler-Bell and Miller, 2018,

para. 43).

In these cases, the predictive power that one gains from vaults of behavioral surplus is

not used to garner insights into people’s lives to offer them the right product at the right time,

as is the case with personalized ads for example. In some cases, behavioral surplus is used to

give companies insight into people's lives to better understand who is to be trusted, and who

could become a problem (Adler-Bell and Miller, 2018).

Another case of this nature is a 2015 Facebook patent titled Authorization and Authentication Based on an Individual's Social Network. The patent lists four different embodiments of the technology. The first embodiment deals with connecting email communications and FB in an attempt to prevent spamming. The second embodiment focuses on FB-related search queries. The third embodiment revolves around third-party content providers. The fourth embodiment reads as follows:

When an individual applies for a loan, the lender examines the credit ratings of

members of the individual's social network who are connected to the individual through

authorized nodes. If the average credit rating of these members is at least a minimum

credit score, the lender continues to process the loan application. Otherwise, the loan

application is rejected (Authorization and authentication based on an individual’s social

network, 2015, para. 9).
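The decision rule in the quoted embodiment is simple enough to restate as code. A minimal sketch, with the function name, cutoff, and ratings invented here since the patent describes the mechanism only in general terms:

```python
def process_loan_application(network_ratings, minimum_credit_score):
    """Apply the rule quoted from the patent: average the credit ratings
    of the members of the applicant's social network who are connected
    through authorized nodes, and continue only if the average meets
    the minimum credit score."""
    if not network_ratings:
        return "rejected"  # assumption: no network data ends the process
    average = sum(network_ratings) / len(network_ratings)
    if average >= minimum_credit_score:
        return "continue processing"
    return "rejected"

# An applicant whose connected friends average 620 against a 650 cutoff:
print(process_loan_application([700, 580, 580], 650))  # prints rejected
```

A real credit decision would of course weigh many more inputs; the sketch only mirrors the logic of the quoted paragraph, in which the applicant's own finances never appear.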


This embodiment of the patent enables FB to help lenders make predictions as to

whether a person is likely to be able to pay back a loan. The prediction is based not on the applicant's own record but on the social standing and credit ratings of the applicant's friends and relatives. In her book,

Zuboff states that actions such as Facebook's 2015 patent showcase the ushering in of the

uncontract (Zuboff, 2019, p. 333). The uncontract, in Zuboff's words, "abandons the human

world of legally binding contracts and substitutes instead the positivist calculations of

automatic machine processes” (Zuboff, 2019, p.333).

In most societies, lenders are not allowed to refuse a loan to an individual based on the

fact that his or her friends are poor. Employers are not allowed to ask employees whom they

vote for, or if they remember to refill their birth control prescription, or if they work out, let

alone make decisions regarding their employment status based on this knowledge. But because

people have already ticked the digital boxes, allowing surveillance capitalists to harvest their data and sell it to third parties, employers, lenders, insurers, and renters can now gain

this privileged access into people’s lives and use this information to make predictions and use

these predictions to make decisions. So, in order to get a job, it is no longer enough to present

your very best at the present moment, you must also have an impeccable past. If you plan to

get a loan, you must make sure that you stop socializing with your low-income friends and

family well in advance, because even if you can show a steady stream of income, which in the

past would have been enough, chances are that the odds will be against you.

Predictions in the political sphere

Prediction models have also made their way into the political sphere. In Barack Obama's 2008 presidential campaign, Eric Schmidt, who was then the CEO of Google, worked as a campaign adviser and brought the science of behavioral prediction into the campaign (Kreiss and Howard, 2010). In research published by media scholars Daniel Kreiss and Philip Howard, it is estimated that the campaign collected data on more than 250 million individuals.


This data included: “A vast array of online behavioral and relational data collected from use of

the campaign’s web site and third party social sites such as Facebook” (Kreiss and Howard,

2010).

In a book published in 2012, entitled The Victory Lab, journalist Sasha Issenberg chronicles the 2008 Obama campaign and the significance that data collection had for it (Issenberg, 2012). In the book, one of the political consultants working for the campaign claimed that the insight they gathered into the voters' minds was so significant that the campaigners knew whom people were going to vote for before the voters themselves had made up their minds (Issenberg, 2012).

Hyper-relevance

In an essay titled Manipulate to empower: Hyper-Relevance and the contradictions of

marketing in the age of surveillance capitalism (2020), marketing professors Aron Darmody

and Detlev Zwick write about the future of marketing in the age of SC. In the essay, they state

that the consumer market has changed radically in recent years due to rapid globalization and the rise of the internet marketplace. Because of this change, they say, many companies now focus less on advertising and more on predictions and behavioral manipulation. In this digital age, according to Darmody and Zwick, the consumer himself looks for what he wants; he is no longer looking at billboards or newspaper ads to figure out who is offering what, and he likes to imagine himself at the steering wheel, scrolling through the marketplace, browsing on his own for what he desires. This changes the nature of advertising and the consumer market (Darmody

and Zwick, 2020). Instead of showcasing or advertising their product, companies, according to

Darmody and Zwick, now focus on steering the individual in the right direction, their direction

(Darmody and Zwick, 2020). In recent times, however, many tech companies have come under scrutiny for these practices. The task of digital marketers has therefore become to figure out a way to intensify consumer surveillance, manipulation, and control while appearing to steer away from such practices (Darmody and Zwick, 2020). The solution to the aforementioned

conundrum comes in the form of hyper-relevance (Darmody and Zwick, 2020). In a marketplace of hyper-relevance, information flows freely from the consumer to big data companies, and in exchange for that information, the consumer gets a consumer experience of hyper-relevance in which everything that pops up is relevant to him. In other words, the consumer knows that the algorithm knows, and can therefore trust that any experience he might desire, goods he wishes to purchase, or services that fit him will appear on his screen automatically. And since the information flows through both channels in real time, the hyper-relevance never ends (Darmody and Zwick, 2020). In this world of hyper-relevance, big data companies manipulate choice contexts and decision-making, but because the information is relevant to the user and the algorithm is personalized, it gets passed off as consumer empowerment, leaving the consumer satisfied (Darmody and Zwick, 2020).

Nudging

In 2008 legal scholar Cass R. Sunstein and economist Richard H. Thaler published a book titled Nudge: Improving Decisions about Health, Wealth and Happiness. In the book, Sunstein and Thaler explore, among other things, the many choices people make during a lifetime, why we choose one thing over another, and what it is that "nudges" us in any given direction. They also introduce the concept of the choice architecture (Sunstein and Thaler, 2008, p. 6). The choice architecture is a way of presenting choices to consumers; unbeknownst to the person or persons making the choice, the choices have been designed in a specific way that makes the consumer more likely to choose one thing over another (Thaler and Sunstein, 2008).

In 2017, law and ethics professor Karen Yeung published a paper titled Hypernudge:

Big Data as a mode of regulation by design. In the paper, Yeung utilizes some of the concepts

put forth in Sunstein’s and Thaler’s book and puts them in a digital perspective to explain the


means of behavioral modification that SC possesses. A key concept of the paper is the hypernudge (Yeung, 2017, p. 1). The nudge presented in Sunstein's and Thaler's book is focused on affecting people's choices in supermarkets and city streets, whereas the hypernudge focuses on altering people's behavior through an online platform (Yeung, 2017). Yeung states that the hypernudge, much like the nudge presented in the aforementioned 2008 book, is based on the choice architecture. Both the nudge and the hypernudge are soft forms of design-based control that aim to alter people's behavior without changing their economic incentives or forbidding any choice. The difference between the two, however, lies in the fact that the hypernudge is a far more powerful, systematic, and potent behavior-altering tool. An example of the nudge would be placing spinach in a specific spot in the grocery store, or making a pyramid out of soda cans to render them a more exciting choice. In these instances, the choice architecture and the nudge are placed in a somewhat neutral setting, since the retailer does not have specific knowledge of every individual entering the store, whereas the designer of the hypernudge has intricate knowledge of the person being hypernudged (Yeung, 2017).

Yeung’s paper puts the control that big data companies have into perspective and sheds

light on the significance of the control one can gain by utilizing the insight into people's lives garnered through their behavioral surplus. For as Yeung puts it:

I argue that Big Data’s extensive harvesting of personal digital data is troubling, not

only due to its implications for privacy, but also due to the particular way in which that

data are being utilised to shape individual decision-making to serve the interests of

commercial Big Data barons. My central claim is that, despite the complexity and

sophistication of their underlying algorithmic processes, these applications ultimately

rely on a deceptively simple design-based mechanism of influence ‒ ‘nudge’. By

configuring and thereby personalising the user’s informational choice context, typically


through algorithmic analysis of data streams from multiple sources claiming to offer

predictive insights concerning the habits, preferences and interests of targeted

individuals (such as those used by online consumer product recommendation engines),

these nudges channel user choices in directions preferred by the choice architect

through processes that are subtle, unobtrusive, yet extraordinarily powerful (Yeung,

2017, p. 2).
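Yeung's description of a choice architect configuring and personalizing the user's informational choice context can be made concrete with a toy recommendation ranker. In this sketch (all option names, affinities, and margins are invented), options are ordered not purely by predicted user affinity but by affinity plus a hidden weight reflecting the architect's own commercial preference:

```python
# Toy hypernudge: rank options by predicted user affinity plus a hidden
# weight reflecting the choice architect's commercial interest.
# All restaurants, affinities, and margins are invented for illustration.

def rank_options(affinity, margin, architect_weight=0.0):
    scores = {opt: affinity[opt] + architect_weight * margin.get(opt, 0.0)
              for opt in affinity}
    return sorted(scores, key=scores.get, reverse=True)

affinity = {"local pizzeria": 0.9, "partner bistro": 0.7, "chain cafe": 0.4}
margin = {"local pizzeria": 0.0, "partner bistro": 0.5, "chain cafe": 0.1}

print(rank_options(affinity, margin))                        # neutral ranking
print(rank_options(affinity, margin, architect_weight=0.5))  # nudged ranking
```

With the architect's weight set above zero, the ordering shown to the user changes even though nothing is forbidden and no price changes, which is precisely the soft, design-based control Yeung describes.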

The hypernudge is an important factor in the SC marketplace and is linked both to Zuboff's writing and to the theory of hyper-relevance as put forth by Darmody and Zwick.

The behavioral surplus extracted via the extraction architecture allows you to study the mosaic

picture of people’s everyday lives, and make predictions. When are people vulnerable, when is

the moment of the day they are likely to make a purchase, and what is the best way to

hypernudge that individual to make sure that he follows through? This is what makes the

information garnered so valuable and powerful. It gives the possessor of the information an

unprecedented window into people's lives. By constructing a reality that makes the consumer

appreciative that this window exists, and convincing him that he is really at the steering wheel,

the consumer becomes passive and ready to be hypernudged towards the right product. He is

ready to be controlled.

Legal issues concerning surveillance capitalism

In this chapter, I aim to provide a broad overview of legal issues concerning SC. I intend

to show some of the friction that has come to pass between governments and big tech

companies, as well as between individuals and big tech companies, particularly Google. In

most cases, the lawsuits that have been filed by individuals aimed at companies in the SC

marketplace have to do with privacy rights, whereas the issues between governments and big tech companies are twofold: some concern privacy rights while others deal with monopolization. The largest monopolization issue concerns Google's


search engine. On Google's official website, however, they state that the agenda of their search engine is to:

Organize the world's information and make it universally accessible and useful. That's

why Search makes it easy to discover a broad range of information from a wide variety

of sources. Some information is purely factual, like the height of the Eiffel Tower. For

more complex topics, Search is a tool to explore many angles so you can form your

own understanding of the world (Our approach to search, n.d., para. 4).

The Andy Grove formula

In 2011 Google’s CEO Erick Schmidt was called before the US Senate Committee on

the Judiciary Subcommittee on Antitrust, Competition Policy, and Consumer Rights. The

hearing aimed to try to bring to light whether Google had gained such a stature in the digital

realm that it violated US competition laws. The hearing also aimed at shedding a light on

whether Google was using its search engine to steer people towards Google related businesses,

such as their travel data software (Puzzanghera, 2011). At the hearing, Schmidt was presented

with the findings of a study that compared the search results from three popular product

comparison sites to Google's product comparison site, Google shopping. When conducting the

same search on all four websites, the results varied, except for Google's platform which

consistently came up with Google related products in the third place (Puzzanghera, 2011).

Senator Mike Lee commented on the findings of the study stating: “When I see you magically

coming up third every time … you’ve cooked it so you’re always third” (Puzzanghera, 2011),

to which Schmidt replied: “Senator... I can assure you we have not cooked anything”

(Puzzanghera, 2011, para. 8). Senator Richard Blumenthal commented on the expansion of the

company from a search engine to service provider and manufacturer, and seller of goods and

services, and the impact that this expansion has had on competition in the digital sphere. He

summed his concerns up by stating that Google owns the racetrack, runs the racetrack, and


owns the horses that are winning. In his testimony, Schmidt repeatedly denied all implications of bias in the search engine and stated that Google simply aimed to provide relevant answers that it believed its users would like (Puzzanghera, 2011).

Jeremy Stoppelman, chief executive of Yelp, a company that lets its users write reviews of businesses, also testified at the hearing (Puzzanghera, 2011). Stoppelman was not convinced by

Schmidt’s statements regarding Google’s search engine and stated that: “It has little to do with

helping consumers get to the best information... It has everything to do with generating more

revenue” (Puzzanghera, 2011, para. 15).

Shortly after the Senate hearing in 2011, Schmidt gave an interview with the

Washington Post. In the interview, Schmidt cited a comment made by former Intel CEO Andy Grove to sum up his views on the relationship between big tech and the government. Schmidt

said: “This is an Andy Grove Formula. ... High Tech runs three times faster than normal

businesses, and the government runs three times slower than normal businesses, so we have a

nine times gap. ... And so what you want to do is you want to make sure that the government

does not get in the way and slow things down” (Cunningham, 2011, para 8).

The 2011 hearing was, however, far from the first time the question had been asked whether Google was, in fact, a biased search engine. In 2006, the proprietors of the UK technology company Foundem, which helps its users compare prices on electronics, books, and other items, realized that the traffic on their website had all of a sudden dropped significantly (Manthorpe, 2018). They soon realized that the drop was related to an update to the Google algorithm supposedly designed to root out spam from its search engine; the update caused Foundem's pages to drop significantly in the list of results of Google's search engine. Foundem's founders, Adam and Shivaun Raff, realized that no matter how hard they worked to make their site good, as long as it was flagged as spam on Google's search engine, they would hardly stand a chance. They tried to contact Google and convince

Page 34: Surveillance Capitalism - Skemman

33

them that they were a legitimate business and that debunking them on the search engine was

not necessary but nothing happened. In 2008 they were named as the top comparison site on

the Channel 5 television show The gadget show, the Raffs thought that maybe this information

could convince Google of their legitimacy, it did not (Manthorpe, 2018). It was at this point

that the Raffs realized that convincing Google of the legitimacy of their business would likely

never work or as they put it: “Fuck this, Google are bullies, this is wrong, we are going to win”

(Manthorpe, 2018, para. 15). After giving up on convincing Google of their legitimacy the

Raffs started a public campaign against Google trying to shed light on the issue at hand. In

2009 Google “manually whitelisted”, Foundems website, and traffic directed by Google onto

the Foundem site jumped by around 10.000 percent (Manthorpe, 2018). Although they

managed to have their website “whitelisted” Adam and Shivaun Raff were not satisfied and

decided to file a competition complaint against Google in 2010 (Manthorpe, 2018).

In 2017, the European Commission handed Google a fine of 2.42 billion euros, at the time the largest antitrust penalty ever handed to a single company (Manthorpe, 2018). In a statement regarding the issuing of the aforementioned fine, Competition Commissioner Margrethe Vestager stated that:

What Google has done is illegal under EU antitrust rules. It denied other companies the chance to compete on the merits and to innovate. And most importantly, it denied European consumers a genuine choice of services and the full benefits of innovation (European Union, 2017, para. 4).

In October 2020, the US Justice Department filed an antitrust lawsuit against Google. In the lawsuit, Google is described as a monopoly gatekeeper of the internet that does not shy away from using anti-competitive strategies to maintain its dominance in the marketplace (Rushe & Paul, 2020).


Not an accident

Lawsuits and fines against Google revolving around privacy laws have peppered Google's history for many years, including wire-tap allegations (Kravets, 2011) and a 100 million euro fine from the French data protection watchdog for breaching rules regarding cookies (Rosemain, 2020). In her book, Zuboff points out that many of the investigations involving Google and Facebook have ended not with government regulation of the companies' practices but with promises that the companies will themselves deal with issues regarding privacy and aim to do better in the future (Zuboff, 2019). This, however, Zuboff states, is not likely to happen, since it is fundamental for the companies that the stream of information from the users of these technologies to the surveillance capitalists at the helm remains unfettered and free, because the collecting and selling of personal information is the backbone of SC: "It is not a flaw, an error, a mistake or a natural progression" (Zuboff, 2019). Extraction of personal information is a vital part of the SC factory. The information must therefore remain, in Zuboff's words:

Unprotected and available at zero cost if this logic of accumulation is to succeed. These

requirements are also an Achilles heel. Code is law for Google now, but the risk of new

laws in its established and anticipated territories remains a persistent danger to

surveillance capitalism. If new laws were to outlaw extraction operations, the

surveillance model would implode. This market form must either gird itself for

perpetual conflict with the democratic process or find new ways to infiltrate, seduce,

and bend democracy to its ends if it is to fulfill its own inner logic. The survival and

success of surveillance capitalism depend upon engineering collective agreement

through all available means while simultaneously ignoring, evading, contesting,

reshaping or otherwise vanquishing laws that threaten free behavioral surplus (Zuboff,

2019, p. 105).


It is safe to assume that the lawsuit filed against Google in 2020 will not come to a conclusion for some years. The fact that lawsuits are now being filed against Google by both the EU and the US government, however, indicates that a shift may be taking place in the degree of tolerance towards big tech companies, at least where monopolization is concerned. The following years will therefore bring to light whether new regulations will alter the landscape of SC, or whether these companies will yet again remain nine times ahead of public institutions (Zuboff, 2019).

The ultimate revolution: Will surveillance capitalism get us to love our servitude?

In this chapter, I intend to incorporate some ideas and predictions put forth by the writer Aldous Huxley in a speech that he gave at the Berkeley Language Center of the University of California in 1962. I intend to connect some of Huxley's ideas to the history of SC as it is put forth in Zuboff's book. By doing so, I aim to demonstrate the dangers that SC poses to the constitution of democracy and people's free will.

Aldous Leonard Huxley was born in England in 1894. He published his first book in 1916 and would continue writing for the rest of his life (Murray, 2003). In his novel Brave New World, Huxley describes a dystopian world where all individuals have been stripped of their free will and desires and are sorted into a scientifically developed caste system, which determines their place in the community (Huxley, 1932). In the novel, Huxley expresses his worries concerning the future of politics, civilization, and human liberty, and the role that scientific developments could play in the future. The threat that new technologies could pose to the liberty and freedom of the population is a topic that Huxley would continue to write on for the rest of his life, and it is the main topic of his 1962 Berkeley speech, titled The Ultimate Revolution. In the speech, Huxley cites the revolutions of the past and categorizes them as the political revolution, the economic revolution, and the religious revolution. All these revolutions, according to Huxley, aimed at changing the frame, laws, and standards of society and, by doing so, at gaining the submission of the individual to new societal standards. The difference between the revolutions of the past and what Huxley predicted in 1962 to be the ultimate revolution, which he believed had then already begun to take place, is that whereas prior revolutions modified environments to gain control over the population, the ultimate revolution would involve modifications of the minds and bodies of the population (Huxley, 1962/2019).

Brute force versus love

Huxley points out that direct action on the bodies and minds of human beings has of course been taking place since the dawn of time, but the difference between these cases and what would take place in the ultimate revolution lies in the fact that, in the past, these acts have mainly been of a violent nature, be it imprisonment, torture, or other methods of physically and mentally bending people's minds to one's will. According to Huxley, however, you can only suppress people forcibly for so long; at some point, you will always need some form of consent for your actions from the masses, and the greater the consent, the more likely your revolution is to succeed and become a permanent way of living. The ultimate revolution, according to Huxley, would therefore make people love their servitude, without resorting to direct violence of any kind (Huxley, 1962/2019).

Huxley described his worries in the following way: "It seems to me that the nature of the ultimate revolution with which we are now faced is precisely this: That we are in process of developing a whole series of techniques which will enable the controlling oligarchy who have always existed and presumably will always exist to get people to love their servitude" (Huxley, 1962/2019). There are many similarities between some of the predictions that Huxley made in his Berkeley speech and the history of SC as put forth in Zuboff's book. One of these similarities lies in the fact that the gadgets and the software that many of us carry with us at all times, and that supposedly make life easier and more comfortable, contain a built-in extraction architecture that extracts behavioral data from us as we go about our daily lives. As seen in the previous chapters, this data is then processed and sold, meaning that the population is in fact willingly supplying information to, and creating wealth for, surveillance capitalists, while taking pleasure in new technologies. The consumer experience of hyper-relevance as described by Darmody and Zwick also touches on some of the themes Huxley discusses: a form of control that rests not on force but on people's willingness to partake in the system, willingly giving up control and taking pleasure in it.

Standardization

In his speech, Huxley also discusses the scientific caste system he believed was materializing – a system that he first described in his novel Brave New World:

I wrote thirty years ago, a fable, Brave New World, which is an account of society making use of all the devices available and some of the devices which I imagined to be possible making use of them in order to, first of all, to standardize the population, to iron out inconvenient human differences, to create, to say, mass produced models of human beings arranged in some sort of scientific caste system (Huxley, 1962/2019, p. 4).

Standardization is also an essential part of the SC system, insofar as guaranteed commercial outcomes are the reason for collecting behavioral surplus, i.e., making predictions and then intervening so that these predictions come true, or so that the individual is steered in the direction a given company may desire (Zuboff, 2019).

As we have seen in the previous chapters, the suggestive power of the hyper-connected gadgets with which we surround ourselves has grown over time, as these gadgets become a more integral part of our personal lives. The personalization of these software-intensive gadgets allows them to make ever more personalized suggestions to us, based on their intimate knowledge of us. The fact is, however, that what is suggested to you is only partially based on your preferences, i.e., only insofar as your preferences match the desired commercial outcomes of the search engine, smart system, or whichever other gadget is at issue (Zuboff, 2019).

Approximately 4.66 billion people are active internet users (Clement, 2020), and approximately 3.5 billion Google searches are made per day (Mohsin, 2020). This means that Google has tremendous control over what people see, hear, and read. The information that people receive in this way molds the decisions that they make (Zuboff, 2019). The offered information is not neutral but determined in auctions in which the third party with the most money can buy the most ads and pay for the highest ranking in search results. Companies that are already wealthy can therefore become wealthier and, at the same time, turn themselves into the new standard (Zuboff, 2019).

Comparing dystopias

Huxley also spoke of the importance of people's suggestibility with regard to the ultimate revolution. With the rise of SC, the placement of the extraction architecture has allowed companies to garner an insight into people's lives that is unprecedented in both its depth and breadth. Furthermore, the social normalization of gadgets and software making suggestions about people's choices has strengthened the position of surveillance capitalists. Another important factor is the number of people one can reach. Tellingly, Huxley already describes in his speech the difference between dictators of the past, who had to rely on word of mouth to spread their ideas, and Hitler, who could reach millions of people at the same time via the radio (Huxley, 1962/2019). Today's internet dwarfs the radio on all counts.


In his speech, Huxley compares his dystopian novel, Brave New World, with George Orwell's dystopian novel, 1984. He notes that the difference between the two dystopian visions is due to the fact that Orwell's book was written in 1949, during the time of Josef Stalin and right after the collapse of the Nazi regime. It was therefore a vision of the future very much colored by the present and the immediate past, whereas Brave New World was written before World War II, which Huxley claims gave him more space to ponder future means of control other than the brute force of a totalitarian dictatorship such as the one described in 1984. Huxley believed that the vision described in Brave New World would be closer to the truth than the society described in 1984. His belief was rooted in his conviction that making the masses love their servitude is a much more efficient means of control than brute force (Huxley, 1962/2019).

People as instruments

In her book, Zuboff describes the power of SC as instrumentarian (Zuboff, 2019, p. 352). By befogging social norms and breaking down the barrier between the offline and the online world, surveillance capitalists have created an environment that allows companies and governments to use people as instruments at their disposal, in the pursuit of their own goals (Zuboff, 2019, p. 352). The description of instrumentarian power put forth in Zuboff's book is quite similar to the scientific dictatorship described in Huxley's speech. It is not a power of violence and brute force but a ubiquitous, omnipresent bio-power that controls absolutely, yet without force or friction. As Zuboff describes it:

It is in the nature of instrumentarian power to operate remotely and move in stealth. It does not grow through terror, murder, the suspension of democratic institutions, massacre, or expulsion. Instead, it grows through declaration, self-authorization, rhetorical misdirection, euphemism, and the quiet, audacious backstage moves specifically crafted to elude awareness as it replaces individual freedom with others' knowledge and replaces society with certainty. It does not confront democracy but rather erodes it from within, eating away at the human capabilities and self-understanding required to sustain a democratic life (Zuboff, 2019, p. 352).

With the rapid growth of today's technology, the movement into the surveillance capitalist system is at times presented as inevitable. Zuboff, however, claims that the move into SC is: "not a necessary product of digital technology or the Internet. It is a specifically constructed human choice, an unprecedented market form, an original solution to emergency, and the underlying mechanism through which a new asset class is created on the cheap and converted to revenue" (Zuboff, 2019).

In his speech, Huxley warned that we must not let new scientific developments take us by surprise; we should instead look critically at the road that these new technologies are taking us down, before we reach our final destination and find ourselves deeply in love with our servitude (Huxley, 1962/2019).

Conclusion

Much like the controlling oligarchy that Huxley warned could use scientific advancements to gain unprecedented control over people's lives and decision rights, the power obtained by surveillance capitalists, which Zuboff describes in her book, does not control through means of violence or brute force. Instead, it weaves itself into people's lives with more and more precision, seeking more and more consent and appreciation with every step. If we allow it to happen, this force could slowly take over people's decision rights. By constructing the illusion that software and gadgets are helping us to make the right choice, by bringing us the right knowledge, at the right time, for the right reasons, surveillance capitalism has made us embrace a system that could threaten the existence of free will. We may think that our connected devices are working for us, making our lives easier, reaching conclusions faster, and strengthening our position as consumers, but could it be that we are in fact the ones doing the real work, churning out behavioral surplus at an exponential scale to be analyzed and monopolized by others with whom we have no direct contractual connection? Unpredictability and uncertainty are perhaps not things that we cherish much in our everyday existence, and many of us may even wish to be without them. They are, however, in my opinion, a big part of what makes us human, and to exchange the uncertainty and unpredictability that come with human decisions, made by human minds using human logic, for algorithmically built decisions, put together with mathematical precision, is indeed a step away from humanity towards something else.

Even though we are only sardines in the ocean and not Moby Dicks, as Floridi puts it, being aware of the issues at hand is still important. Though you are being controlled by an algorithm that does not know you by name and has no more interest in you than in your next-door neighbor, the fact remains the same: you are being controlled. Some may state that they do not care who knows what about them, since their lives are not interesting and they have nothing to hide. But of course, it does not end there, for it is not only a question of who knows what about you; the real question is who, or what, controls you. Are we willing to give up our decision rights for a world of hyper-relevance with customized advertising? This is something we have to decide on together. Although governments and privacy advocates are pushing for regulations and fines for surveillance capitalists, the change, I believe, must come from the general population, for if we love our servitude no fine or regulation will keep us from slaving away.


References

Adler-Bell, S., & Miller, M. (2018, December). The datafication of employment: How surveillance and capitalism are shaping workers' futures without their knowledge. The Century Foundation. https://tcf.org/content/report/datafication-employment-surveillance-capitalism-shaping-workers-futures-without-knowledge/?agreed=1

Altaweel, I., Good, N., & Hoofnagle, C. J. (2015, December 14). Web privacy census. Technology Science. https://techscience.org/a/2015121502/

Authorization and authentication based on an individual's social network. (2015). Google Patents. https://patents.google.com/patent/US9100400B2/en

Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Stanford University. http://www.cse.fau.edu/~xqzhu/courses/cap6777/google.search.engine.pdf

Clement, J. (2020, November 24). Worldwide digital population as of October 2020. Statista. https://www.statista.com/statistics/617136/digital-population-worldwide/

Cunningham, L. (2011, October 1). Google's Eric Schmidt expounds on his Senate testimony. Washington Post. https://www.washingtonpost.com/national/on-leadership/googles-eric-schmidt-expounds-on-his-senate-testimony/2011/09/30/gIQAPyVgCL_story.html

Darmody, A., & Zwick, D. (2020). Manipulate to empower: Hyper-relevance and the contradictions of marketing in the age of surveillance capitalism. Big Data & Society, 7(1), 1–12. https://doi.org/10.1177/2053951720904112

Duhigg, C. (2012, February 22). How companies learn your secrets. The New York Times. https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewanted=1&_r=1&hp

European Union. (2017, June 27). Antitrust: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service [Press release]. https://ec.europa.eu/commission/presscorner/detail/en/IP_17_1784

Floridi, L. (2014). Open data, data protection, and group privacy. Philosophy & Technology, 27(1), 1–3. https://doi.org/10.1007/s13347-014-0157-8

Gesenhues, A. (2015, April 17). Spotify's new "Playlist Targeting" lets brands segment ad audiences based on activities or moods. Marketing Land. https://marketingland.com/spotifys-new-playlist-targeting-lets-brands-segment-ad-audiences-based-on-activities-or-moods-125502

Grauer, Y. (2017, November 24). Staggering variety of clandestine trackers found in popular Android apps. The Intercept. https://theintercept.com/2017/11/24/staggering-variety-of-clandestine-trackers-found-in-popular-android-apps/

Huxley, A. L. (1932). Brave new world. Chatto & Windus.

Huxley, A. L. (2019). The ultimate revolution [Speech transcript]. Christopher Germann. https://christopher-germann.de/aldous-huxley-the-ultimate-revolution-berkeley-1962/ (Original work published 1962)

Issenberg, S. (2012). The victory lab: The secret science of winning campaigns. Random House.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111

Kravets, D. (2011, June 3). Judge: Google can be sued for wiretapping in Street View debacle. Wired. https://www.wired.com/2011/06/google-wiretap-breach/

Kreiss, D., & Howard, P. (2010). New challenges to political privacy: Lessons from the first U.S. presidential race in the web 2.0 era. International Journal of Communication, 4(19), 1032–1050. https://ijoc.org/index.php/ijoc/article/viewFile/870/473

Lee, J. 8. (2002, November 28). Postcards from planet Google. The New York Times. https://www.nytimes.com/2002/11/28/technology/postcards-from-planet-google.html

Levin, S. (2017, May 1). Facebook told advertisers it can identify teens feeling "insecure" and "worthless." The Guardian. https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens

Lin, S. (2014, September 4). Making of maps: The cornerstones. Google Blog. https://maps.googleblog.com/2014/09/making-of-maps-cornerstones.html

Manthorpe, R. (2018, April 2). Google's nemesis: Meet the British couple who took on a giant, won... and cost it £2.1 billion. Wired. https://www.wired.co.uk/article/fine-google-competition-eu-shivaun-adam-raff

McStay, A. (2020). Emotional AI, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy. Big Data & Society, 7(1), 1–12. https://doi.org/10.1177/2053951720904386

Mohsin, M. (2020, December 17). 10 Google search statistics you need to know in 2021. Oberlo. https://www.oberlo.com/blog/google-search-statistics

Murray, N. (2003). Aldous Huxley. Time Warner Books.

O'Brien, K. J. (2010, May 16). Google's data collection angers European officials. The New York Times. https://www.nytimes.com/2010/05/16/technology/16google.html

Only rich can afford PAs, but anyone can have Google now: Hal Varian, chief economist, Google. (2013, December 15). The Economic Times.

Our approach to search. (n.d.). Google. Retrieved November 2, 2020, from https://www.google.com/search/howsearchworks/mission/

Ovum report: The future of e-commerce – the road to 2026. (2017). Criteo. https://www.criteo.com/es/wp-content/uploads/sites/8/2017/09/ovum-the-future-of-e-commerce-the-road-to-2026.pdf

Pridmore, J., & Mols, A. (2020). Personal choices and situated data: Privacy negotiations and the acceptance of household intelligent personal assistants. Big Data & Society, 7(1), 1–12. https://doi.org/10.1177/2053951719891748

Pulliam, S. (2000, November 28). The new dot-com mantra: "Just pay me in cash, please." The Wall Street Journal. https://www.wsj.com/articles/SB975360847577543818

Puzzanghera, J. (2011, September 22). Eric Schmidt defends Google in Senate antitrust hearing. Los Angeles Times. https://www.latimes.com/business/la-xpm-2011-sep-22-la-fi-google-antitrust-20110922-story.html

Rosemain, M. (2020, December 10). French watchdog fines Google, Amazon for breaching cookies rules. Reuters. https://www.reuters.com/article/google-privacy-france/french-watchdog-fines-google-amazon-for-breaching-cookies-rules-idUKKBN28K0NG

Rushe, D., & Paul, K. (2020, October 21). US justice department sues Google over accusation of illegal monopoly. The Guardian. https://www.theguardian.com/technology/2020/oct/20/us-justice-department-antitrust-lawsuit-against-google

Smith, D. (2015, January 25). Google chairman: 'The internet will disappear'. Business Insider. https://www.businessinsider.com/google-chief-eric-schmidt-the-internet-will-disappear-2015-1?r=US&IR=T

Sullivan, D. (2020, May 20). A reintroduction to our knowledge graph and knowledge panels. Google Blog. https://blog.google/products/search/about-knowledge-graph-and-knowledge-panels/

Swisher, K. (2000, December 19). Dot-com bubble has burst; will things worsen in 2001? The Wall Street Journal. https://www.wsj.com/articles/SB97709118336535099

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth and happiness (1st ed.). Penguin.

What is the GDPR? Overview: General data protection regulation (GDPR) in the EU. (n.d.). Cookiebot. Retrieved October 6, 2020, from https://www.cookiebot.com/en/gdpr/

Wolfe, J. (2017, July 28). Roomba vacuum maker iRobot betting big on the "smart" home. Reuters. https://www.reuters.com/article/us-irobot-strategy-idUSKBN1A91A5

Yeung, K. (2017). 'Hypernudge': Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118x.2016.1186713

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.