In its early years, a key driver of Facebook’s approach to growth was to “move fast and break things” in pursuit of “domination.” This growth strategy has been reflected in Facebook’s approach to protecting users’ personal data, which may have violated a 2011 FTC consent decree. One such potential violation was the 2014 release of personal data on roughly 87 million Facebook users to Cambridge Analytica (CA), a political consulting firm working for several Republican candidates, including Donald Trump in his successful 2016 presidential campaign (for a bitingly humorous take on Facebook’s repeated privacy lapses, see this May 2, 2018 video segment from Full Frontal with Samantha Bee).
In the wake of revelations about Facebook’s (and other platforms’) role in the 2016 election, a growing number of people, apparently including Mark Zuckerberg, believe it’s time to update the company’s slogan and strategy to reflect the more socially conscious goals expressed in Zuckerberg’s February 16, 2017 Facebook post and subsequent statements.
To help put recent events and future possibilities into context, it’s worth reviewing Facebook’s history, especially the period following its historic IPO in 2012, roughly a year after the 2011 FTC consent decree was signed.
Going public & selling mobile ads: 2012 as a turning point
In addition to being the year Facebook stock began trading on public markets, 2012 was also the year the company began selling mobile advertising, at a time when smartphone penetration and functionality were expanding dramatically, but were still in relatively early stages of development.
In an Oct. 12, 2017 article in the New York Times, Cade Metz provided an overview of how Facebook’s mobile advertising system evolved:
As more people have moved more of their online activity from PCs to mobile phones, the lines between ads and organic content have continued to blur, particularly on popular social networking services like Instagram, Twitter and Facebook…Indeed, ads are often the same as organic content, just with money behind them.
The reason, Metz explains, is that:
Mobile phones offer less room on their screens for ads. Usually, there is only space for a single column of information, and that must accommodate both ads and other content. The result is that ads have moved into a more prominent position…
Responding to the screen limitations, Facebook…created a new ad system that made ads an integral part of the News Feed, which dominates the screen on mobile phones…Facebook allows businesses and other advertisers to serve pages straight into the News Feeds of people they had no other connection to, targeting their particular interests and behavior.
As these pages appear, people can comment on them and “like” them, just as they can with anything else that shows up in their feeds. And if people click the like button, these pages will continue to show up in their feeds — and the feeds of their Facebook “friends” — for free…
On Facebook, people describe themselves and leave all sorts of digital bread crumbs that show their interests. Then Facebook matches these with other data it collects.
Facebook’s ad system provides ways to target geographic locations, personal interests, characteristics and behavior, including activity on other internet services and even in physical stores. Advertisers can target people based on their political affiliation; how likely they are to engage with political content; whether they like to jog, hike or hunt; what kind of beer they like; and so on.
If advertisers provide a list of email addresses, Facebook can try to target the people those addresses belong to. It can also do what is called “look-alike matching.” In this case, Facebook’s algorithms serve ads to people believed to be similar to the people those addresses belong to.
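To make the two mechanisms Metz describes concrete, here is a minimal sketch in Python. All the names, data, and the similarity measure are my own invention for illustration; this is not Facebook’s actual algorithm, just one simple way the two steps (direct email matching, then “look-alike” matching against the seed audience’s average profile) could work:

```python
# Hypothetical sketch of "custom audience" matching and look-alike targeting.
# All names and data are invented; this is NOT Facebook's actual algorithm.
from math import sqrt

# Each known user: email -> interest vector (e.g., jogging, hiking, craft beer)
users = {
    "a@example.com": [1.0, 0.0, 1.0],
    "b@example.com": [0.9, 0.1, 0.8],
    "c@example.com": [0.0, 1.0, 0.0],
    "d@example.com": [0.8, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two interest vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = sqrt(sum(x * x for x in u)) * sqrt(sum(x * x for x in v))
    return dot / norm if norm else 0.0

def custom_audience(uploaded_emails):
    """Direct match: target users whose emails appear on the advertiser's list."""
    return [e for e in uploaded_emails if e in users]

def lookalike_audience(seed_emails, threshold=0.9):
    """Rank non-seed users by similarity to the seed audience's average profile."""
    seed_vecs = [users[e] for e in seed_emails]
    centroid = [sum(col) / len(seed_vecs) for col in zip(*seed_vecs)]
    return [e for e, v in users.items()
            if e not in seed_emails and cosine(v, centroid) >= threshold]

seed = custom_audience(["a@example.com", "b@example.com", "x@example.com"])
lookalikes = lookalike_audience(seed)
```

Here the unknown address (`x@example.com`) is simply dropped, and only users whose interest profiles closely resemble the seed audience end up in the look-alike pool — which is the essence of what makes this kind of targeting both effective and opaque to the people being targeted.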
Based on Facebook’s financial performance since 2012, it seems pretty clear that its mobile ad strategy has been a dramatic success from a shareholder’s perspective.
In the roughly six years since it went public, the company’s revenues and profits have been on a steep upward trajectory, driven mainly by mobile ad revenues, which totaled $11 billion in 4Q17 and accounted for 88% of the quarter’s total revenue.
In 2012, Facebook’s annual revenue and net income were $5.1 billion and $53 million, respectively. After five years of selling mobile ads and doubling its base of active users to more than 2.1 billion, 2017 revenue had expanded eightfold to $40.7 billion, while net income had skyrocketed to $15.9 billion, a 300-fold increase from 2012 levels.
This dramatic growth in revenue and, even more so in net income, was the key driver of the fivefold increase in Facebook’s stock price from $38 on the day of its 2012 IPO to a high of $190 in early February 2018 (on March 30, roughly two weeks after the CA news broke, it closed below $160, before gradually recovering to $185 by May 10).
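The growth multiples cited in the last two paragraphs can be verified with a few lines of arithmetic (figures are the annual and per-share numbers quoted above, in USD):

```python
# Sanity check of the growth multiples cited above (figures in USD).
rev_2012, rev_2017 = 5.1e9, 40.7e9    # annual revenue
ni_2012, ni_2017 = 53e6, 15.9e9       # annual net income
ipo_price, feb_2018_high = 38, 190    # stock price per share

rev_multiple = rev_2017 / rev_2012        # ~8x ("eightfold")
ni_multiple = ni_2017 / ni_2012           # ~300x ("300-fold")
price_multiple = feb_2018_high / ipo_price  # 5x ("fivefold")
```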
Too busy driving growth to focus on privacy?
In 2011, the year before it went public, Facebook entered into a consent decree with the Federal Trade Commission (FTC) in which it agreed not to share its users’ data without their consent, with potential fines of up to $40,000 per violation.
In spite of this commitment by Facebook, CA was able to access a vast trove of user data in 2014, roughly three years after the consent decree was signed. Even more damning, says Roger McNamee, an early mentor to Zuckerberg, was the fact that Facebook staff worked closely with CA staff on the Trump campaign even after learning that CA had obtained the data without proper authorization, and after CA had told Facebook it had destroyed the data.
In a March 27, 2018 interview with NPR, NYU law professor Tim Wu, who was an FTC advisor when the consent decree was negotiated, said Facebook had “basically broken the promise they made to the country in 2011 and the promises they kept making to everybody through [Facebook’s] privacy settings,” which Wu suggested were largely ineffective. The period from 2012 to 2015, said Wu, was one where Facebook management was “obsessed with revenue generation.”
[T]he fact is that privacy – it’s like kryptonite to their business model. You know, they have to be able to promise their advertisers that they have the goods on everyone and they have the power to manipulate people. And so if they are also extremely tight on privacy, that tends to throw a wrench into the machine.
Conflicting definitions of “serving users”
The problem, explained Wu, is “actually a very fundamental one,” because Facebook is “always in the position of serving two masters.”
If its actual purpose was just trying to connect friends and family, and it didn’t have a secondary motive of trying to also prove to another set of people that it could gather as much data as possible and make it possible to manipulate or influence or persuade people, then it wouldn’t be a problem. For example, if they were a nonprofit, it wouldn’t be a problem…I think there’s a sort of intrinsic problem with having for-profit entities with this business model in this position of so much public trust because they’re always at the edge because their profitability depends on it.
In a NYT op-ed piece published shortly after the Cambridge Analytica story broke, Zeynep Tufekci echoed Wu’s point that Facebook’s basic business model is the problem:
A business model based on vast data surveillance and charging clients to opaquely target users based on this kind of extensive profiling will inevitably be misused. The real problem is that billions of dollars are being made at the expense of the health of our public sphere and our politics, and crucial decisions are being made unilaterally, and without recourse or accountability.
In Tufekci’s view, the Cambridge Analytica debacle was:
…an all-too-natural consequence of Facebook’s business model, which involves having people go to the site for social interaction, only to be quietly subjected to an enormous level of surveillance. The results of that surveillance are used to fuel a sophisticated and opaque system for narrowly targeting advertisements and other wares to Facebook’s users. Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.
Understanding & addressing social harms
In response to concerns related to this dynamic within Facebook and the digital platform sector as a whole, a number of tech industry veterans joined together in early 2018 to launch the Center for Humane Technology. The group’s co-founders include McNamee, Executive Director Tristan Harris, a former Design Ethicist at Google who has been described as the “closest thing Silicon Valley has to a conscience,” Chief Strategy Officer Aza Raskin, former head of user experience at Mozilla, and COO Randima Fernando, who from 2010 to 2017 was Executive Director of Mindful Schools.
As the “Problem” page on the Center’s web site explains:
Our society is being hijacked by technology. What began as a race to monetize our attention is now eroding the pillars of our society: mental health, democracy, social relationships, and our children…Facebook, Twitter, Instagram, Google have produced amazing products that have benefited the world enormously. But these companies are also caught in a zero-sum race for our finite attention, which they need to make money. Constantly forced to outperform their competitors, they must use increasingly persuasive techniques to keep us glued. They point AI-driven news feeds, content, and notifications at our minds, continually learning how to hook us more deeply—from our own behavior. Unfortunately, what’s best for capturing our attention isn’t best for our well-being...These are not neutral products. They are part of a system designed to addict us…
Phones, apps, and the web are so indispensable to our daily lives—a testament to the benefits they give us—that we’ve become a captive audience. With two billion people plugged into these devices, technology companies have inadvertently enabled a direct channel to manipulate entire societies with unprecedented precision.
Among the problems associated with today’s digital media, according to the Center’s web site, are the deterioration of mental health and self-esteem (with children being especially vulnerable) and declines in the quality of our social relationships and democracy. It also claims that today’s networked digital technology is “different from anything in the past, including TV, radio, and computers,” due to its use of artificial intelligence, its 24/7 influence on our lives via ever-present digital devices, and its unprecedented level of personalization and social control.
In some respects, reading the words of these increasingly concerned tech-industry experts reminds me of a 1962 Twilight Zone episode in which aliens from outer space arrive on earth with a book entitled “To Serve Man.” Initially the book’s title and the impressive technology wielded by their visitors from space lead most humans to believe the purpose of the visit is to be of service to mankind. Unfortunately, it’s only at the end of the episode, as humans are eagerly boarding the alien spaceships, that the world’s top cryptographers discover that To Serve Man is actually a cookbook.
Fortunately the existential threats we face as a human race today are less urgent than those faced by those boarding the alien spacecraft in this classic TV episode. Nevertheless these threats do exist, especially for the more vulnerable members of society. And so do important questions about how we can best utilize digital technology to help address these challenges in a timely and equitable manner.
In the rest of this series I’ll be discussing potential directions we can pursue to help minimize the harms and maximize the benefits of these powerful technologies. To get started, I’ll review a number of proposals for dealing with questions of ownership, control, benefits and harms related to the collection and use of the massive amounts of data that are generated by our use of digital technology.
Below is an outline, with links, to all the posts in this series. Unless otherwise noted, bolding in quotations is mine, added for emphasis.
- Creating a Future Where Technology Serves Neither Kings Nor Capital, But Humanity
- Connecting as citizens & humans, not just as users & consumers
- More (and more effective) democracy as part of the solution
- The tech sector can help lead the next phase in democracy’s evolution
- Democracy & Digital Platforms: A Match Made in Heaven or in Hell?
- The Facebook F-up as a wake-up call
- Where to look for solutions?
- Serving Users (to Advertisers to Benefit Shareholders)
- An IPO + mobile ads: 2012 as a turning point for Facebook
- Too busy driving growth to focus on privacy?
- Conflicting definitions of “serving users”
- Understanding & addressing social harms
- Data as Power: Approaches to Righting the Balance
- Our data is tracked & locked in a “black box” we don’t control or understand
- The EU tightens privacy protections amid mixed signals in the U.S.
- Platforms as “information fiduciaries”
- Reallocating power & benefits when users share their data
- Shifting from an “Attention Economy” to a more efficient “Intention Economy”
- Who owns and controls the data used to develop AI?
- Data as labor that should be financially compensated
- Data as an infrastructural public good
- A “data tax” that generates a “data dividend” we all share
- Data portability as means to enhance competition & consumer choice
- The Power of Dominant Platforms: It’s Not Just About “Bigness”
- New forms of concentrated power call for new remedies
- Platforms wield transmission, gatekeeping & scoring power
- Antitrust needs an updated framework to address platform power
- Creating a civic infrastructure of checks & balances for the digital economy
- Democracy & Corporate Governance: Challenging the Divine Right of Capital
- A “generative” or “extractive” business model?
- Dethroning kings & capital
- Moving beyond capitalism’s aristocratic form
- Embracing economic democracy as a next-step Enlightenment
- Platform Cooperativism: Acknowledging the Rights of “Produsers”
- Reclaiming the Internet’s sharing & democratizing potential
- Scaling a platform co-op: easier said than done
- The #BuyTwitter campaign as a call for change
- Encouraging the wisdom of crowds or the fears of mobs?
- Interactions Between Political & Platform Systems
- Feedback loops reinforce strengths & weaknesses, benefits & harms
- Facebook’s role in the election as an example
- If we don’t fix government, can government help fix Facebook?
- A Purpose-Built Platform to Strengthen Democracy
- Is Zuck’s lofty vision compatible with Facebook’s business model?
- Designed to bolster democracy, not shareholder returns
- Democratic Oversight of Platform Management by “Produsers”
- Facebook, community and democracy
- Is Facebook a community or a dictatorship?
- Giving users a vote in Facebook’s governance
- Technology can help users participate in FB governance
- Evolving from corporate dictatorship toward digital democracy