Expanding Democratic Governance in the Digital Anthropocene

As I explain on the bio page of this blog, I believe:

[T]he human race is, in fact, in a race. Not a race against other humans (though it sometimes seems that way), but rather a race against time, and between two tendencies we all share…One tendency is grounded in fear and can manifest, among other things, as hatred, prejudice, greed, addiction and violence…A countervailing tendency is grounded in empathy, compassion and a sense of deep and satisfying connection to oneself, to others, and to the natural world.

The digital anthropocene: a pivotal & high-risk phase of human history

The reason I describe this dynamic as a race is that the accelerating pace of technology development has pushed our planet’s life support systems into a pivotal stage that I and others refer to as the digital anthropocene. In a post on the blog of Michigan State University’s Quello Center, I described key characteristics of this stage in human history as I see it, including:

  1. substantial (and currently destructive) impacts of human activities on natural systems, a planetary phase referred to as the Anthropocene;
  2. continued and arguably mounting evidence that the status-quo dynamics within our dominant political and economic systems are aggravating rather than reducing inequalities, conflict, degradation of environmental and democratic self-governance systems, and potentially avoidable human suffering;
  3. the dramatic expansion in scope, content and functionality of digitally-mediated connectivity among humans and “things” via ever-more-capable information and communication technology (ICT).

When considering our future, it doesn’t seem too big a leap in logic and imagination to argue that, if humanity’s fear-driven tendencies predominate, the digital anthropocene will manifest self-reinforcing cycles of dystopian dynamics along roughly the following lines:

1) The financial extraction engines of corporate capitalism, the corruption of political democracy, the weakening of social safety nets, and environmental degradation and disruption will combine to increase inequality, social stress, violence and human suffering, especially among those at the bottom of the economic pyramid.

2) The increasing capabilities of digital technologies will be used by those who have and/or seek power to aggravate and manipulate human fears, weaknesses and tribalistic tendencies. This will both reflect and increase social stress and proliferation of win-lose and increasingly violent and environmentally destructive “solutions” to social problems. The power of digital media will be used to justify and promote these “solutions” by reinforcing belief systems upon which they are based (e.g., fascism, nativism, racism, nihilism, religiosity that embraces violence and/or rejects science).

3) In this increasingly threatening environment, social, political and economic institutions will use digital technology to defend their power in ways that further suppress freedom, erode trust, aggravate social tensions, and bolster rather than constrain the appeal of nihilistic and/or fascistic messages and movements.

4) The manifestation of digital technology’s immense constructive potential—including its ability to support broad access to higher levels of health, education, prosperity, empathy, democracy and peaceful coexistence (among individuals, social groups, nations and with the natural environment)—will be stifled and distorted in the increasingly stress- and antagonism-filled environment characterized by the above dynamics.

In short, a fear-dominated digital anthropocene will be characterized by mutually-reinforcing pressures toward increased inequality, oppression, violence, suffering and ecological decline. Not a pretty picture…and, in my view, one we have the responsibility and capacity to avoid.

Empathy + technology: a powerful recipe for shared prosperity & peace

My hope is that this series of blog posts can make some small contribution to efforts seeking to avoid the above scenario by harnessing digital technology to humankind’s more benign and enlightened tendencies in ways that create feedback cycles that strengthen these tendencies. As I see it, a key ingredient for success in such efforts is to leverage and harmonize two powerful sets of human capabilities: 1) to empathize, communicate and cooperate with each other; and 2) to develop, employ and improve technology to: a) better understand our world and the challenges we face living in it; and b) design and implement tools, institutions, strategies and systems that improve our ability to address these challenges.

In this more peaceful, cooperative and empathy-rich version of the digital anthropocene, the evolution of digital technology is harnessed to support the design and implementation of win-win strategies leading to a more equitably prosperous, peaceful and sustainable human society; one characterized by expanded access to education, healthcare, economic opportunity, civic participation and communication capabilities that support the expanded scope of empathetic consciousness described by Jeremy Rifkin in his book The Empathic Civilization: The Race to Global Consciousness in a World in Crisis.

As suggested by his February 16, 2017 Facebook post, Mark Zuckerberg seems optimistic that this more positive version of the digital anthropocene is possible and even likely:

As we’ve made our great leaps from tribes to cities to nations, we have always had to build social infrastructure like communities, media and governments for us to thrive and reach the next level. At each step we learned how to come together to solve our challenges and accomplish greater things than we could alone. We have done it before and we will do it again.

Not surprisingly, Zuckerberg sees Facebook playing a key role in this next “great leap” in “learn[ing] how to come together to solve our challenges and accomplish greater things than we could alone.”

[T]he most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us…for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.

More (and more effective) democracy as part of the solution

While I share much of Zuckerberg’s optimism, I believe his vision is more likely to become a reality if we pursue the kind of democratizing political and economic governance reforms discussed in this series, which includes the following posts:

  • Democracy & Digital Platforms: A Match Made in Heaven or in Hell? introduces the core thesis of this series, that: 1) the recent revelations about Facebook, Cambridge Analytica and the 2016 election reflect deficiencies in democratic functionality in both our “public” and digital platform sectors, as well as strong two-way feedback loops between them; and 2) we should treat these events as a wake-up call demanding democracy-enhancing changes in both of these sectors.
  • Serving Users (to Advertisers to Benefit Shareholders) examines key characteristics, risks and problematic impacts associated with micro-targeted advertising-based digital-economy business models, focusing specifically on Facebook’s evolution since it went public and began selling mobile ads in 2012.
  • Data as Power: Approaches to Righting the Balance considers a range of strategies for achieving a healthier balance of power and benefits related to the collection and use of data generated by and about citizens. This post recommends a collaborative effort involving policymakers, tech industry leaders, independent technology experts and other stakeholders to evaluate the strengths, weaknesses and compatibility of these data policy options, with the goal of developing a broad consensus on what combination of them is most likely to achieve this healthy rebalancing.
  • Interactions Between Political and Platform Systems considers the two-way interactions between digital platforms and public sector governance systems, using Facebook’s role in the 2016 U.S. election and that election’s aftermath as an example illustrating the nature of this interactive relationship, including its potential to support both destructive and constructive feedback loops.
  • A Purpose-Built Platform to Help Repair & Strengthen Democracy recommends that the dominant platform companies contribute financial and in-kind support toward (but do not control) the creation of an online platform whose sole purpose is to use digital technology to mobilize civic participation in ways that increase the ability of political democracy to reflect the needs and desires of its citizens, especially regarding broad and inclusive access to the resources and opportunities that support a safe and fulfilling life and reduce social tensions and violence.
  • Democratic Oversight of Platform Management by “Produsers” recommends that platforms apply their technical expertise to an expansion of democratic functionality in their internal oversight and management systems by providing their users with voting rights in the election of board members and key decisions about company policies that impact these users. That post’s advocacy of increased democratic oversight of platforms by their users is informed by several others in the series.

The tech sector can help lead the next phase in democracy’s evolution

Though intended for a range of audiences interested in the issues it considers, a key purpose of this series of posts is to encourage a more consistent and ambitious embrace of democracy by Facebook’s Mark Zuckerberg and his counterparts at Google, Amazon and other entities in the platform and digital tech sectors.

As discussed in a later post, several commentators have highlighted an apparent contradiction—or at least a perceptual blind spot—in Zuckerberg’s attitude toward democracy. On one hand he seems eager to support the expansion of democracy in what we consider “public” or “political” arenas of collective self-governance.  On the other hand, these critics point out, Facebook operates largely as a dictatorship, with one “ruler” (Zuckerberg) and more than 2 billion “subjects” whose lives are impacted—often for their good, but sometimes in harmful ways—by the decisions he makes about Facebook’s operations.

With this apparent inconsistency/blind spot in mind—and with genuine respect for their impressive talents, resources and accomplishments—my invitation to Zuckerberg and his tech industry peers is to recognize that their users deserve the voting rights, freedoms and protections of citizens not only in the collective governance systems we call political; they also deserve these rights as users and contributors of value to the massive global online platforms that, to use FDR’s words (which echo Zuckerberg’s own), are “a kind of private government which is a power unto itself.”

I agree with Zuckerberg that “a global community that works for all of us” is not only needed, but can be achieved. And, as this series of posts explains in more detail, I believe Facebook and its platform peers can most effectively help create that community by taking the following steps:

1) grant their users a fuller set of rights and protections as “citizen contributors,” a.k.a. “produsers”;

2) apply their world-class technical and financial resources to the creation of efficient, engaging and effective democratic governance tools; and

3) apply these tools to their internal corporate management while also supporting their use to achieve healthier functionality in the public sphere of self-governance (e.g., increasing trust and participation among a better-informed population, and reducing corruption, deception, manipulation and other abuses of power).

If the creation of such a community is something you’d like to see, and perhaps are working toward in some capacity, I invite you to read the other posts in this series and share your own thoughts about the topics they address and the suggestions they offer.

********

Below is an outline, with links, to all the posts in this series. Unless otherwise noted, bolding in quotations is mine, added for emphasis.

  • Expanding Democratic Governance in the Digital Anthropocene
    • The digital anthropocene: a pivotal & high-risk phase of human history
    • Empathy + technology: a powerful recipe for shared prosperity & peace
    • More (and more effective) democracy as part of the solution
    • The tech sector can help lead the next phase in democracy’s evolution
  • Democracy & Digital Platforms: A Match Made in Heaven or in Hell?
    • The Facebook F-up as a wake-up call
    • Where to look for solutions?
  • Serving Users (to Advertisers to Benefit Shareholders)
    • An IPO + mobile ads: 2012 as a turning point for Facebook
    • Too busy driving growth to focus on privacy?
    • Serving users or serving users to advertisers?
    • Understanding & addressing social harms
  • Data as Power: Approaches to Righting the Balance
    • Our data is tracked & locked in a “black box” we don’t control or understand
    • The EU tightens privacy protections amid mixed signals in the U.S.
    • Platforms as “information fiduciaries”
    • Reallocating power & benefits when users share their data
    • Shifting from an “Attention Economy” to a more efficient “Intention Economy”
    • Who owns and controls the data used to develop AI?
    • Data as labor that should be financially compensated
    • Data as an infrastructural public good
    • A “data tax” that generates a “data dividend” we all share
    • Data portability as means to enhance competition & consumer choice
  • The Power of Dominant Platforms: It’s Not Just About “Bigness”
    • New forms of concentrated power call for new remedies
    • Platforms wield transmission, gatekeeping & scoring power
    • Antitrust needs an updated framework to address platform power
    • Creating a civic infrastructure of checks & balances for the digital economy
  • Democracy & Corporate Governance: Challenging the Divine Right of Capital
    • A “generative” or “extractive” business model?
    • Dethroning kings & capital 
    • Moving beyond capitalism’s aristocratic form
    • Embracing economic democracy as a next-step Enlightenment
  • Platform Cooperativism: Acknowledging the Rights of “Produsers”
    • Reclaiming the Internet’s sharing & democratizing potential
    • Scaling a platform co-op: easier said than done
    • The #BuyTwitter campaign as a call for change
    • Encouraging the wisdom of crowds or the fears of mobs?
  • Interactions Between Political & Platform Systems
    • Feedback loops reinforce strengths & weaknesses, benefits & harms
    • Facebook’s role in the election as an example
    • If we don’t fix government, can government help fix Facebook?
  • A Purpose-Built Platform to Strengthen Democracy
    • Is Zuck’s lofty vision compatible with Facebook’s business model?
    • Designed to bolster democracy, not shareholder returns
  • Democratic Oversight of Platform Management by “Produsers”
    • Facebook, community and democracy
    • Is Facebook a community or a dictatorship?
    • Giving users a vote in Facebook’s governance
    • Technology can help users participate in FB governance
    • Evolving from corporate dictatorship toward digital democracy

Democracy & Digital Platforms: A Match Made in Heaven or in Hell?

In the wake of revelations about Facebook’s role in the 2016 U.S. election and the use of the platform to inflame social unrest in other countries, there seems to be a growing consensus that the evolution of digital platforms has reached a point where “move fast and break things” to achieve “domination” and “get right up to the creepy line and not cross it” are no longer suitable steering mechanisms for the giant digital platforms that have carved out a central, powerful and growing role in modern society.

What has yet to emerge, however, is a consensus about what guiding principles and governance systems should be applied to an industry that is extremely complex, dynamic, fast-growing, socially valuable, economically and politically powerful, and deeply entrenched in virtually every sector of modern life; and where all of these characteristics are likely to intensify in the future.

At the same time, there is growing evidence that the healthy functioning and future prospects of political democracy are under serious threat in the U.S. and around the world. This, in turn, aggravates the challenge of designing and executing government policies that help constrain harms arising from the ongoing evolution of digital technologies. These harms are then more readily manifested and, as the Cambridge Analytica and Russian-interference revelations illustrate, feed a further degradation of already fragile systems of democratic self-governance. The result is a dynamic of toxic synergy and destructive feedback loops that aggravate rather than alleviate the many challenges we face in this country and the world.

The core argument of this series of posts is that efforts to turn these destructive feedback loops into constructive ones are more likely to be effective if they include:

1) the application of democratic principles and functionality in the internal management of dominant digital platforms, coupled with a rebalancing of rights, power and benefits related to data, to better protect the welfare of platform users and reflect the value they contribute to these platforms through their usage;

2) leveraging the financial and technical resources of digital platforms and the broader tech sector to develop one or more online platforms specifically designed to improve the healthy function of political democracy.

The Facebook F-Up as a Wake-Up Call

As I write this, Facebook is facing what is likely the most intense crisis of confidence in its roughly 14 years of existence.  While obviously important for the company, this development also raises broad and increasingly urgent questions about the role of digital platforms in modern life, including their interactions with the principle and practice of democratic self-governance and the individual freedoms it relies upon and supports.

Facebook’s confidence crisis was triggered when it was discovered in 2017 that the world’s largest social network was used to distribute deeply dishonest and highly targeted negative advertising and other content during the 2016 campaign. Making matters worse was evidence that “fake news” had apparently been more widely distributed than actual news, and that some of the “fake news” and targeted negative ads came from a Moscow-based organization with ties to the Kremlin, as explained in a federal indictment issued in February 2018. This, in turn, contributed to events that led to a Special Counsel investigation into these Russian efforts to interfere with U.S. elections, including whether they involved cooperation from members of the Trump campaign. As one might expect, these developments have intensified the polarized acrimony and governmental dysfunction already deeply entrenched in the American political system.

Facebook’s crisis in public confidence grew even more severe when a whistleblower revealed that Cambridge Analytica (CA), a UK-based data analytics and political consulting firm that had worked for the Trump campaign, had gained access—unauthorized by the users—to personal data on 50 million Facebook users (a figure Facebook later updated to 87 million). In the days that followed, Christopher Wylie—the young pink-haired whistleblower who had been deeply involved in CA’s efforts (funded and overseen by Steve Bannon and the Mercer family) to obtain and politically weaponize this huge cache of Facebook data—could be seen on virtually every news show, and was called to testify before the UK Parliament. Making matters even worse for Facebook (and especially CA) was the release by the UK’s Channel 4 broadcast service of clandestinely recorded videos in which CA executives bragged about their role in the Trump campaign and others around the world, including how the company augmented its big-data targeting activities with more traditional campaign dirty tricks such as videotaped entrapments of opposing candidates with offers of financial or sexual favors.

These developments prompted an unprecedented wave of media and governmental scrutiny of Facebook’s business practices, including two days of congressional testimony by company founder and CEO Mark Zuckerberg. The CA revelations also triggered a Federal Trade Commission (FTC) investigation of whether Facebook had violated a 2011 FTC consent decree requiring it to better protect user data privacy. The decree provided for penalties of up to $40,000 per violation, a potentially massive amount given that the unauthorized release involved data on 87 million Facebook users.
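To get a sense of scale, here is a back-of-the-envelope calculation (illustrative only: it assumes one violation per affected user, which is not how the FTC would actually assess penalties):

```python
# Hypothetical upper bound only: treats each affected user as a single
# violation -- an illustrative assumption, not the FTC's actual method.
penalty_per_violation = 40_000      # dollars, per the 2011 consent decree
affected_users = 87_000_000         # Facebook's updated estimate

max_exposure = penalty_per_violation * affected_users
print(f"${max_exposure:,}")  # $3,480,000,000,000 -- on the order of $3.5 trillion
```

Even a tiny fraction of that theoretical ceiling would be a historically large penalty, which helps explain the intensity of the scrutiny that followed.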

These post-election revelations and the widespread sense of outrage they triggered are likely to intensify long-simmering concerns about how Facebook, other online platforms, the government and our increasingly digitized society are managing a range of issues, including:

  • the need for more effective privacy protections against unwanted surveillance by governments and private companies, and the use of personal data to misinform and/or manipulate online users as they engage economically as consumers and in civil society as citizens and voters;
  • the enormous and still growing market power of online platforms, and existing and potential abuses of that power in ways that reduce healthy competition and choice in our economy and weaken the overall health of society;
  • hostile foreign governments’ use of digital platforms like Facebook to further weaken our already financially corrupted and dysfunctional democracy, and their ability to undertake cyberattacks that threaten our power grid and other key infrastructure;
  • the use of our ever-present digital devices to hack our brains by maximizing our “engagement” and using it to generate data and algorithms that support ever-more-effective triggering of the addictive user behaviors that drive many of today’s digital-economy business models, while generating social harms we need to better understand and address;
  • the use of data collected by digital platforms and other entities to develop Artificial Intelligence (AI) systems, and the potential of these systems to cause a range of difficult-to-predict-and-control consequences, some on a massive scale of potentially harmful impact.

This mix of concerns is feeding a generalized unease that the optimistic vision of digital technology enhancing democracy and expanding freedom and opportunity for all may be replaced by a harsh reality characterized by further degradation of democratic systems and concentration and abuse of economic and political power.

Where to look for solutions?

Their recent actions suggest that Congress and the FTC are, at least temporarily, taking more seriously their responsibilities to help address these problems. But the former’s overall dysfunction and lack of technical expertise (as evidenced in recent hearings), and the latter’s failure to adequately enforce its consent decree until a whistleblower spoke out, leave me skeptical about relying heavily on them to develop, update and enforce policies that constrain the socially harmful tendencies of a tech sector that is extremely dynamic and increasingly central to the function of our social, economic and political systems.

At the other extreme from faith in government regulation as the preferred remedial strategy is the combination of techno-utopian vision and entrepreneurial hubris driving the self-regulation approach favored by many tech industry leaders. In my view, Zuckerberg manifests one version of this perspective, along with the blind spots that seem to accompany it. With two billion users and a CEO who also is the controlling shareholder, it’s not too big a stretch to argue that Facebook operates as the world’s largest dictatorship. Yet Zuckerberg seems to view the company as a potential savior of global democracy, public safety and civil society.  As he put it in a long February 16, 2017 Facebook post shortly after the 2016 presidential election:

“the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us…for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.”

Though revelations about Facebook’s privacy-related missteps over the years raise legitimate questions about its commitment to prioritize the lofty intentions reflected in this statement, I’m going to assume in this series of posts that Zuckerberg is sincere in expressing them. And that, regardless of the internal and external pressures he may feel that pull him and the company in other directions, he and his top management team would like their legacy to be Facebook’s success in achieving these admirable and important social goals.

With this in mind, in later posts I’m going to suggest some strategies that Facebook and other digital platforms can support (whether voluntarily or by government mandate) to help build the more inclusive, informed, safe and civically engaged global community envisioned in Zuckerberg’s post-election Facebook post. To help provide context for these suggestions, the next post in this series will provide a brief outline of Facebook’s history and the evolution of its business model.


Serving Users (to Advertisers to Benefit Shareholders)

In its early years, a key driver of Facebook’s approach to growth was to “move fast and break things” in pursuit of “domination.” This growth strategy has been reflected in Facebook’s approach to protecting users’ personal data, which may have violated a 2011 FTC consent decree. One such potential violation was the 2014 release of personal data on roughly 87 million Facebook users to Cambridge Analytica (CA), a political consulting firm working for several Republican candidates, including Donald Trump in his successful 2016 presidential campaign (for a bitingly humorous take on Facebook’s repeated privacy lapses, see this May 2, 2018 video segment from Full Frontal with Samantha Bee).

In the wake of revelations about Facebook’s (and other platforms’) role in the 2016 election, a growing number of people–apparently including Mark Zuckerberg–believe it’s time to update the company’s slogan and strategy to reflect the more socially conscious goals reflected in Zuckerberg’s February 16, 2017 Facebook post and subsequent statements.

To help put recent events and future possibilities into context, it’s worth reviewing Facebook’s history, especially the period following its historic IPO in 2012, roughly a year after the 2011 FTC consent decree was signed.

Going public & selling mobile ads: 2012 as a turning point

In addition to being the year Facebook stock began trading on public markets, 2012 was also the year the company began selling mobile advertising, at a time when smartphone penetration and functionality were expanding dramatically, but were still in relatively early stages of development.

In an Oct. 12, 2017 article in the New York Times, Cade Metz provided an overview of how Facebook’s mobile advertising system evolved:

As more people have moved more of their online activity from PCs to mobile phones, the lines between ads and organic content have continued to blur, particularly on popular social networking services like Instagram, Twitter and Facebook…Indeed, ads are often the same as organic content, just with money behind them.

The reason, Metz explains, is that:

Mobile phones offer less room on their screens for ads. Usually, there is only space for a single column of information, and that must accommodate both ads and other content. The result is that ads have moved into a more prominent position…

Responding to the screen limitations, Facebook…created a new ad system that made ads an integral part of the News Feed, which dominates the screen on mobile phones…Facebook allows businesses and other advertisers to serve pages straight into the News Feeds of people they had no other connection to, targeting their particular interests and behavior.

As these pages appear, people can comment on them and “like” them, just as they can with anything else that shows up in their feeds. And if people click the like button, these pages will continue to show up in their feeds — and the feeds of their Facebook “friends” — for free…

On Facebook, people describe themselves and leave all sorts of digital bread crumbs that show their interests. Then Facebook matches these with other data it collects.

Facebook’s ad system provides ways to target geographic locations, personal interests, characteristics and behavior, including activity on other internet services and even in physical stores. Advertisers can target people based on their political affiliation; how likely they are to engage with political content; whether they like to jog, hike or hunt; what kind of beer they like; and so on.

If advertisers provide a list of email addresses, Facebook can try to target the people those addresses belong to. It can also do what is called “look-alike matching.” In this case, Facebook’s algorithms serve ads to people believed to be similar to the people those addresses belong to.
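Facebook's actual matching algorithms are proprietary, but the general look-alike idea Metz describes can be sketched in a few lines: average a seed audience's interest profiles into a single "seed profile," then select users whose profiles are similar to it. Everything here (the names, the toy interest features, the similarity threshold) is invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike_audience(seed_users, all_users, threshold=0.8):
    """Return user IDs whose interest profile resembles the average seed profile."""
    n = len(seed_users[0])
    # Average the seed users' feature vectors into a single "seed profile".
    centroid = [sum(u[i] for u in seed_users) / len(seed_users) for i in range(n)]
    return [uid for uid, vec in all_users.items() if cosine(vec, centroid) >= threshold]

# Toy interest vectors: [jogging, hiking, craft beer, politics]
seeds = [[1, 1, 0, 1], [1, 0, 0, 1]]
others = {"amy": [1, 1, 0, 1], "bob": [0, 0, 1, 0]}
print(lookalike_audience(seeds, others))  # → ['amy']
```

A production system would work over vastly richer behavioral features and learned models, but the asymmetry is the same: the platform holds the profiles and the matching machinery; the user sees only the resulting ad.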

Based on Facebook’s financial performance since 2012, it seems pretty clear that its mobile ad strategy has been a dramatic success from a shareholder’s perspective.

In the roughly six years since it went public, the company’s revenues and profits have been on a steep upward trajectory, driven mainly by mobile ad revenues, which totaled $11 billion in 4Q17, a whopping 88% of the quarter’s total revenue.

In 2012, Facebook’s annual revenue and net income were $5.1 billion and $53 million, respectively. After five years of selling mobile ads and doubling its base of active users to more than 2.1 billion, 2017 revenue had expanded eightfold to $40.7 billion, while net income had skyrocketed to $15.9 billion, a 300-fold increase from 2012 levels.

This dramatic growth in revenue and, even more so, in net income was the key driver of the fivefold increase in Facebook’s stock price from $38 on the day of its 2012 IPO to a high of $190 in early February 2018 (on March 30, roughly two weeks after the Cambridge Analytica news broke, it closed below $160, before gradually recovering to $185 by May 10).
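The growth multiples cited above can be double-checked with quick arithmetic on the figures reported in this post:

```python
# Figures cited above (dollars in billions, from Facebook's reported results)
rev_2012, rev_2017 = 5.1, 40.7     # annual revenue
ni_2012, ni_2017 = 0.053, 15.9     # annual net income
ipo_price, feb_2018_high = 38, 190  # stock price, per share

print(round(rev_2017 / rev_2012))   # ~8x revenue growth
print(round(ni_2017 / ni_2012))     # ~300x net income growth
print(feb_2018_high / ipo_price)    # 5.0x stock price increase
```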

Too busy driving growth to focus on privacy?

In 2011, the year before it went public, Facebook entered into a consent decree with the Federal Trade Commission (FTC) in which it agreed not to share its users’ data without their consent, and under which it could be subject to fines of up to $40,000 per violation.

In spite of this commitment by Facebook, CA was able to access a vast trove of user data in 2014, roughly three years after the consent decree was signed. Even more damning, says Roger McNamee, an early mentor to Zuckerberg, was the fact that Facebook staff had worked closely with CA staff on the Trump campaign even after learning that CA had obtained the data without proper authorization, and after being told by CA that it had destroyed the data.

In a March 27, 2018 interview with NPR, NYU law professor Tim Wu, who was an FTC advisor when the consent decree was negotiated, said Facebook had “basically broken the promise they made to the country in 2011 and the promises they kept making to everybody through [Facebook’s] privacy settings,” which Wu suggested were largely ineffective. The period from 2012 to 2015, said Wu, was one where Facebook management was “obsessed with revenue generation.”

[T]he fact is that privacy – it’s like kryptonite to their business model. You know, they have to be able to promise their advertisers that they have the goods on everyone and they have the power to manipulate people. And so if they are also extremely tight on privacy, that tends to throw a wrench into the machine.

Serving users or serving users to advertisers?

The problem, explained Wu, is “actually a very fundamental one,” because Facebook is “always in the position of serving two masters.”

If its actual purpose was just trying to connect friends and family, and it didn’t have a secondary motive of trying to also prove to another set of people that it could gather as much data as possible and make it possible to manipulate or influence or persuade people, then it wouldn’t be a problem. For example, if they were a nonprofit, it wouldn’t be a problem…I think there’s a sort of intrinsic problem with having for-profit entities with this business model in this position of so much public trust because they’re always at the edge because their profitability depends on it.

In a NYT op-ed piece published shortly after the Cambridge Analytica story broke, Zeynep Tufekci echoed Wu’s point that Facebook’s basic business model is the problem:

A business model based on vast data surveillance and charging clients to opaquely target users based on this kind of extensive profiling will inevitably be misused. The real problem is that billions of dollars are being made at the expense of the health of our public sphere and our politics, and crucial decisions are being made unilaterally, and without recourse or accountability.

In Tufekci’s view, the Cambridge Analytica debacle was:

…an all-too-natural consequence of Facebook’s business model, which involves having people go to the site for social interaction, only to be quietly subjected to an enormous level of surveillance. The results of that surveillance are used to fuel a sophisticated and opaque system for narrowly targeting advertisements and other wares to Facebook’s users. Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.

Understanding & addressing social harms

In response to concerns related to this dynamic within Facebook and the digital platform sector as a whole, a number of tech industry veterans joined together in early 2018 to launch the Center for Humane Technology. The group’s co-founders include McNamee; Executive Director Tristan Harris, a former Design Ethicist at Google who has been described as the “closest thing Silicon Valley has to a conscience”; Chief Strategy Officer Aza Raskin, former head of user experience at Mozilla; and COO Randima Fernando, who from 2010 to 2017 was Executive Director of Mindful Schools.

As the “Problem” page on the Center’s web site explains:

Our society is being hijacked by technology. What began as a race to monetize our attention is now eroding the pillars of our society: mental health, democracy, social relationships, and our children…Facebook, Twitter, Instagram, Google have produced amazing products that have benefited the world enormously. But these companies are also caught in a zero-sum race for our finite attention, which they need to make money. Constantly forced to outperform their competitors, they must use increasingly persuasive techniques to keep us glued. They point AI-driven news feeds, content, and notifications at our minds, continually learning how to hook us more deeply—from our own behavior. Unfortunately, what’s best for capturing our attention isn’t best for our well-being...These are not neutral products. They are part of a system designed to addict us…

Phones, apps, and the web are so indispensable to our daily lives—a testament to the benefits they give us—that we’ve become a captive audience. With two billion people plugged into these devices, technology companies have inadvertently enabled a direct channel to manipulate entire societies with unprecedented precision.

Among the problems associated with today’s digital media, according to the Center’s web site, are the deterioration of mental health and self-esteem (with children being especially vulnerable) and the erosion of our social relationships and democracy. It also claims that today’s networked digital technology is “different from anything in the past, including TV, radio, and computers,” due to its use of artificial intelligence, its 24/7 influence on our lives via ever-present digital devices, and its unprecedented level of personalization and social control.

In some respects, reading the words of these increasingly concerned tech-industry experts reminds me of a 1962 Twilight Zone episode in which aliens from outer space arrive on earth with a book entitled “To Serve Man.” Initially, the book’s title and the impressive technology wielded by the visitors from space lead most humans to believe the purpose of the visit is to be of service to mankind. Unfortunately, it’s only at the end of the episode, as humans are eagerly boarding the alien spaceships, that the world’s top cryptographers discover that To Serve Man is actually a cookbook.

Fortunately, the existential threats we face as a human race today are less urgent than those faced by the passengers boarding the alien spacecraft in this classic TV episode. Nevertheless, these threats do exist, especially for the more vulnerable members of society. And so do important questions about how we can best use (or perhaps NOT use) digital technology to help address these challenges in a timely and equitable manner.

In the rest of this series I’ll be discussing potential directions we can pursue as a society to help minimize the harms and maximize the benefits of these powerful technologies.  To get started I’ll review a number of proposals for dealing with questions of ownership, control, benefits and harms related to the collection and use of the massive amounts of data we generate every day via our interactions with networked digital technologies.

********

Below is an outline, with links, to all the posts in this series. Unless otherwise noted, bolding in quotations is mine, added for emphasis.

  • Expanding Democratic Governance in the Digital Anthropocene
    • The digital anthropocene: a pivotal & high-risk phase of human history
    • Empathy + technology: a powerful recipe for shared prosperity & peace
    • More (and more effective) democracy as part of the solution
    • The tech sector can help lead the next phase in democracy’s evolution
  • Democracy & Digital Platforms: A Match Made in Heaven or in Hell?
    • The Facebook F-up as a wake-up call
    • Where to look for solutions?
  • Serving Users (to Advertisers to Benefit Shareholders)
    • An IPO + mobile ads: 2012 as a turning point for Facebook
    • Too busy driving growth to focus on privacy?
    • Serving users or serving users to advertisers?
    • Understanding & addressing social harms
  • Data as Power: Approaches to Righting the Balance
    • Our data is tracked & locked in a “black box” we don’t control or understand
    • The EU tightens privacy protections amid mixed signals in the U.S.
    • Platforms as “information fiduciaries”
    • Reallocating power & benefits when users share their data
    • Shifting from an “Attention Economy” to a more efficient “Intention Economy”
    • Who owns and controls the data used to develop AI?
    • Data as labor that should be financially compensated
    • Data as an infrastructural public good
    • A “data tax” that generates a “data dividend” we all share
    • Data portability as means to enhance competition & consumer choice
  • The Power of Dominant Platforms: It’s Not Just About “Bigness”
    • New forms of concentrated power call for new remedies
    • Platforms wield transmission, gatekeeping & scoring power
    • Antitrust needs an updated framework to address platform power
    • Creating a civic infrastructure of checks & balances for the digital economy
  • Democracy & Corporate Governance: Challenging the Divine Right of Capital
    • A “generative” or “extractive” business model?
    • Dethroning kings & capital 
    • Moving beyond capitalism’s aristocratic form
    • Embracing economic democracy as a next-step Enlightenment
  • Platform Cooperativism: Acknowledging the Rights of “Produsers”
    • Reclaiming the Internet’s sharing & democratizing potential
    • Scaling a platform co-op: easier said than done
    • The #BuyTwitter campaign as a call for change
    • Encouraging the wisdom of crowds or the fears of mobs?
  • Interactions Between Political & Platform Systems
    • Feedback loops reinforce strengths & weaknesses, benefits & harms
    • Facebook’s role in the election as an example
    • If we don’t fix government, can government help fix Facebook?
  • A Purpose-Built Platform to Strengthen Democracy
    • Is Zuck’s lofty vision compatible with Facebook’s business model?
    • Designed to bolster democracy, not shareholder returns
  • Democratic Oversight of Platform Management by “Produsers”
    • Facebook, community and democracy
    • Is Facebook a community or a dictatorship?
    • Giving users a vote in Facebook’s governance
    • Technology can help users participate in FB governance
    • Evolving from corporate dictatorship toward digital democracy

Data as Power: Approaches to Righting the Balance

Not surprisingly, revelations about the unauthorized release of personal data on 87 million Facebook users to a firm that used the data to design and target manipulative political messages have triggered an upsurge in concern about the adequacy of data privacy at Facebook and, more generally, in the digital economy as a whole. In that one respect this disturbing episode may prove to be a blessing, by focusing more sustained public attention on privacy and other issues tied to the data practices of companies (and governments) operating in our increasingly connected and digitally monitored society.

Our data is tracked & locked in a “black box” we don’t control or understand

To a large extent, concerns about privacy reflect a more generalized sense of vulnerability and asymmetry of power in the digital age, as we increasingly rely on and share personal data with giant corporations that, as University of Maryland law professor Frank Pasquale puts it, operate as “black boxes.” In his book, The Black Box Society: The Secret Algorithms That Control Money and Information, Pasquale explains how “[i]mportant corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we— and they— make.”

Pasquale explains how the term “black box” is helpful in understanding and addressing the nature of this asymmetry in transparency and information-based power:

The term “black box” is a useful metaphor,…given its own dual meaning. It can refer to a recording device, like the data-monitoring systems in planes, trains, and cars. Or it can mean a system whose workings are mysterious; we can observe its inputs and outputs, but we cannot tell how one becomes the other. We face these two meanings daily: tracked ever more closely by firms and government, we have no clear idea of just how far much of this information can travel, how it is used, or its consequences…

The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of person. That incongruity is the focus of this book. How has secrecy become so important to industries ranging from Wall Street to Silicon Valley? What are the social implications of the invisible practices that hide the way people and businesses are labeled and treated? How can the law be used to enact the best possible balance between privacy and openness?

Issues related to data privacy are complex from a legal and technical perspective, and I won’t attempt to discuss them in detail in this post. Instead, I’m going to briefly summarize a number of approaches intended to strike a healthier balance between data privacy and openness and between citizens and companies.

The EU tightens privacy protections amidst mixed signals in the U.S. 

As it turns out, the Facebook/Cambridge Analytica revelations occurred as the European Union was in the final stages of preparing for the May 25, 2018 implementation of its General Data Protection Regulation (GDPR), a new and aggressive set of data privacy-related rules.

In an April 1, 2018 New York Times op-ed piece, former FCC chair Tom Wheeler praised the GDPR as “powerful in its simplicity,” contrasting it with the approach taken in the U.S. under the Trump Administration.

[The GDPR] ensures that consumers own their private information and thus have the right to control its usage and that internet companies have an obligation to give consumers the tools to exercise that control.

The European rules, for instance, require companies to provide a plain-language description of their information-gathering practices, including how the data is used, as well as have users explicitly “opt in” to having their information collected. The rules also give consumers the right to see what information about them is being held, and the ability to have that information erased.

Wheeler’s praise for the GDPR contrasts with his view of privacy protection in the U.S. As he explained, in 2017, the last year of his FCC term, Congress repealed new privacy rules established in 2016 by the FCC. The repeal occurred quietly and on a party-line vote, at a time when the media and public attention were focused on Republicans’ attempt to repeal Obamacare and distracted by the daily flow of presidential tweets and “Russiagate” revelations. As Wheeler notes, lobbying for repeal came not only from access providers like Comcast and AT&T, which were directly affected by the FCC rules; it also came from web giants like Facebook and Google, which were not directly impacted by the rules, but whose business models are heavily dependent on advertising revenues and AI capabilities built on a foundation of user data.

Wheeler also pointed to the weakness of Federal Trade Commission (FTC) oversight of privacy issues, which currently applies to companies like Facebook and Google. As he explained, these rules “merely require internet companies to have a privacy policy available for consumers to see,” and “a company can change that policy whenever it wants as long as it says it is doing so.”

Wheeler’s view of the FTC as a relatively toothless platform regulator may soon be tested as the agency (which, after a long wait, finally has a full staff of five Commissioners) moves forward with its investigation of the release of Facebook user data to Cambridge Analytica.

In a post on the Harvard Law Review blog, David Vladeck, who headed up the Commission’s Bureau of Consumer Protection when the decree was negotiated, and is now faculty director of Georgetown Law’s Center on Privacy and Technology, suggests the investigation will find violations of the agreement. In an op-ed published at Jurist.org, Chris Hoofnagle, adjunct professor in the Berkeley School of Law and School of Information, and author of Federal Trade Commission Privacy Law and Policy, sounded less convinced that clear violations will be proven. But Hoofnagle does view the FTC investigation as “likely to uncover new, unrelated wrongdoing that will give Facebook strong incentives to agree to broader terms and even pay penalties.” He also recommended that the FTC tailor its interventions to account for the “dataism” ideology embraced by Facebook’s leaders, in part by holding them personally liable for deceptive acts related to the company’s privacy-related practices. This latter recommendation was also included as one of nine FTC regulatory steps proposed in a March 22, 2018 Guardian commentary written by Barry Lynn and Matt Stoller of the Open Markets Institute.

In the run-up to its implementation, questions abound regarding how EU regulators will interpret and enforce the GDPR, how companies will attempt to satisfy its requirements, and how these two dynamics will interact, including regulatory penalties, legal challenges to enforcement, and impacts (both intended and unintended) on privacy protection, market dynamics and the use of digital services.

With Facebook having rolled out some privacy-related changes in advance of the GDPR’s implementation date, some critics have expressed pre-launch skepticism about the social network’s level of compliance (e.g., see here and here), and also about the overall readiness of companies to satisfy GDPR requirements.

As Europe moves forward with GDPR enforcement, its experience is likely to generate useful lessons for policymakers and companies involved in the digital economy around the world.  At the same time, other models are being proposed as a means to strike a healthier balance between privacy and openness, and between the power of digital platforms and their users. I briefly discuss some of these below.

Platforms as “information fiduciaries”

Yale law professor Jack Balkin has suggested the concept of “information fiduciary” as an effective legal tool for achieving this healthier balance. In a blog post that later evolved into a law review paper, Balkin explained the rationale for this approach and how it might work in practice.

Traditionally, a fiduciary is a person who has a relationship of trust with a party (the beneficiary), and who is authorized to hold something valuable–for example, the beneficiary’s assets or other property–and manage them on the beneficiary’s behalf. Fiduciaries have duties of loyalty and of care...The fiduciary’s duty of loyalty may…create a duty of honesty to disclose to the beneficiary how the fiduciary is handling the assets or property. Usually the duty of loyalty also requires that the fiduciary avoid creating conflicts of interest between the fiduciary and beneficiary, and also includes a duty against self dealing— i.e., using the beneficiary’s assets to benefit the fiduciary because of the danger that the assets will be used to the beneficiary’s detriment…

[S]uppose that an online service provider is an information fiduciary. Then the OSP has a duty not to use its end users’ personal information against the end users’ interests, even without an explicit contractual promise. That fiduciary duty might be recognized by the common law, or it might be fleshed out by statute or administrative regulation, as it often is in the case of the professions…The [information] fiduciary relationship creates a duty that, in this particular context, trumps the interest in freedom of expression.

A fiduciary duty would limit the rights the company would otherwise enjoy to collect, collate, use and sell personal information about the end user…The online service provider would…have to consider whether its information practices created a conflict of interest and act accordingly. Moreover, the online service provider’s duties of loyalty and care might require it to disclose how it was using the customer’s personal information…

According to a Verge article by Russell Brandom, Balkin sees the information fiduciary approach as providing more potent privacy protection than the consent-based approach emphasized in the GDPR, as well as the CONSENT Act proposed in the U.S. by Democratic senators Markey and Blumenthal.

Balkin says [the consent-based] approach is too easy for platforms to game. “It’s very easy to get consent from end users,” Balkin says. “They’ll just click and go. So consent-based reforms often look really great on paper but don’t have any practical effect.” Even if we add mandatory opt-ins for data collection (as in the Markey Bill) or clearer descriptions of how data is used (as mandated by the GDPR), there’s a good chance users will simply click through the warnings without reading them.

Balkin’s fiduciary approach would attack the problem from a different angle. Instead of counting on users to understand the data they’re sharing, it establishes up front that services are in a privileged position and bear the blame if things go wrong. In some ways, this is already how Facebook talks about its relationship with users. Over and over again this week, Zuckerberg talked about earning users’ trust, and how the platform only works when users trust Facebook with their data. Balkin’s fiduciary rule would put that trust in legal terms: establishing that Facebook users have no choice but to share data with Facebook, and as a result, requiring that the company be careful with that data and not employ it against the user’s interest. If Facebook failed to uphold those duties, they could be taken to court, although the nature of the proceeding and the potential penalties would depend on how the rule is written.

Reallocating power & benefits when users share their data

As I see it, there are two general categories of value generated by the sharing of personal data enabled by the world’s increasingly ubiquitous digital connectivity. One category is tied directly to individual-level desires, preferences and commercial and non-commercial interactions. The second is related more to the kind of mass-level data collection and analysis involved in developing artificial intelligence (AI) capabilities, especially those based on machine learning (ML).

For example, if I want to buy a new car that fits my budget and personal preferences, the data most directly relevant to my purchase decision is that which best enables the accurate and efficient matching of my individual buyer characteristics to the characteristics of available cars.

At the other end of the spectrum are data collection and analysis activities that involve much larger amounts of data and are more likely to have much broader social impacts than simply improving the efficiency of a specific market interaction. As Evgeny Morozov notes in a March 31, 2018 Guardian piece:

[The] full value of [some] data emerges only once it’s aggregated across many individuals…a lot of the data that we generate, when we walk down a tax-funded city street equipped with tax-funded smart street lights, is perhaps better conceptualised as data to which we might have social and collective use rights as citizens, but not necessarily individual ownership rights as producers or consumers.

In the remainder of this post I’ll be discussing potential approaches to these two data categories, all of which are grounded in the principle that platform users should have greater control over the use of their personal data and benefit more from that usage.

Shifting from an “Attention Economy” to a more efficient “Intention Economy”

In 2012, the year Facebook went public and began selling mobile ads, a book entitled The Intention Economy: When Customers Take Charge was released. The book was written by Doc Searls, who had earlier co-authored the 1999 Internet-era classic The Cluetrain Manifesto. In The Intention Economy, Searls critiques and offers an alternative to today’s digital Attention Economy, whose advertising technologies, he recently explained, have become even more sophisticated and invasive since the book was written. Searls explains that:

[W]hy build an economy around Attention, when Intention is where the money comes from?…The Intention Economy grows around buyers, not sellers. It leverages the simple fact that buyers are the first source of money, and that they come ready-made. You don’t need advertising to make them…The Intention Economy is about markets, not marketing. You don’t need marketing to make Intention Markets…In the Intention Economy, the buyer notifies the market of the intent to buy, and sellers compete for the buyer’s purchase. Simple as that.

Key to building an Intention Economy, explains Searls, is the development of what he calls Vendor Relationship Management (VRM) tools.

These tools will…become the means by which individuals control their relationships with multiple social networks and social media…Relationships between customers and vendors will be voluntary and genuine, with loyalty anchored in mutual respect and concern, rather than coercion. So, rather than “targeting,” “capturing,” “acquiring,” “managing,” “locking in,” and “owning” customers, as if they were slaves or cattle, vendors will earn the respect of customers…[R]ather than guessing what might get the attention of consumers—or what might “drive” them like cattle—vendors will respond to actual intentions of customers…Customer intentions, well expressed and understood, will improve marketing and sales, because both will work with better information, and both will be spared the cost and effort wasted on guesses about what customers might want, flooding media with messages that miss their marks.
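Searls describes the Intention Economy mechanism abstractly: the buyer notifies the market of an intent to buy, and sellers compete for the purchase. As a toy sketch of that reversed flow (using invented data and names, not any actual VRM protocol), it might look like this:

```python
def intention_market(intent, sellers):
    """Buyer broadcasts an intent; matching sellers compete with offers.

    `intent` is a dict like {"item": "car", "max_price": 20000};
    each seller carries an inventory mapping items to asking prices.
    Returns the cheapest qualifying (seller, price) offer, or None.
    """
    offers = []
    for seller in sellers:
        price = seller["inventory"].get(intent["item"])
        if price is not None and price <= intent["max_price"]:
            offers.append((seller["name"], price))
    # Sellers compete on price; the buyer takes the best offer.
    return min(offers, key=lambda o: o[1]) if offers else None

buyer_intent = {"item": "car", "max_price": 20000}
sellers = [
    {"name": "DealerA", "inventory": {"car": 19500}},
    {"name": "DealerB", "inventory": {"car": 18900}},
    {"name": "DealerC", "inventory": {"truck": 25000}},
]
print(intention_market(buyer_intent, sellers))  # → ('DealerB', 18900)
```

Note how the direction of disclosure is inverted relative to the Attention Economy: the buyer reveals exactly one intent on her own terms, rather than sellers inferring intent from pervasive tracking.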

Searls’ work with colleagues at Harvard’s Berkman Klein Center and elsewhere led to the creation of ProjectVRM which, in turn, led to the creation of Customer Commons.  The latter’s mission is “to restore the balance of power, respect and trust between individuals and organizations that serve them.” As its website explains:

Customer Commons holds a vision of the customer as an independent actor who retains autonomous control over his or her personal data, desires and intentions. Customers must also be able to assert their own terms of engagement, in ways that are both practical and easy to understand for all sides.

In a November 18, 2016 Medium post, Searls provided an update on ProjectVRM’s efforts to develop software and services that “make customers both independent and better able to engage with business.” He also noted that the VRM community is poised to move on to a second phase of development, and that this effort could scale up more quickly “if the investment world finally…recognizes how much more value will come from independent and engaging customers than from captive and dependent ones.”  And that shift in investor sentiment, he suggested, may be aided by the recognition “that the great edifice of guesswork ‘adtech’ has become is about to get burned down by regulation anyway.”

Searls’ last comment ties back to the impending implementation of Europe’s GDPR, which he describes as “the world’s most heavily weaponized law protecting personal privacy.” Its purpose, he says, “is to blow away the (mostly US-based) surveillance economy, especially tracking-based ‘adtech,’ which supports most commercial publishing online.”

But Searls also sees “a silver lining for advertising in the GDPR’s mushroom cloud, in the form of the oldest form of law in the world: contracts.” To make his point he provides a simple example:

[I]f an individual proffers a term to a publisher that says “just show me ads not based on tracking me”—and that publisher agrees to it, that publisher is compliant with the GDPR, plain and simple.

In a post on the Berkman Klein Center’s VRM blog, Searls argues that this simple contractual agreement, in addition to complying with the GDPR and any similar regulation in the U.S. or other countries, will also begin to rebalance the “asymmetric power relationship between people and publishers called client-server.” In language reminiscent of The Cluetrain Manifesto, Searls explains that:

Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.

It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing to the Net that prevents each of us from having plenty of power on our own…In legal terms, we can operate as first parties rather than second ones. In other words, the sites of the world can click “agree” to our terms, rather than the other way around.

Searls goes on to explain how Customer Commons and the Linux Journal, where he currently serves as editor-in-chief, are taking initial steps to implement this vision:

Customer Commons is working on [developing] those terms. The first publication to agree to readers’ terms is Linux Journal, where I am now the editor-in-chief. The first of those terms will say “just show me ads not based on tracking me,” and is hashtagged #DoNotByte.

Noting that the approach of Customer Commons is based in part on the copyright models developed earlier by Creative Commons (which was also incubated at the Berkman Klein Center), Searls explains that Customer Commons’ personal privacy terms will come in three forms of code: Legal, Human Readable and Machine Readable.
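Customer Commons had not, at this writing, published its machine-readable format, so the sketch below is purely hypothetical (the field names and the `site_can_comply` helper are my invention); it simply illustrates how a user-proffered term like #DoNotByte might be encoded so that a publisher’s server could evaluate, and agree to, it automatically:

```python
# Hypothetical machine-readable encoding of a user-proffered term like
# #DoNotByte. Field names are illustrative only -- Customer Commons'
# actual format may differ.
do_not_byte = {
    "term": "#DoNotByte",
    "proffered_by": "user",     # the user acts as first party
    "ads_allowed": True,        # ads are acceptable...
    "tracking_allowed": False,  # ...but not tracking-based ones
    "human_readable": "Just show me ads not based on tracking me.",
}

def site_can_comply(site_practices: dict, term: dict) -> bool:
    """Return True if a site's ad practices satisfy the proffered term."""
    if term["tracking_allowed"]:
        return True
    return not site_practices.get("uses_tracking_based_ads", False)

# A publisher serving only contextual (non-tracking) ads can agree:
print(site_can_comply({"uses_tracking_based_ads": False}, do_not_byte))  # True
```

The point of the three parallel forms is that the same agreement reads as a contract to lawyers, a sentence to users, and a record like the one above to servers.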

Who owns and controls the data used to develop AI?

While the Information Fiduciary and Customer Commons models hold promise for increasing trust and rebalancing power in the relationship between individual users and online platforms and marketers, other models may be particularly well suited to address issues tied to the collection of the massive amounts of data required to drive the evolution of ML-based AI technologies and systems.

My initial research suggests there are at least two directions for ownership and control that could be applied here. One approach–discussed in a five-page paper entitled Should We Treat Data as Labor? Moving Beyond “Free” and an upcoming book entitled Radical Markets: Uprooting Capitalism and Democracy for a Just Society–would treat the contribution of user-generated data as the equivalent of labor, with terms and compensation established by market-based mechanisms and institutional arrangements that support the evolution and efficient function of a data-as-labor market. A different approach, advocated by author Evgeny Morozov and Facebook co-founder Chris Hughes, envisions data ownership rights as (per Morozov) “social and collective use rights as citizens, but not necessarily individual ownership rights as producers or consumers.”

Data as labor that should be financially compensated

In an article on the Brookings Institution web site, the authors of the Should We Treat Data as Labor? paper explain the context and rationale for their proposal:

Many fear that Artificial Intelligence (AI) will end up replacing humans in employment – which could have huge consequences for the share of national income going to these displaced workers. In fact, companies in all sorts of industries are increasingly requiring less labor to do the same amount of work. How much work will end up being displaced by robots is still unknown, but as a society we should worry about what the future will look like when this happens. The paper’s main contribution is a proposal to treat data as labor, instead of capital owned by these tech firms. We think this might be a way to provide income and a new source of meaning to people’s lives in a world where many traditional occupations no longer exist.

In a New York Times article entitled Your Data Is Crucial to a Robotic Age. Shouldn’t You Be Paid for It?, Eduardo Porter discusses the themes raised in the paper and the book, citing the latter’s authors, Eric A. Posner of the University of Chicago Law School and E. Glen Weyl, principal researcher at Microsoft (Weyl is also one of the five authors of the paper).

Data is the crucial ingredient of the A.I. revolution…”Among leading A.I. teams, many can likely replicate others’ software in, at most, one to two years,” notes the technologist Andrew Ng. “But it is exceedingly difficult to get access to someone else’s data. Thus data, rather than software, is the defensible barrier for many businesses.”

We may think we get a fair deal, offering our data as the price of sharing puppy pictures. By other metrics, we are being victimized: In the largest technology companies, the share of income going to labor is only about 5 to 15 percent, Mr. Posner and Mr. Weyl write. That’s way below Walmart’s 80 percent. Consumer data amounts to work they get free.

“If these A.I.-driven companies represent the future of broader parts of the economy,” they argue, “without something basic changing in their business model, we may be headed for a world where labor’s share falls dramatically from its current roughly 70 percent to something closer to 20 to 30 percent.”

Citing the significant monopsony power enjoyed by online giants like Google, Facebook and Amazon, the paper suggests that building a strong “data as labor” component in the digital economy will require some form of “countervailing power by large scale social institutions.” It goes on to suggest three possible avenues for such countervailing power: competition, “data labor unions” and government, concluding that “all three of these factors must coordinate for [the data as labor model] to succeed, just as in historical labor movements.”

Data as an infrastructural public good

A different approach to ownership and control of data generated by connected citizens and used to develop AI technologies is to treat it as a shared social good. This view has been put forth by Evgeny Morozov in a series of opinion pieces in the Guardian. In a December 3, 2016 column, Morozov described data as “an essential, infrastructural good that should belong to all of us; it should not be claimed, owned, or managed by corporations.”

Enterprises should, of course, be allowed to build their services around it but only once they pay their dues. The ownership of this data – and the advanced AI built on it – should always remain with the public. This way, citizens and popular institutions can ensure that companies do not hold us hostage, imposing fees for using services that we ourselves have helped to produce. Instead of us paying Amazon a fee to use its AI capabilities – built with our data – Amazon should be required to pay that fee to us.

In a later Guardian piece, published July 1, 2017, Morozov explains a bit more of his vision:

All of the nation’s data, for example, could accrue to a national data fund, co-owned by all citizens (or, in the case of a pan-European fund, by Europeans). Whoever wants to build new services on top of that data would need to do so in a competitive, heavily regulated environment while paying a corresponding share of their profits for using it. Such a prospect would scare big technology firms much more than the prospect of a fine.

Morozov continues to sketch out his vision of data as a public infrastructure good in a March 31, 2018 Guardian piece published in the wake of the Facebook-Cambridge Analytica revelations.

[W]e can use the recent data controversies to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data. These institutions will organise various data sets into pools with differentiated access conditions. They will also ensure that those with good ideas that have little commercial viability but promise major social impact would receive venture funding and realise those ideas on top of those data pools.

A “data tax” that generates a “data dividend” we all share

In an April 27, 2018 Guardian piece, Chris Hughes, a Facebook co-founder, proposed an approach similar to Morozov’s “data as infrastructural public good” model.

The gist of Hughes’ proposal is to combine a “data tax” with a “data dividend” distributed to citizens. As a potential model for such an approach he cites Alaska’s Permanent Fund Dividend:

There is a template for how to do this. In Alaska, unlike in the lower 48 states, the rights to minerals, oil and natural gas, are owned by the state, and not by any single landowner. At the moment of the oil boom in the 1970s in Alaska, a Republican governor there forged an agreement between the public and the oil companies: you are welcome to profit from our natural resources, but you must share some of the wealth with the people. He created a savings account for all Alaskans called the Permanent Fund, and voters approved it overwhelmingly in a statewide referendum.

Oil companies pay a significant portion of their gross revenues to the state, and a portion of that money is earmarked to fund a savings account for the people…While oil and gas companies have thrived in the state, the Permanent Fund Dividend has dramatically reduced the number of people living in poverty in Alaska and is a major reason Alaska has the lowest levels of income inequality in the nation.

In the case of the data dividend, any large company making a significant portion of its profits from data that Americans create could be subject to a data tax on gross revenues. This would encompass not only Facebook and Google, but banks, insurance companies, large retail outlets, and any other companies that derive insights from the data you share with them. A 5% tax, even by a conservative estimate, could raise over $100bn a year. If the dividend were distributed to each American adult (although one could argue teenagers should be included given their heavy internet use), each person would receive a check for about $400 per year.

The amount of data we produce about ourselves and the profits from it would almost certainly grow in coming years, causing the fund to grow very large, very fast. You could easily imagine each individual receiving well over $1,000 a year in just the next decade. Unlike oil, this data is not an exhaustible resource, enabling the fund to disburse the total revenues each year.
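Hughes’ arithmetic is easy to check. The sketch below assumes a qualifying gross-revenue base of roughly $2 trillion and about 250 million American adults; those inputs are my assumptions, chosen to be consistent with his $100bn and $400 figures rather than values he states:

```python
# Back-of-the-envelope check of Hughes' data-dividend arithmetic.
# The revenue base and adult population below are assumptions,
# consistent with (but not stated in) his Guardian piece.
TAX_RATE = 0.05
QUALIFYING_GROSS_REVENUES = 2_000_000_000_000  # ~$2 trillion (assumed)
US_ADULTS = 250_000_000                        # ~250 million (assumed)

fund = TAX_RATE * QUALIFYING_GROSS_REVENUES    # ~$100 billion per year
dividend_per_adult = fund / US_ADULTS          # ~$400 per adult per year

print(f"fund: ${fund / 1e9:.0f}bn, dividend: ${dividend_per_adult:.0f}/adult")
```

On these assumptions the fund comes to $100bn and the per-adult check to $400, matching Hughes’ estimate.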

To close out his Guardian piece, Hughes cites three key questions that need further exploration regarding his proposal: 1) is data the right thing to tax; 2) how do you define which companies would be subject to the tax; and 3) how do you ensure the tax doesn’t become a justification for giving up on other regulation?

Data portability as means to enhance competition & consumer choice

Another potential model for righting the balance of data-related power that today overwhelmingly favors digital platforms is what has come to be known as “data portability.”

In a June 30, 2017 New York Times opinion piece, University of Chicago business school professors Luigi Zingales and Guy Rolnik laid out the basic argument for a data portability model. They started by noting that Google’s 90% market share in search and Facebook’s penetration of 89% of Internet users are manifestations of powerful network effects that tend to pull these markets toward a monopoly.

According to Zingales and Rolnik, a relevant model for addressing this tendency toward monopoly is the telecom sector’s “number portability” rules. By making it easier for mobile phone customers to switch carriers, these rules contributed to increased market competition and price reductions. “The same is possible,” they claimed, “in the social network space.”

It is sufficient to reassign to each customer the ownership of all the digital connections that she creates — what is known as a “social graph.” If we owned our own social graph, we could sign into a Facebook competitor — call it MyBook — and, through that network, instantly reroute all our Facebook friends’ messages to MyBook, as we reroute a phone call.
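The mechanics Zingales and Rolnik describe can be pictured in miniature. The toy sketch below is invented for illustration (no real portability API looks like this); it shows how a user-owned graph would make switching networks trivial:

```python
from dataclasses import dataclass, field

# Toy model of a user-owned social graph that any network can consume.
# Everything here is hypothetical illustration, not a real API.

@dataclass
class SocialGraph:
    owner: str
    connections: set = field(default_factory=set)

class Network:
    """Stand-in for Facebook or a competitor like 'MyBook'."""
    def __init__(self, name: str):
        self.name = name
        self.graph = None
        self.inbox = []

    def import_graph(self, graph: SocialGraph):
        # Because the user owns the graph, switching providers is just
        # handing the same data to a new network.
        self.graph = graph

    def route_message(self, sender: str, text: str):
        # Deliver only messages from the graph owner's connections.
        if self.graph and sender in self.graph.connections:
            self.inbox.append((sender, text))

graph = SocialGraph(owner="alice", connections={"bob"})
mybook = Network("MyBook")
mybook.import_graph(graph)          # instant switch, no lock-in
mybook.route_message("bob", "hi!")  # bob's message follows alice
print(mybook.inbox)                 # [('bob', 'hi!')]
```

The lock-in problem, in this framing, is simply that today the `graph` object lives inside Facebook rather than with the user.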

While the data portability model sounds appealing in principle, a number of experts are skeptical about the extent to which it can have the level of impact on competition that the telecom industry’s number portability rules have had.

For example, Joshua Gans points out that “social graphs” are far more complex and dynamic than telephone numbers, making the “portability” process more challenging.

Some of my Facebook posts are public and I read many public posts from the media, fan groups and companies. That is all part of my social graph but how would we work all of that? That said, there may be solutions there. The larger issue is how these links work is constantly evolving yet having a consumer controlled social graph may make it difficult to be responsive. After all, think about how you manage the social graph that is your pre-programmed fast dial numbers on a phone (if you even do those things). They quickly go out of date and you can’t be bothered updating them.

Will Rhinehart argues that the data portability model, as described by Zingales and Rolnik, misunderstands what makes data valuable and what gives the dominant online platforms their power.

Contrary to the claims of portability proponents…it isn’t data that gives Facebook power. The power rather lies in how this data is structured, processed, and contextualized. What Mark Zuckerberg has called the social graph is really his term for a much larger suite of interrelated databases and software tools that helps to analyze and understand the collected data…Requiring data portability does little to deal with the very real challenges that face the competitors of Facebook, Amazon, and Google. Entrants cannot merely compete by collecting the same kind of data. They need to build better sets of tools to understand information and make it useful for consumers.

MIT researchers Chelsea Barabas, Neha Narula and Ethan Zuckerman have also concluded that the practical challenges facing social network startups are substantial, multifaceted and extend well beyond the issue of data portability. In an article based on their study of “several of [the] most promising efforts to ‘re-decentralize’ the web,” they discuss the mix of challenges facing these startups.

While they cite limited interoperability and the “hoarding of user views and data” as factors that give dominant platforms a competitive advantage, the MIT researchers found these were only some of the challenges facing social network startups and driving the market toward monopolization.

We join [social networks] because our friends are there…And while existing social networks have perfected their interfaces based on feedback from millions of users, new social networks are often challenging for new users to navigate.

Other startup challenges cited by the MIT researchers include managing security threats and higher costs relative to larger incumbents that benefit from economies of scale in the acquisition of key resources like storage and bandwidth.

Though these experts’ comments point to the limits of data portability as a stimulant of successful platform competition, Gans suggests that moving in this direction can and should be part of a broader solution aimed at striking a healthier balance of data-related rights, power and benefits.

In terms of social graph, consumers surely have a right to share information they have provided Facebook with others, and Facebook should probably make that easy even if it falls short of some portability proposal.

######

As the above discussion hopefully makes clear, there are a number of promising approaches to achieving a healthier balance of rights, power and benefits related to the collection and use of data generated by and about citizens. Given recent events, it seems timely for policymakers in the U.S. and other countries to join with tech industry leaders and experts, and other digital economy stakeholders, in a serious and ongoing dialog about the relative strengths, weaknesses and compatibility of these approaches. That dialog should take into account the lessons learned from Europe’s experience as it attempts to address these issues via the GDPR, and should strive for some measure of consensus on how best to achieve this rebalancing of power and benefits.

In subsequent posts I’ll be switching gears from a focus on specifically data-related issues to a broader consideration of problems and potential remedies related to the power of digital platforms, the functions and outcomes of democracy in both the political and economic spheres, and the interactions between these two important issues.

********

Below is an outline, with links, to all the posts in this series. Unless otherwise noted, bolding in quotations is mine, added for emphasis.

  • Expanding Democratic Governance in the Digital Anthropocene
    • The digital anthropocene: a pivotal & high-risk phase of human history
    • Empathy + technology: a powerful recipe for shared prosperity & peace
    • More (and more effective) democracy as part of the solution
    • The tech sector can help lead the next phase in democracy’s evolution
  • Democracy & Digital Platforms: A Match Made in Heaven or in Hell?
    • The Facebook F-up as a wake-up call
    • Where to look for solutions?
  • Serving Users (to Advertisers to Benefit Shareholders)
    • An IPO + mobile ads: 2012 as a turning point for Facebook
    • Too busy driving growth to focus on privacy?
    • Serving users or serving users to advertisers?
    • Understanding & addressing social harms
  • Data as Power: Approaches to Righting the Balance
    • Our data is tracked & locked in a “black box” we don’t control or understand
    • The EU tightens privacy protections while the U.S. does the reverse
    • Platforms as “information fiduciaries”
    • Reallocating power & benefits when users share their data
    • Shifting from an “Attention Economy” to a more efficient “Intention Economy”
    • Who owns and controls the data used to develop AI?
    • Data as labor that should be financially compensated
    • Data as an infrastructural public good
    • A “data tax” that generates a “data dividend” we all share
    • Data portability as means to enhance competition & consumer choice
  • The Power of Dominant Platforms: It’s Not Just About “Bigness”
    • New forms of concentrated power call for new remedies
    • Platforms wield transmission, gatekeeping & scoring power
    • Antitrust needs an updated framework to address platform power
    • Creating a civic infrastructure of checks & balances for the digital economy
  • Democracy & Corporate Governance: Challenging the Divine Right of Capital
    • A “generative” or “extractive” business model?
    • Dethroning kings & capital 
    • Moving beyond capitalism’s aristocratic form
    • Embracing economic democracy as a next-step Enlightenment
  • Platform Cooperativism: Acknowledging the Rights of “Produsers”
    • Reclaiming the Internet’s sharing & democratizing potential
    • Scaling a platform co-op: easier said than done
    • The #BuyTwitter campaign as a call for change
    • Encouraging the wisdom of crowds or the fears of mobs?
  • Interactions Between Political & Platform Systems
    • Feedback loops reinforce strengths & weaknesses, benefits & harms
    • Facebook’s role in the election as an example
    • If we don’t fix government, can government help fix Facebook?
  • A Purpose-Built Platform to Strengthen Democracy
    • Is Zuck’s lofty vision compatible with Facebook’s business model?
    • Designed to bolster democracy, not shareholder returns
  • Democratic Oversight of Platform Management by “Produsers”
    • Facebook, community and democracy
    • Is Facebook a community or a dictatorship?
    • Giving users a vote in Facebook’s governance
    • Technology can help users participate in FB governance
    • Evolving from corporate dictatorship toward digital democracy

The Power of Dominant Platforms: It’s Not Just About “Bigness”

This post will focus mainly on an article by Brooklyn Law School professor K. Sabeel Rahman that examines the power of digital platforms in an historical context. Though I try to summarize key points and include excerpts from the article, entitled The New Octopus, I’d strongly recommend reading it in full to anyone interested in the issues addressed in this series of posts.

In the article’s introductory section, Rahman points out that issues of corporate power in the digital age have both similarities and differences in relation to those faced during the Progressive Era a century earlier. Citing Supreme Court Justice Brandeis’ famous reference to “the curse of bigness,” Rahman explains that:

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others.

New forms of concentrated power call for new remedies 

The challenge then, says Rahman, is to develop strategies that can effectively counter excessive concentrations of power in the digital age.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor—together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

Noting that “technological power today operates in distinctive ways that make it both more dangerous and potentially more difficult to contest,” Rahman cites three types of infrastructural power wielded by digital platforms: transmission, gatekeeping and scoring. Because the impacts of these forms of power “grow as more and more goods and services are built atop a particular platform,” he explains, their exercise is “more subtle than explicit control…enabl[ing] a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.”

Platforms wield transmission, gatekeeping & scoring power

As an example of transmission power, which Rahman describes as “the ability of a firm to control the flow of data or goods,” he cites Amazon. “As a shipping and logistics infrastructure,” he explains, Amazon “can be seen as directly analogous to the railroads of the nineteenth century.” Its transmission power, he adds, “places Amazon in a unique position to target prices and influence search results in ways that maximize its returns,…favor its preferred producers” and even allow it to “make or break businesses and whole sectors, just like the railroads of yesteryear.”

With regard to gatekeeping power, Rahman explains that the exercise of this power does not require a firm to control the entire infrastructure of transmission, only to “control the gateway to an otherwise decentralized and diffuse landscape.” Examples of this kind of power, he says, are Facebook News Feed and Google Search.

[G]atekeeping power subordinates two kinds of users on either end of the “gate.” Content producers fear hidden or arbitrary changes to the algorithms for Google Search or the Facebook News Feed, whose mechanics can make the difference between the survival and destruction of media content producers. Meanwhile, end users unwittingly face an informational environment that is increasingly the product of these algorithms—which are optimized not to provide accuracy but to maximize user attention spent on the site. The result is a built-in incentive for platforms like Facebook or YouTube to feed users more content that confirms preexisting biases and provide more sensational versions of those biases, exacerbating the fragmentation of the public sphere into different “filter bubbles.”
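The incentive Rahman describes can be made concrete with a toy feed ranker. The posts, weights and scoring function below are all invented for illustration, but they capture the structural point: when the ranking objective is predicted engagement rather than accuracy, bias-confirming and sensational content rises to the top.

```python
# Toy feed ranker: the objective rewards confirmation and sensationalism;
# accuracy does not enter it at all. All posts and weights are invented.
posts = [
    {"title": "Careful fact-check of claim X", "confirms_bias": 0.1, "sensational": 0.2},
    {"title": "Outrageous take on claim X",    "confirms_bias": 0.9, "sensational": 0.9},
    {"title": "Neutral explainer on claim X",  "confirms_bias": 0.3, "sensational": 0.1},
]

def predicted_engagement(post: dict) -> float:
    # An attention-maximizing platform optimizes for clicks and time on
    # site, which (in this toy model) track these two traits.
    return 0.6 * post["confirms_bias"] + 0.4 * post["sensational"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
print(feed[0]["title"])  # the outrageous take ranks first
```

Nothing in the objective penalizes inaccuracy, so the “filter bubble” outcome falls out of the optimization itself, not from any editorial intent.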

Rahman goes on to explain that platforms’ gatekeeping decisions can have huge social and political consequences.

While the United States is only now grappling with concerns about online speech and the problems of polarization, radicalization, and misinformation, studies confirm that subtle changes—how Google ranks search results for candidates prior to an election, for instance, or the ways in which Facebook suggests to some users rather than others that they vote on Election Day—can produce significant changes in voting behavior, large enough to swing many elections.

The third type of power considered by Rahman is the scoring power “exercised by ratings systems, indices and ranking databases.” Citing socially destructive impacts of gamed credit ratings as an example, he notes that “scoring power is not a new phenomenon.” But he adds that “big data and the proliferation of AI enable…much wider use of similarly flawed scoring systems [and] as these systems become more widespread, their power—and risk—magnifies.”

In his book The Black Box Society: The Secret Algorithms That Control Money and Information, University of Maryland law professor Frank Pasquale examines the potentially destructive asymmetry of power reflected in the expansion of algorithmic scoring in our increasingly monitored world. As he explains in the book’s introduction:

The success of individuals, businesses, and their products depends heavily on the synthesis of data and perceptions into reputation. In ever more settings, reputation is determined by secret algorithms processing inaccessible data…Although internet giants say their algorithms are scientific and neutral tools, it is very difficult to verify those claims. And while they have become critical economic infrastructure, trade secrecy law permits managers to hide their methodologies, and business practices, deflecting scrutiny.

Antitrust needs an updated framework to address platform power

The last section of Rahman’s article considers a range of possible approaches to constraining these infrastructural powers of platforms, including the use of antitrust regulations. As a potential example of the latter, he cites prohibitions on Amazon “being both a platform and a producer of its own goods and content sold on its own platform, as a way of preventing the incentive to self-deal.”

Among those advocating for stronger antitrust enforcement in the digital economy is the Open Markets Institute, headed by Barry Lynn, author of Cornered: The New Monopoly Capitalism and the Economics of Destruction.  Shortly after the Cambridge Analytica revelations, the Institute proposed that Facebook be required to spin off its ad network as well as Instagram and WhatsApp, the two competing social networks it acquired between 2012 and 2014. It also recommended that Facebook be prohibited from making any further acquisitions for at least five years. And in a Yale Law Journal paper entitled Amazon’s Antitrust Paradox, Lina Khan, the Institute’s Director of Legal Policy, called for a new approach to antitrust in the digital age, arguing that “the current framework in antitrust—specifically its pegging competition to “consumer welfare,” defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy.” The final sections of Khan’s paper “consider two potential regimes for addressing Amazon’s power: restoring traditional antitrust and competition policy principles or applying common carrier obligations and duties.”

In a 2018 paper published in Telecommunications Policy, Natascha Just, Associate Professor in the Department of Media and Information at Michigan State University, also calls for an updated approach to antitrust, citing the challenges facing efforts to control “market dominance and anticompetitive behavior in times of platformization.” These challenges, Just suggests, “are forcing a paradigm change in the area of competition policy.”

[T]heoretical advances and new market conditions require (1) a shift in attention from traditional price-oriented analyses to systematic inclusions of non-price competition factors like quality, innovation, and privacy, (2) due consideration of attention markets and the acknowledgement of markets in the absence of price, as well as (3) alertness to the role of user data and big data that has become a new asset class in digital economies.

Creating a civic infrastructure of checks & balances for the digital economy

In addition to antitrust, Rahman considered a range of other potential approaches to restraining the power of dominant platforms.

  • The creation of independent oversight and ombudsman bodies within Facebook, Google and other tech platforms. To be effective and legitimate, Rahman says, these “would need to have significant autonomy and independence” and “engage a wider range of disciplines and stakeholders in their operations.”
  • The development of “more explicit professional and industry standards of conduct,” perhaps facilitated by third-party scoring systems (e.g., similar to the LEED program that certifies green building practices).
  • Creation of new interdisciplinary governmental institutions for oversight of “algorithms, the use of big data, search engines, and the like, subjecting them to risk assessments, audits, and some form of public participation.” The risk here, as with any government regulation, Rahman notes, is that “industry is likely to be several steps ahead of government, especially if it is incentivized to seek returns by bypassing regulatory constraints.” A related issue is that of regulatory capture, especially in a governmental system lacking strong safeguards against such capture.
  • Privacy-related restrictions and/or a “big data tax” as “structural inhibitors of some kinds of big data and algorithmic uses.”

To close his article, Rahman ties these digital age issues of platform power back to the challenges and lessons of the Progressive Era:

A key theme for Progressive Era critics of corporate power was the confrontation between the democratic capacities of the public and the powers of private firms. Today, as technology creates new forms of power, we must also create new forms of countervailing civic power. We must build a new civic infrastructure that imposes new kinds of checks and balances.

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.

In the following post I’ll be discussing Marjorie Kelly’s perspective on corporate power and governance. In my view, her diagnosis of the problem and suggested remedies overlap to a significant degree with the analysis presented in Rahman’s article. And both inform my own suggestions for strategies to constrain harms from the operation of digital platforms while encouraging their benefits. These suggestions are discussed in later posts in this series.

    • Evolving from corporate dictatorship toward digital democracy
Posted in Communication Policy, Next Generation Internet, Uncategorized

Democracy & Corporate Governance: Challenging the Divine Right of Capital

In October 2017 the New York Times invited nine “technologists, academics, politicians and journalists” to propose steps that could help “fix” Facebook, “as a product, a company or both.” While all of the suggested “fixes” seemed likely to be helpful, two struck me as more fundamental in their approach, focusing on the structure and processes of corporate governance rather than on specific changes to Facebook’s policies and/or government regulations related to issues such as data ownership, privacy, transparency, competition, the role of advertising and algorithms, and other aspects of platform functionality.

One of these suggestions came from Ellen Pao, chief diversity and inclusion officer at the Kapor Center for Social Impact and a former chief executive of Reddit. While Pao’s suggestion that “Facebook needs to replace its focus on engagement quantity with interaction quality” struck me as similar to policy changes proposed by other experts, she also argued that, in order to make this kind of change, Facebook needs to “replace at least half of the leadership team and board with underrepresented people of color who are informed and value diversity and inclusion.”

The last and perhaps most fundamental remedy proposed in the Times piece came from Tim Wu, Professor at Columbia Law School, author of “The Attention Merchants: The Epic Scramble to Get Inside Our Heads” and originator of the term “net neutrality.”

Wu suggested that Facebook become a public benefit corporation. This change, he says, would make it much easier for the company to realize its lofty social ambitions, since it would yield a corporate charter requiring the company to more explicitly and more fully commit itself to “do[ing] something that would aid the public,” and for its board members to “take that public benefit into account when making decisions.”

Mark Zuckerberg has said that Facebook’s goals are “bringing us closer together” and “building a global community.” Worthy, beautiful goals, but easier said than done when Facebook is also stuck delivering ever-increasing profits and making its platform serve the needs of advertisers…As a nonprofit or public benefit corporation (like Kickstarter), Facebook…could shed its “two masters” dilemma, truly pursue its lofty goals and become a firm of which its users and the world could actually be proud.

What struck me about the suggestions made by Wu and Pao is that, rather than recommending specific changes in Facebook policies, they consider the underlying issues of corporate purpose and governance that are key determinants of which policies are adopted and which are not. Their proposed structural and corporate charter-level changes seem designed to ensure that Facebook, already the world’s largest social network, also operates more like a social enterprise, which Wikipedia defines as “an organization that applies commercial strategies to maximize improvements in financial, social and environmental well-being.”

As I read the proposals from Wu and Pao I was reminded of the work of Marjorie Kelly, co-founder and for 20 years president of Business Ethics magazine. In her two books, Kelly has: 1) examined key differences between “generative” and “extractive” ownership models; and 2) raised fundamental questions about the ownership rights of corporate shareholders in relation to those that might be claimed by a broader range of stakeholders involved with and/or impacted by a corporation’s activities. (I’ve previously discussed Kelly’s books on this site, and here and here on Michigan State University’s Quello Center blog.)

In a speech at the 2012 annual conference of the Business Alliance for Local Living Economies, Kelly highlighted the fundamental importance of ownership in our economy and our world, and the problems associated with the dominant ownership models of modern corporate capitalism. “Questions about who owns the wealth-producing infrastructure of an economy [and] whose interests it serves,” she argued, “are among the largest issues any society can face.”

Since I can’t match the clarity and eloquence of Kelly’s writing, my discussion of her books in this post will rely heavily on excerpts from them. While I hope this approach will convey key insights from the books, I urge anyone interested in these issues to read the well-written books themselves.

A “generative” or “extractive” business model?

The central theme of Kelly’s most recent book, Owning Our Future (published in 2012, the year Facebook went public and began selling mobile ads), is the distinction between “generative” and “extractive” ownership models, and the key elements characterizing each model. As she explains, “[t]he first and most important difference” between generative and extractive forms of ownership is that the former have a “Living Purpose,” as contrasted with the “Financial Purpose” pursued by extractive ownership models. This distinction echoes the underlying point of Wu’s suggestion that Facebook become a public benefit corporation.

Generative means the carrying on of life, and generative design is about the institutional framework for doing so. In their basic purpose, and in their living impact, these designs have an aim of generating the conditions where all life can thrive. They are built around a Living Purpose.

This is in contrast to the dominant ownership designs of today, which we might call extractive. Their aim is maximum extraction of financial wealth. They are built around a single-minded Financial Purpose.

But, according to Kelly, “purpose alone isn’t enough.” Also needed, she says, is “the presence of at least one other structural element that holds that purpose in place.” These additional elements of generative design are:

  • Membership. Who’s part of the enterprise? Who has a right to a say in profits, and who takes the risk of ownership? Corporations today have Absentee Ownership. Generative ownership has Rooted Membership, with ownership held in human hands.
  • Governance. Extractive ownership involves Governance by Markets, where control is linked to share price. Generative ownership involves Mission-Controlled Governance, with control held in mission-oriented hands.
  • Finance. Instead of the Casino Finance of traditional stock market ownership, generative approaches involve Stakeholder Finance, where capital becomes a long-term friend.
  • Networks. If traditional approaches use Commodity Networks, where goods trade based solely on price, generative economies use Ethical Networks, which offer collective support for social and ecological norms.

Kelly goes on to explain that, while “[n]ot every ownership model has every one of these design elements…the more elements that are used, the more effective the design.”

Dethroning the divine right of kings & capital

Kelly’s first book, The Divine Right of Capital, was published in 2001, as the late-90s dotcom bubble was bursting, and roughly halfway between the founding of Google in 1998 and Facebook in 2004. The book provides a historically grounded and potentially paradigm-shifting lens through which we can: 1) understand the dominant role of corporations in the modern capitalist economy; and 2) consider how best to address the problems associated with that dominance.

In the book’s introduction, Kelly asks fundamental questions about how the wealth of giant public corporations is created. While these questions were timely when the book was published very early in the Internet’s history, they seem even more so today, when Facebook and other giant platform companies are extracting massive financial value from their users’ activities and funneling the lion’s share of that value to shareholders.

In an era when stock market wealth has seemed to grow on trees—and trillions have vanished as quickly as falling leaves—it’s an apt time to ask ourselves, where does wealth come from? More precisely, where does the wealth of public corporations come from? Who creates it?

To judge by the current arrangement in corporate America, one might suppose capital creates wealth—which is strange, because a pile of capital sitting there creates nothing. Yet capital providers—stockholders—lay claim to most wealth that public corporations generate. Corporations are believed to exist to maximize returns to shareholders. This is the law of the land, much as the divine right of kings was once the law of the land.

Just as the divine right of kings was the core myth that for centuries helped sustain the legal claims of royalty to own and control massive assets, similar claims about the rights of corporate shareholders are sustained by another myth considered unchallengeable and backed by force of law: that shareholder returns must be maximized. As Kelly puts it, “[w]e might call it our secular version of the divine right of kings.”

Kelly argues that, just as the divine right of kings was ultimately rejected as an arbitrary, self-serving and fundamentally unjust myth, a similar critique can and should be applied to the “divine right of capital” as expressed in the principle of shareholder primacy as applied to modern corporations. As she explains:

When we say that a corporation did well, we mean that its shareholders did well. The company’s local community might be devastated by plant closings. Employees might be shouldering a crushing workload. Still we will say, “The corporation did well.”

One does not see rising employee income as a measure of corporate success. Indeed, gains to employees are losses to the corporation. And this betrays an unconscious bias: that employees are not really part of the corporation. They have no claim on wealth they create, no say in governance, and no vote for the board of directors. They’re not citizens of corporate society, but subjects…

The oddity of it all is veiled by the incantation of a single, magical word: ownership. Because we say stockholders own corporations, they are permitted to contribute very little, and take quite a lot…

Why have the rich gotten richer while employee income has stagnated? Because that’s the way the corporation is designed. Why are companies demanding exemption from property taxes and cutting down three-hundred-year-old forests? Because that’s the way the corporation is designed. “A rising tide lifts all boats,” the saying goes. But the corporation functions more like a lock-and-dam operation, raising the water level in one compartment by lowering it in another.

Moving beyond capitalism’s aristocratic form

In Kelly’s view (one that I share), what’s needed is for free market capitalism to evolve from its “aristocratic form” to a form based on principles of economic democracy.

The problem is not the free market, but the design of the corporation…It is true that through history capitalism has been a system that has largely served the interests of capital. But then, government until the early twentieth century largely served the interests of kings. It wasn’t necessary to throw out government in order to do away with monarchy—instead we changed the basis of sovereignty on which government rested. We might do the same with the corporation, asserting that employees and the community rightfully share economic sovereignty with capital owners.

What we have known until now is capitalism’s aristocratic form. But we can embrace a new democratic vision of capitalism, not as a system for capital, but a system of capital—a system in which all people are allowed to accumulate capital according to their productivity, and in which the natural capital of the environment and community is preserved.

To provide some historical context for understanding how and why corporations have expanded their power as institutions of “aristocratic capitalism,” Kelly refers back to the American Revolution’s attempt to free the colonies from an aristocratic form of control by the English Crown.

The major companies of their era, like the East India Company, were arms of the Crown. America was founded by similar, though smaller, Crown companies. The founding generation in America seemingly felt that in bringing the Crown to heel, they had immunized themselves from corporate predation. This may be the reason that they left us few tools, at the federal level, for governing corporations: the word corporation itself appears nowhere in the Constitution.

This lack of tools to constrain the unbridled growth of corporate power, operating largely free of government control, has allowed modern corporations to, in Kelly’s words, “evolve into something new in civilization—more massive, more powerful than our democratic forefathers dreamed possible.” To support her argument she cites FDR’s description of major corporations as “a kind of private government which is a power unto itself.” (As discussed in another post, Mark Zuckerberg has described Facebook in similar terms.)

Kelly cites a number of changes in the nature of major corporations that have made the notion of shareholder primacy “increasingly out of step with reality.”

Increasing size. Today, among the world’s one hundred largest economies, fifty-one are corporations. They have revenues larger than nation-states, yet maintain the guise of being the “private property” of shareholders.

The shrinking of ownership functions. While we still call stockholders the owners of major public firms, they do not—for the most part—manage, fund, or accept liability for “their” companies. Ownership function has shrunk to virtually one dimension: extracting wealth.

The rise of the knowledge economy. For many companies, knowledge is the new source of competitive advantage. To allow shareholders to claim the corporation’s increasing wealth—when employees play a greater role in creating that wealth—is a misallocation of resources.

The increasing damage to our ecosystem. The rules of accounting were written in the fifteenth century, when to the Western mind nature seemed an unlimited reservoir of resources, and an unlimited sink for wastes. That is no longer true, but the rules of accounting retain fossilized images of those ancient attitudes.

It’s worth noting that the third item in the above list refers to employees but makes no mention of digital platform users’ contributions to corporate wealth creation, a phenomenon that was only nascent and barely monetized when the book was written but that today encompasses massive numbers of users and equally massive creation of social value and shareholder wealth.

The chapters in the first part of The Divine Right of Capital explore six key principles that prioritize the needs of corporate shareholders over the needs of others involved with and/or impacted by corporations:

  1. Worldview: In the worldview of corporate financial statements, the aim is to pay property holders as much as possible, and employees as little as possible.
  2. Privilege: Stockholders claim wealth they do little to create, much as nobles claimed privilege they did not earn.
  3. Property: Like a feudal estate, a corporation is considered a piece of property—not a human community—so it can be owned and sold by the propertied class.
  4. Governance: Corporations function with an aristocratic governance structure, where members of the propertied class alone may vote.
  5. Liberty: Corporate capitalism embraces a predemocratic concept of liberty reserved for property holders, which thrives by restricting the liberty of employees and the community.
  6. Sovereignty: Corporations assert they are private and the free market will self-regulate, much as feudal barons asserted a sovereignty independent of the Crown.

Part 2 of the book explores an economic structure based not on the shareholder primacy myth but on principles of economic democracy, and discusses how a societal shift to this new structure would parallel the Enlightenment’s rejection of the divine right of kings.

Embracing economic democracy as a next-step Enlightenment

“If we study the era of the Enlightenment, in which America was founded,” Kelly writes, “we find it did not begin with crafting laws and structures.” Instead, “[i]t began with challenging the principles on which the monarchy stood, and with articulating new principles of democracy.”

With that in mind, Kelly offers six principles of economic democracy as contrasted with today’s dominant system based on the principles of economic aristocracy:

  1. Enlightenment: Because all persons are created equal, the economic rights of employees and the community are equal to those of capital owners.
  2. Equality: Under market principles, wealth does not legitimately belong only to stockholders. Corporate wealth belongs to those who create it, and community wealth belongs to all.
  3. Public good: As semipublic governments, public corporations are more than pieces of private property or private contracts. They have a responsibility to the public good.
  4. Democracy: The corporation is a human community, and like the larger community of which it is a part, it is best governed democratically.
  5. Justice: In keeping with equal treatment of persons before the law, wealthy persons may not claim greater rights than others, and corporations may not claim the rights of persons.
  6. (r)Evolution: As it is the right of the people to alter or abolish government, it is the right of the people to alter or abolish the corporations that now govern the world.

Intellectual principles like these may seem to be mere abstractions, airy things with little relevance to the real world. But as Michel Foucault observed, ideas are mechanisms of power. “A stupid despot may constrain his slaves with iron chains,” he wrote, “but a true politician binds them even more strongly by the chain of their own ideas.” Ideas are the foundation of the social order. If we are to build a new order, we must build on the base of ideas.

To achieve this new order, suggests Kelly:

It may be that the only truly radical change we need is in our minds—in the collective pictures of reality we unconsciously hold. We accept that corporations are pieces of private property owned by shareholders, just as our ancestors believed that nations were private territories owned by kings. We live with these myths like buried shells, old bombs from an ancient war—the war we thought we had won, between monarchy and democracy.

But ideas can change. And the world changes accordingly.

It is useful to recall that the institution of kingship dominated the globe for millennia, as a nearly universal form of government stretching back to the dawn of civilization. The very idea of monarchy once seemed eternal and divine, until a tiny band of revolutionaries in America dared to stand up and speak of equality. They created an unlikely and visionary new form of government, which today has spread around the world. And the power of kings can now be measured in a thimble.

With Kelly’s perspective on ownership and economic democracy in mind, the next post in this series will consider an enterprise model in which digital platforms are owned by their “produsers,” a term intended to highlight the fact that “users” of a platform also “produce” the content and data that, along with other factors of production (e.g., software and hardware), drive the market and social value of these platforms.

Posted in Communication Policy, Economics, Next Generation Internet, Uncategorized

Platform Cooperativism: Acknowledging the Rights of “Produsers”

The discussion below builds on a prior post focused on the work of Marjorie Kelly, whose two books provide a framework for analyzing characteristics and impacts of alternative allocations of enterprise ownership rights. That framework describes key differences between “generative” and “extractive” ownership models, and draws provocative and paradigm-challenging parallels between the principle of shareholder primacy as manifested in today’s corporate-dominated economy and the pre-Enlightenment belief in the divine right of kings as a doctrine justifying royalty’s claim to near-absolute political and economic power.  Just as the pre-Enlightenment era’s injustices and imbalances in power were ameliorated by a transition to political democracy, Kelly argues for a similar evolution from capitalism’s “aristocratic form” to one characterized by what she and others refer to as economic democracy.

This post will consider cooperative ownership as a strategy for applying the principles of economic democracy to achieve a healthier balance of power in the function of online digital platforms. As discussed at various points in this series of posts, these platforms play an increasingly important role in human society by:

  • providing feature-rich multimedia communication capabilities to large and often transnational networked user bases;
  • gathering data contributed voluntarily by platform users and generated by their platform usage in ways they may not understand or approve of;
  • using this data to create algorithms that are becoming increasingly influential tools of private and public governance, yet are largely opaque in their design and operation and largely immune from democratic oversight;
  • wielding immense and arguably increasing market power, and internalizing large and growing financial surpluses generated via their interactions in the global economy;
  • generating substantial but still-not-well-understood positive and negative spillover effects.

Reclaiming the Internet’s sharing & democratizing potential

The application of economic democracy principles to the platform sector is the central focus of what has become known as the platform cooperative movement.  In a Shareable post published December 21, 2014, Nathan Schneider, one of the movement’s pioneering thinkers, summarized the problems that he and other platform co-op advocates were observing in the operation and priorities of dominant digital platforms.

High hopes for a liberating Internet have devolved into the dominance of a few mega-companies and the NSA’s watchful algorithms. Platforms entice users to draw their communities into an apparently free and open commons, only to gradually enclose it by tweaking terms of service, diluting privacy, or charging fees for essential features. Thanks to users’ unpaid labor of friending and posting, tech companies can employ far fewer people, and extract five to 10 times more profit per employee, than businesses in other industries. Fiduciary responsibility to their investors requires that they turn on the people who made them successful.

Roughly two years later, Schneider reiterated these concerns in an October 13, 2016 article in the Nation magazine entitled The Rise of a Cooperatively Owned Internet. In it he observed that:

It’s been pretty clear for a while now that the corporate Internet behemoths that claim to be involved in “sharing” and “democratizing” are doing little of either—not where it really matters. The venture capital that inflates them, and the IPOs that hand them over to Wall Street, result in an imperative to sell their users’ personal data, labor, and relationships to the highest bidder. It’s a business model based on surveillance and precarity. Many people on the front lines of the digital economy have realized that the ownership designs of the Internet’s dominant companies need to change. A few have started to figure out how.

During the period between the publication of these two articles, Schneider had helped organize the platform cooperative movement’s first conference, held at the New School in NYC in November 2015. Entitled “Platform Cooperativism: The Internet, Ownership, Democracy,” it attracted more than a thousand attendees, including New York City Council members, CEOs, investors, platform creators, and leading scholars. Follow-up events were held in November 2016 and 2017.

In January 2016, a report by Trebor Scholz, another pioneering thinker in the platform cooperative movement, was published by the Rosa Luxemburg Foundation. In the report, entitled Challenging the Corporate Sharing Economy, Scholz used the term “produser” to signify the productive contribution users make by engaging with the functionality provided by digital platforms. He also pointed to a more democratic approach to owning and managing digital platforms:

Produser-owned platforms are a response to monopolistic platforms like Facebook and Google that are luring users with the promise of the “free service” while monetizing their content and data. What if we’d own our own version of Facebook, Spotify, or Netflix? What if the photographers at Shutterstock.com would own the platform where their photos are being sold?

According to a review of Scholz’s 2016 report by David Bollier, co-founder of the Commons Strategies Group:

Scholz explains that platform cooperativism as a humane alternative to the gig economy relies upon three strategies: 1) cloning the technological heart of Uber, Airbnb and others; 2) developing social solidarity in the ownership and management of the platforms; and 3) reframing the ideas of innovation and efficiency with an eye on benefiting all, not just delivering profits to the few.

After the late 2015 conference, Schneider and Scholz teamed up to edit a book entitled Ours To Hack and to Own, which can be downloaded for free as a pdf file or purchased as a hard copy or e-book at Amazon and other sites.  The book includes more than three dozen essays from leading thinkers in the platform cooperative movement and related areas, along with brief profiles of entities adopting business models based on key principles of the movement. I strongly recommend it for anyone interested in this topic. You can also access an online directory of these cooperative companies and projects here.

Scaling a platform co-op: easier said than done

While some of these cooperatively owned and managed entities may thrive and grow to significant scale, a key challenge facing the platform cooperative movement (and any digital platform startup, for that matter) is how to finance and manage growth to a scale sufficient to benefit from the network effects and scale economies that dominant platforms leverage to: 1) drive subscriber growth and engagement (and, arguably, addiction); and 2) develop sophisticated and largely opaque systems of data extraction, analysis and algorithm-based targeting and control, which enable these platforms to very profitably monetize users’ attention.

As Evgeny Morozov put it in a December 3, 2016 opinion piece in the Guardian:

There is no reason why a cooperative of drivers in a small town cannot build an app to help them beat Uber locally. But there is also no good reason to believe that this local cooperative can actually build a self-driving car: this requires massive investment and a dedicated infrastructure to harvest and analyse all of the data. One can, of course, also create data ownership cooperatives, but it’s unlikely they will scale to a point of competing with Google or Amazon.

A similar skepticism was expressed by MIT researchers Chelsea Barabas, Neha Narula and Ethan Zuckerman, based on their study of efforts to develop decentralized social networks as competitors to Facebook and other dominant platforms. In an article entitled Decentralized Social Networks Sound Great. Too Bad They’ll Never Work, they discuss the challenges faced by social network upstarts:

The three of us investigated several of these most promising efforts to “re-decentralize” the web, to better understand their potential to shake up the dominance of Facebook, Google, and Twitter. The projects we examined are pursuing deeply exciting new ideas. However, we doubt that decentralized systems alone will address the threats to free expression caused by today’s mega-platforms, for several key reasons.

First, these tools will face challenges acquiring users and gaining the attention of developers…Social networks, in particular, are difficult to bootstrap due to network effects—we join them because our friends are there, not for ideological reasons like decentralization. And while existing social networks have perfected their interfaces based on feedback from millions of users, new social networks are often challenging for new users to navigate.

These platforms also pose new security threats. Decentralized networks generally allow anyone to join and don’t link accounts to real-world identities like phone numbers. These systems often use public key cryptography to ensure account security. But managing public keys is hard for most users, and building software that is both cryptographically secure and easy to use is difficult…

Platforms tend to optimize for advertising revenue, prioritizing attention-grabbing or feel-good content. Designing robust reward mechanisms to curate content that keeps people informed rather than entertained remains a problem. If distributed platforms could solve it, they could theoretically tackle media challenges like echo chambers and filter bubbles, but such dilemmas still present a serious challenge for new systems.

Finally, platforms benefit from economies of scale — it’s cheaper to acquire resources like storage and bandwidth in bulk. And with network effects, which make larger platforms more useful, you have a recipe for consolidation…Market consolidation is also driven by user-targeted advertising models, which encourage hoarding of user views and data, discourage interoperability, and drive platforms to become ever larger.

The #BuyTwitter campaign as a call for change

In 2016, the magnitude of the funding and scaling challenges facing startups hoping to compete with the online giants, coupled with Twitter’s underperformance relative to Wall Street expectations, led Schneider to propose that Twitter be transformed into a user cooperative. Interest in the idea (see here and here) led to the following stockholder proposal, which was put to a vote at Twitter’s annual meeting on May 22, 2017.

Stockholders request that Twitter, Inc. engage consultants with significant experience in corporate governance, preferably including conversion of companies to cooperatives or employee ownership, to prepare a report on the nature and feasibility of selling the platform to its users via a cooperative or similar structure with broad-based ownership and accountability mechanisms. The requested report shall be available to stockholders and investors by October 1, 2017, prepared at reasonable cost and omitting proprietary information.

The rationale for the “Exit to Democratic User Ownership” proposal put up for a vote was:

As Twitter users and stockholders, we see how vital the platform is for global media. We are among millions of users that value Twitter as a platform for democratic voice. And in 2016, we’ve seen Twitter’s future in the balance, from challenges of hate speech and abuse to the prospect of a buyout.

That is why we want the company to consider more fully aligning its future with those whose participation make it so valuable: its users. As of today, 3,300 individuals signed a petition at http://wearetwitter.global urging Twitter to build democratic user ownership.

For successful enterprises like the Green Bay Packers, REI, and the Associated Press, their popularity, resilience, and profitability is a result of their ownership structure. Examples of online companies include successful startups like Managed by Q, which allocates equity to office cleaners, and Stocksy United, a stock-photo platform owned by its photographers.

We believe these models point the way forward for Twitter, Inc., overcoming challenges to thrive as a cooperative platform.

A community-owned Twitter could result in new and reliable revenue streams, since we, as users, could buy in as co-owners, with a stake in the platform’s success. Without the short-term pressure of the stock markets, we can realize Twitter’s potential value, which the current business model has struggled to do for many years. We could set more transparent accountable rules for handling abuse. We could re-open the platform’s data to spur innovation. Overall, we’d all be invested in Twitter’s success and sustainability. Such a conversion could also ensure a fairer return for the company’s existing investors than other options.

According to the #BuyTwitter web site, that proposal garnered 4.9% yes votes, more than the 3% support its advocates had targeted to be eligible to resubmit a stronger proposal in the future.

Encouraging the wisdom of crowds or the fears of mobs?

In considering the #BuyTwitter campaign, it’s worth remembering that Twitter’s 330 million monthly active users are dwarfed by Facebook, which claims 2.1 billion monthly active users plus a range of other popular platforms and services, and by Google, which boasts seven products that have each reached 1 billion users (Google Maps, YouTube, Chrome, Gmail, Search, Google Play and Android), with more than 2 billion monthly active devices running its Android operating system. And Twitter’s revenue is only a small fraction of the revenues generated by Facebook and Google, which together dominate the online advertising market. In short, while Twitter is a substantial player in the platform space, Facebook and Google are in a separate class of dominance, wielding unprecedented social, economic and political power as the go-to platforms relied on by billions of users and the lion’s share of advertisers seeking to attract those users’ attention.

The reality is that any effort aimed at turning Facebook and/or Google into a user cooperative would make the #BuyTwitter campaign seem like a cakewalk in comparison. That being said, these companies’ massive size and dominant positions in search, social media, advertising, AI and other key sectors have repeatedly raised concerns about the social harms associated with that dominance; concerns that have become all the more urgent in the wake of the 2016 U.S. elections and social media-inflamed outbreaks of ethnic and religious violence in countries like Myanmar and Sri Lanka.

In this environment of heightened public awareness and regulatory scrutiny, it strikes me as useful to consider potential steps these giants could take—either voluntarily or by legal mandate—to provide an increased level of democratic control to the “produsers” that contribute so much value to their highly profitable platform enterprises. It is, after all, these users who suffer when these companies’ business models and management systems fail to adequately nurture and leverage the wisdom of crowds, and instead encourage filter bubbles and misinformation that aggravate fear and prejudice and can fray, or even shred, the social fabric on which healthy democracies and just, peaceful societies depend.

I’ll be discussing some such potential steps in subsequent posts.

********

Below is an outline, with links, to all the posts in this series. Unless otherwise noted, bolding in quotations is mine, added for emphasis.

  • Expanding Democratic Governance in the Digital Anthropocene
    • The digital anthropocene: a pivotal & high-risk phase of human history
    • Empathy + technology: a powerful recipe for shared prosperity & peace
    • More (and more effective) democracy as part of the solution
    • The tech sector can help lead the next phase in democracy’s evolution
  • Democracy & Digital Platforms: A Match Made in Heaven or in Hell?
    • The Facebook F-up as a wake-up call
    • Where to look for solutions?
  • Serving Users (to Advertisers to Benefit Shareholders)
    • An IPO + mobile ads: 2012 as a turning point for Facebook
    • Too busy driving growth to focus on privacy?
    • Serving users or serving users to advertisers?
    • Understanding & addressing social harms
  • Data as Power: Approaches to Righting the Balance
    • Our data is tracked & locked in a “black box” we don’t control or understand
    • The EU tightens privacy protections amid mixed signals in the U.S.
    • Platforms as “information fiduciaries”
    • Reallocating power & benefits when users share their data
    • Shifting from an “Attention Economy” to a more efficient “Intention Economy”
    • Who owns and controls the data used to develop AI?
    • Data as labor that should be financially compensated
    • Data as an infrastructural public good
    • A “data tax” that generates a “data dividend” we all share
    • Data portability as means to enhance competition & consumer choice
  • The Power of Dominant Platforms: It’s Not Just About “Bigness”
    • New forms of concentrated power call for new remedies
    • Platforms wield transmission, gatekeeping & scoring power
    • Antitrust needs an updated framework to address platform power
    • Creating a civic infrastructure of checks & balances for the digital economy
  • Democracy & Corporate Governance: Challenging the Divine Right of Capital
    • A “generative” or “extractive” business model?
    • Dethroning kings & capital 
    • Moving beyond capitalism’s aristocratic form
    • Embracing economic democracy as a next-step Enlightenment
  • Platform Cooperativism: Acknowledging the Rights of “Produsers”
    • Reclaiming the Internet’s sharing & democratizing potential
    • Scaling a platform co-op: easier said than done
    • The #BuyTwitter campaign as a call for change
    • Encouraging the wisdom of crowds or the fears of mobs?
  • Interactions Between Political & Platform Systems
    • Feedback loops reinforce strengths & weaknesses, benefits & harms
    • Facebook’s role in the election as an example
    • If we don’t fix government, can government help fix Facebook?
  • A Purpose-Built Platform to Strengthen Democracy
    • Is Zuck’s lofty vision compatible with Facebook’s business model?
    • Designed to bolster democracy, not shareholder returns
  • Democratic Oversight of Platform Management by “Produsers”
    • Facebook, community and democracy
    • Is Facebook a community or a dictatorship?
    • Giving users a vote in Facebook’s governance
    • Technology can help users participate in FB governance
    • Evolving from corporate dictatorship toward digital democracy