Not surprisingly, revelations about the unauthorized release of personal data on 87 million Facebook users to a firm that used the data to design and target manipulative political messages have triggered an upsurge in concern about the adequacy of data privacy at Facebook and, more generally, in the digital economy as a whole. In that one respect this disturbing episode may prove to be a blessing, by focusing public attention on privacy and other issues tied to the data practices of companies (and governments) operating in our increasingly connected and digitally monitored society.
Our data is tracked & locked in a “black box” we don’t control or understand
To a large extent, concerns about privacy reflect a more generalized sense of vulnerability and asymmetry of power in the digital age, as we increasingly rely on and share personal data with giant corporations that, as University of Maryland law professor Frank Pasquale puts it, operate as “black boxes.” In his book, The Black Box Society: The Secret Algorithms That Control Money and Information, Pasquale explains how “[i]mportant corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we— and they— make.”
Pasquale explains how the term “black box” is helpful in understanding and addressing the nature of this asymmetry in transparency and information-based power:
The term “black box” is a useful metaphor,…given its own dual meaning. It can refer to a recording device, like the data-monitoring systems in planes, trains, and cars. Or it can mean a system whose workings are mysterious; we can observe its inputs and outputs, but we cannot tell how one becomes the other. We face these two meanings daily: tracked ever more closely by firms and government, we have no clear idea of just how far much of this information can travel, how it is used, or its consequences…
The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons. That incongruity is the focus of this book. How has secrecy become so important to industries ranging from Wall Street to Silicon Valley? What are the social implications of the invisible practices that hide the way people and businesses are labeled and treated? How can the law be used to enact the best possible balance between privacy and openness?
Issues related to data privacy are complex from a legal and technical perspective, and I won’t attempt to discuss them in detail in this post. Instead, I’m going to briefly summarize a number of approaches intended to strike a healthier balance between data privacy and openness and between citizens and companies.
The EU tightens privacy protections amidst mixed signals in the U.S.
As it turns out, the Facebook/Cambridge Analytica revelations occurred as the European Union was in the final stages of preparing for the May 25, 2018 implementation of its General Data Protection Regulation (GDPR), a new and aggressive set of data privacy-related rules.
In an April 1, 2018 New York Times op-ed piece, former FCC chair Tom Wheeler praised the GDPR as “powerful in its simplicity,” contrasting it with the approach taken in the U.S. under the Trump Administration.
[The GDPR] ensures that consumers own their private information and thus have the right to control its usage and that internet companies have an obligation to give consumers the tools to exercise that control.
The European rules, for instance, require companies to provide a plain-language description of their information-gathering practices, including how the data is used, as well as have users explicitly “opt in” to having their information collected. The rules also give consumers the right to see what information about them is being held, and the ability to have that information erased.
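To make those obligations concrete, here is a minimal Python sketch of how a service might model them: explicit opt-in before any collection, a right of access, and a right to erasure. The class and method names are my own illustration, not drawn from the GDPR text or any real compliance library.

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """Data held about one user; nothing is collected before explicit opt-in."""
    user_id: str
    opted_in: bool = False
    data: dict = field(default_factory=dict)

class DataStore:
    """Illustrative store enforcing GDPR-style opt-in, access, and erasure."""
    def __init__(self):
        self.records = {}

    def opt_in(self, user_id):
        # Record the user's explicit consent before any collection happens.
        self.records.setdefault(user_id, UserRecord(user_id)).opted_in = True

    def collect(self, user_id, key, value):
        # Refuse to store anything about a user who has not opted in.
        record = self.records.get(user_id)
        if record is None or not record.opted_in:
            raise PermissionError("no explicit opt-in; collection not allowed")
        record.data[key] = value

    def export(self, user_id):
        # Right of access: show the user everything held about them.
        return dict(self.records[user_id].data)

    def erase(self, user_id):
        # Right to erasure: delete everything held about the user.
        self.records.pop(user_id, None)

store = DataStore()
store.opt_in("alice")
store.collect("alice", "favorite_color", "blue")
print(store.export("alice"))  # {'favorite_color': 'blue'}
store.erase("alice")
```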
Wheeler’s praise for the GDPR contrasts with his view of privacy protection in the U.S. As he explained, in 2017, the last year of his FCC term, Congress repealed new privacy rules established in 2016 by the FCC. The repeal occurred quietly and on a party-line vote, at a time when the media and public attention were focused on Republicans’ attempt to repeal Obamacare and distracted by the daily flow of presidential tweets and “Russiagate” revelations. As Wheeler notes, lobbying for repeal came not only from access providers like Comcast and AT&T, which were directly affected by the FCC rules; it also came from web giants like Facebook and Google, which were not directly impacted by the rules, but whose business models are heavily dependent on advertising revenues and AI capabilities built on a foundation of user data.
Wheeler also pointed to the weakness of Federal Trade Commission (FTC) oversight of privacy issues, which currently applies to companies like Facebook and Google. As he explained, these rules “merely require internet companies to have a privacy policy available for consumers to see,” and “a company can change that policy whenever it wants as long as it says it is doing so.”
Wheeler’s view of the FTC as a relatively toothless platform regulator may soon be tested as the agency (which, after a long wait, finally has its full slate of five Commissioners) moves forward with its investigation of the release of Facebook user data to Cambridge Analytica.
In a post on the Harvard Law Review blog, David Vladeck, who headed the Commission’s Bureau of Consumer Protection when the FTC’s 2011 consent decree with Facebook was negotiated, and who is now faculty director of Georgetown Law’s Center on Privacy and Technology, suggests the investigation will find violations of that agreement. In an op-ed published at Jurist.org, Chris Hoofnagle, adjunct professor at the UC Berkeley School of Law and School of Information, and author of Federal Trade Commission Privacy Law and Policy, sounded less convinced that clear violations will be proven. But Hoofnagle does view the FTC investigation as “likely to uncover new, unrelated wrongdoing that will give Facebook strong incentives to agree to broader terms and even pay penalties.” He also recommended that the FTC tailor its interventions to account for the “dataism” ideology embraced by Facebook’s leaders, in part by holding them personally liable for deceptive acts related to the company’s privacy-related practices. This latter recommendation was also included as one of nine FTC regulatory steps proposed in a March 22, 2018 Guardian commentary by Barry Lynn and Matt Stoller of the Open Markets Institute.
In the run-up to its implementation, questions abound regarding how EU regulators will interpret and enforce the GDPR, how companies will attempt to satisfy its requirements, and how these two dynamics will interact, including regulatory penalties, legal challenges to enforcement, and impacts (both intended and unintended) on privacy protection, market dynamics and the use of digital services.
With Facebook having rolled out some privacy-related changes in advance of the GDPR’s implementation date, some critics have expressed pre-launch skepticism about the social network’s level of compliance (e.g., see here and here), and also about the overall readiness of companies to satisfy GDPR requirements.
As Europe moves forward with GDPR enforcement, its experience is likely to generate useful lessons for policymakers and companies involved in the digital economy around the world. At the same time, other models are being proposed as a means to strike a healthier balance between privacy and openness, and between the power of digital platforms and their users. I briefly discuss some of these below.
Platforms as “information fiduciaries”
Yale law professor Jack Balkin has suggested the concept of “information fiduciary” as an effective legal tool for achieving this healthier balance. In a blog post that later evolved into a law review paper, Balkin explained the rationale for this approach and how it might work in practice.
Traditionally, a fiduciary is a person who has a relationship of trust with a party (the beneficiary), and who is authorized to hold something valuable – for example, the beneficiary’s assets or other property – and manage them on the beneficiary’s behalf. Fiduciaries have duties of loyalty and of care...The fiduciary’s duty of loyalty may…create a duty of honesty to disclose to the beneficiary how the fiduciary is handling the assets or property. Usually the duty of loyalty also requires that the fiduciary avoid creating conflicts of interest between the fiduciary and beneficiary, and also includes a duty against self-dealing – i.e., using the beneficiary’s assets to benefit the fiduciary because of the danger that the assets will be used to the beneficiary’s detriment…
[S]uppose that an online service provider is an information fiduciary. Then the OSP has a duty not to use its end users’ personal information against the end users’ interests, even without an explicit contractual promise. That fiduciary duty might be recognized by the common law, or it might be fleshed out by statute or administrative regulation, as it often is in the case of the professions…The [information] fiduciary relationship creates a duty that, in this particular context, trumps the interest in freedom of expression…
A fiduciary duty would limit the rights the company would otherwise enjoy to collect, collate, use and sell personal information about the end user…The online service provider would…have to consider whether its information practices created a conflict of interest and act accordingly. Moreover, the online service provider’s duties of loyalty and care might require it to disclose how it was using the customer’s personal information…
According to a Verge article by Russell Brandom, Balkin sees the information fiduciary approach as providing more potent privacy protection than the consent-based approach emphasized in the GDPR as well as the CONSENT Act proposed in the U.S. by Democratic senators Markey and Blumenthal.
Balkin says [the consent-based] approach is too easy for platforms to game. “It’s very easy to get consent from end users,” Balkin says. “They’ll just click and go. So consent-based reforms often look really great on paper but don’t have any practical effect.” Even if we add mandatory opt-ins for data collection (as in the Markey Bill) or clearer descriptions of how data is used (as mandated by the GDPR), there’s a good chance users will simply click through the warnings without reading them.
Balkin’s fiduciary approach would attack the problem from a different angle. Instead of counting on users to understand the data they’re sharing, it establishes up front that services are in a privileged position and bear the blame if things go wrong. In some ways, this is already how Facebook talks about its relationship with users. Over and over again this week, Zuckerberg talked about earning users’ trust, and how the platform only works when users trust Facebook with their data. Balkin’s fiduciary rule would put that trust in legal terms: establishing that Facebook users have no choice but to share data with Facebook, and as a result, requiring that the company be careful with that data and not employ it against the user’s interest. If Facebook failed to uphold those duties, they could be taken to court, although the nature of the proceeding and the potential penalties would depend on how the rule is written.
Reallocating power & benefits when users share their data
As I see it, there are two general categories of value generated by the sharing of personal data enabled by the world’s increasingly ubiquitous digital connectivity. One category is tied directly to individual-level desires, preferences and commercial and non-commercial interactions. The second is related more to the kind of mass-level data collection and analysis involved in developing artificial intelligence (AI) capabilities, especially those based on machine learning (ML).
For example, if I want to buy a new car that fits my budget and personal preferences, the data most directly relevant to my purchase decision is that which best enables the accurate and efficient matching of my individual buyer characteristics to the characteristics of available cars.
At the other end of the spectrum are data collection and analysis activities that involve much larger amounts of data and are more likely to have much broader social impacts than simply improving the efficiency of a specific market interaction. As Evgeny Morozov notes in a March 31, 2018 Guardian piece:
[The] full value of [some] data emerges only once it’s aggregated across many individuals…a lot of the data that we generate, when we walk down a tax-funded city street equipped with tax-funded smart street lights, is perhaps better conceptualised as data to which we might have social and collective use rights as citizens, but not necessarily individual ownership rights as producers or consumers.
In the remainder of this post I’ll be discussing potential approaches to these two data categories, all of which are grounded in the principle that platform users should have greater control over the use of their personal data and benefit more from that usage.
Shifting from an “Attention Economy” to a more efficient “Intention Economy”
In 2012, the year Facebook went public and began selling mobile ads, Doc Searls published The Intention Economy: When Customers Take Charge. Searls had earlier co-authored the 1999 early-Internet-era classic The Cluetrain Manifesto. In The Intention Economy he critiques, and offers an alternative to, today’s digital Attention Economy, whose advertising technologies, he recently explained, have become even more sophisticated and invasive since the book was written. As Searls puts it:
[W]hy build an economy around Attention, when Intention is where the money comes from?…The Intention Economy grows around buyers, not sellers. It leverages the simple fact that buyers are the first source of money, and that they come ready-made. You don’t need advertising to make them…The Intention Economy is about markets, not marketing. You don’t need marketing to make Intention Markets…In the Intention Economy, the buyer notifies the market of the intent to buy, and sellers compete for the buyer’s purchase. Simple as that.
Key to building an Intention Economy, explains Searls, is the development of what he calls Vendor Relationship Management (VRM) tools.
These tools will…become the means by which individuals control their relationships with multiple social networks and social media…Relationships between customers and vendors will be voluntary and genuine, with loyalty anchored in mutual respect and concern, rather than coercion. So, rather than “targeting,” “capturing,” “acquiring,” “managing,” “locking in,” and “owning” customers, as if they were slaves or cattle, vendors will earn the respect of customers…[R]ather than guessing what might get the attention of consumers—or what might “drive” them like cattle—vendors will respond to actual intentions of customers…Customer intentions, well expressed and understood, will improve marketing and sales, because both will work with better information, and both will be spared the cost and effort wasted on guesses about what customers might want, flooding media with messages that miss their marks.
Searls’ work with colleagues at Harvard’s Berkman Klein Center and elsewhere led to the creation of ProjectVRM, which in turn gave rise to Customer Commons. The latter’s mission is “to restore the balance of power, respect and trust between individuals and organizations that serve them.” As its website explains:
Customer Commons holds a vision of the customer as an independent actor who retains autonomous control over his or her personal data, desires and intentions. Customers must also be able to assert their own terms of engagement, in ways that are both practical and easy to understand for all sides.
In a November 18, 2016 Medium post, Searls provided an update on ProjectVRM’s efforts to develop software and services that “make customers both independent and better able to engage with business.” He also noted that the VRM community is poised to move on to a second phase of development, and that this effort could scale up more quickly “if the investment world finally…recognizes how much more value will come from independent and engaging customers than from captive and dependent ones.” And that shift in investor sentiment, he suggested, may be aided by the recognition “that the great edifice of guesswork ‘adtech’ has become is about to get burned down by regulation anyway.”
Searls’ last comment ties back to the impending implementation of Europe’s GDPR, which he describes as “the world’s most heavily weaponized law protecting personal privacy.” Its purpose, he says, “is to blow away the (mostly US-based) surveillance economy, especially tracking-based “adtech,” which supports most commercial publishing online.”
But Searls also sees “a silver lining for advertising in the GDPR’s mushroom cloud, in the form of the oldest form of law in the world: contracts.” To make his point he provides a simple example:
[I]f an individual proffers a term to a publisher that says:

“just show me ads not based on tracking me”

—and that publisher agrees to it, that publisher is compliant with the GDPR, plain and simple.
In a post on the Berkman Klein Center’s VRM blog, Searls argues that this simple contractual agreement, in addition to complying with the GDPR and any similar regulation in the U.S. or other countries, will also begin to rebalance the “asymmetric power relationship between people and publishers called client-server.” In language reminiscent of The Cluetrain Manifesto, Searls explains that:
Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.
It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing to the Net that prevents each of us from having plenty of power on our own…In legal terms, we can operate as first parties rather than second ones. In other words, the sites of the world can click “agree” to our terms, rather than the other way around.
Searls goes on to explain how Customer Commons and the Linux Journal, where he currently serves as editor-in-chief, are taking initial steps to implement this vision:
Customer Commons is working on [developing] those terms. The first publication to agree to readers’ terms is Linux Journal, where I am now the editor-in-chief. The first of those terms will say “just show me ads not based on tracking me,” and is hashtagged #DoNotByte.
Noting that the approach of Customer Commons is based in part on the copyright models developed earlier by Creative Commons (which was also incubated at the Berkman Klein Center), Searls explains that Customer Commons’ personal privacy terms will come in three forms of code: Legal, Human Readable and Machine Readable.
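Searls doesn’t show what the Machine Readable form will look like. Purely as an illustration, a “#DoNotByte”-style term might be serialized as a small structured document that a publisher’s server could parse before “agreeing” to it; every field name below is hypothetical, as Customer Commons had not yet published a schema.

```python
import json

# Hypothetical machine-readable form of a Customer Commons-style term.
# All field names are illustrative; this is not a published schema.
do_not_byte = {
    "term": "#DoNotByte",
    "proffered_by": "site visitor (first party)",
    "summary": "just show me ads not based on tracking me",
    "prohibits": ["third-party tracking", "tracking-based ad targeting"],
    "permits": ["contextual ads not based on tracking"],
}

# What a publisher might parse and log before clicking "agree" to the visitor's term.
print(json.dumps(do_not_byte, indent=2))
```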
Who owns and controls the data used to develop AI?
While the Information Fiduciary and Customer Commons models hold promise for increasing trust and rebalancing power in the relationship between individual users and online platforms and marketers, other models may be particularly well suited to address issues tied to the collection of the massive amounts of data required to drive the evolution of ML-based AI technologies and systems.
My initial research suggests there are at least two broad approaches to ownership and control that could apply here. One approach, discussed in a five-page paper entitled Should We Treat Data as Labor? Moving Beyond “Free” and an upcoming book entitled Radical Markets: Uprooting Capitalism and Democracy for a Just Society, would treat the contribution of user-generated data as the equivalent of labor, with terms and compensation established by market-based mechanisms and institutional arrangements that support the evolution and efficient functioning of a data-as-labor market. A different approach, advocated by author Evgeny Morozov and Facebook co-founder Chris Hughes, envisions data ownership rights as (per Morozov) “social and collective use rights as citizens, but not necessarily individual ownership rights as producers or consumers.”
Data as labor that should be financially compensated
In an article on the Brookings Institution website, the authors of the Should We Treat Data as Labor? paper explain the context and rationale for their proposal:
Many fear that Artificial Intelligence (AI) will end up replacing humans in employment – which could have huge consequences for the share of national income going to these displaced workers. In fact, companies in all sorts of industries are increasingly requiring less labor to do the same amount of work. How much work will end up being displaced by robots is still unknown, but as a society we should worry about what the future will look like when this happens. The paper’s main contribution is a proposal to treat data as labor, instead of capital owned by these tech firms. We think this might be a way to provide income and a new source of meaning to people’s lives in a world where many traditional occupations no longer exist.
In a New York Times article entitled Your Data Is Crucial to a Robotic Age. Shouldn’t You Be Paid for It?, Eduardo Porter discusses the themes raised in the paper and the book, citing the latter’s authors, Eric A. Posner of the University of Chicago Law School and E. Glen Weyl, principal researcher at Microsoft (Weyl is also one of the five authors of the paper).
Data is the crucial ingredient of the A.I. revolution…”Among leading A.I. teams, many can likely replicate others’ software in, at most, one to two years,” notes the technologist Andrew Ng. “But it is exceedingly difficult to get access to someone else’s data. Thus data, rather than software, is the defensible barrier for many businesses.”
We may think we get a fair deal, offering our data as the price of sharing puppy pictures. By other metrics, we are being victimized: In the largest technology companies, the share of income going to labor is only about 5 to 15 percent, Mr. Posner and Mr. Weyl write. That’s way below Walmart’s 80 percent. Consumer data amounts to work they get free.
“If these A.I.-driven companies represent the future of broader parts of the economy,” they argue, “without something basic changing in their business model, we may be headed for a world where labor’s share falls dramatically from its current roughly 70 percent to something closer to 20 to 30 percent.”
Citing the significant monopsony power enjoyed by online giants like Google, Facebook and Amazon, the paper suggests that building a strong “data as labor” component in the digital economy will require some form of “countervailing power by large scale social institutions.” It goes on to suggest three possible avenues for such countervailing power: competition, “data labor unions” and government, concluding that “all three of these factors must coordinate for [the data as labor model] to succeed, just as in historical labor movements.”
Data as an infrastructural public good
A different approach to ownership and control of data generated by connected citizens and used to develop AI technologies is to treat it as a shared social good. This view has been put forth by Evgeny Morozov in a series of opinion pieces in the Guardian. In a December 3, 2016 column Morozov described data as “an essential, infrastructural good that should belong to all of us; it should not be claimed, owned, or managed by corporations.”
Enterprises should, of course, be allowed to build their services around it but only once they pay their dues. The ownership of this data – and the advanced AI built on it – should always remain with the public. This way, citizens and popular institutions can ensure that companies do not hold us hostage, imposing fees for using services that we ourselves have helped to produce. Instead of us paying Amazon a fee to use its AI capabilities – built with our data – Amazon should be required to pay that fee to us.
In a later Guardian piece, published July 1, 2017, Morozov explains a bit more of his vision:
All of the nation’s data, for example, could accrue to a national data fund, co-owned by all citizens (or, in the case of a pan-European fund, by Europeans). Whoever wants to build new services on top of that data would need to do so in a competitive, heavily regulated environment while paying a corresponding share of their profits for using it. Such a prospect would scare big technology firms much more than the prospect of a fine.
Morozov continues to sketch out his vision of data as a public infrastructure good in a March 31, 2018 Guardian piece published in the wake of the Facebook-Cambridge Analytica revelations.
[W]e can use the recent data controversies to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data. These institutions will organise various data sets into pools with differentiated access conditions. They will also ensure that those with good ideas that have little commercial viability but promise major social impact would receive venture funding and realise those ideas on top of those data pools.
A “data tax” that generates a “data dividend” we all share
In an April 27, 2018 Guardian piece, Chris Hughes, a Facebook co-founder, proposed an approach similar to Morozov’s “data as infrastructural public good” model.
The gist of Hughes’ proposal is to combine a “data tax” with a “data dividend” distributed to citizens. As a potential model for such an approach he cites Alaska’s Permanent Fund Dividend:
There is a template for how to do this. In Alaska, unlike in the lower 48 states, the rights to minerals, oil and natural gas, are owned by the state, and not by any single landowner. At the moment of the oil boom in the 1970s in Alaska, a Republican governor there forged an agreement between the public and the oil companies: you are welcome to profit from our natural resources, but you must share some of the wealth with the people. He created a savings account for all Alaskans called the Permanent Fund, and voters approved it overwhelmingly in a statewide referendum.
Oil companies pay a significant portion of their gross revenues to the state, and a portion of that money is earmarked to fund a savings account for the people…While oil and gas companies have thrived in the state, the Permanent Fund Dividend has dramatically reduced the number of people living in poverty in Alaska and is a major reason Alaska has the lowest levels of income inequality in the nation.
In the case of the data dividend, any large company making a significant portion of its profits from data that Americans create could be subject to a data tax on gross revenues. This would encompass not only Facebook and Google, but banks, insurance companies, large retail outlets, and any other companies that derive insights from the data you share with them. A 5% tax, even by a conservative estimate, could raise over $100bn a year. If the dividend were distributed to each American adult (although one could argue teenagers should be included given their heavy internet use), each person would receive a check for about $400 per year.
The amount of data we produce about ourselves and the profits from it would almost certainly grow in coming years, causing the fund to grow very large, very fast. You could easily imagine each individual receiving well over $1,000 a year in just the next decade. Unlike oil, this data is not an exhaustible resource, enabling the fund to disburse the total revenues each year.
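Hughes’ arithmetic is easy to sanity-check: a 5% tax raising $100bn implies roughly $2tn in taxable data-derived gross revenue, and $100bn spread across roughly 250 million American adults comes to about $400 apiece. The adult-population figure below is my assumption, not a number Hughes gives.

```python
# Back-of-envelope check of Hughes' data-dividend numbers.
tax_rate = 0.05
revenue_raised = 100e9   # Hughes' conservative estimate: $100bn per year
us_adults = 250e6        # assumed U.S. adult population (~250 million)

implied_base = revenue_raised / tax_rate  # gross revenue the tax would apply to
dividend = revenue_raised / us_adults     # annual payout per adult

print(f"implied taxable revenue: ${implied_base / 1e12:.1f} trillion")  # ~$2.0 trillion
print(f"annual dividend per adult: ${dividend:.0f}")                    # ~$400
```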
To close out his Guardian piece, Hughes cites three key questions that need further exploration regarding his proposal: 1) is data the right thing to tax; 2) how do you define which companies would be subject to the tax; and 3) how do you ensure the tax doesn’t become a justification for giving up on other regulation?
Data portability as a means to enhance competition & consumer choice
Another potential model for righting the balance of data-related power that today overwhelmingly favors digital platforms is what has come to be known as “data portability.”
In a June 30, 2017 New York Times opinion piece, University of Chicago business school professors Luigi Zingales and Guy Rolnik laid out the basic argument for a data portability model. They started by noting that Google’s 90% market share in search and Facebook’s penetration of 89% of Internet users are manifestations of powerful network effects that tend to pull these markets toward a monopoly.
According to Zingales and Rolnik, a relevant model for addressing this tendency toward monopoly is the telecom sector’s “number portability” rules. By making it easier for mobile phone customers to switch carriers, these rules contributed to increased market competition and price reductions. “The same is possible,” they claimed, “in the social network space.”
It is sufficient to reassign to each customer the ownership of all the digital connections that she creates — what is known as a “social graph.” If we owned our own social graph, we could sign into a Facebook competitor — call it MyBook — and, through that network, instantly reroute all our Facebook friends’ messages to MyBook, as we reroute a phone call.
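To see what “owning your social graph” might mean in practice, here is a minimal sketch of a portable export a user could hand to a competing network. The structure and field names are hypothetical, since no such portability standard existed.

```python
# Hypothetical portable social-graph export, handed from Facebook to "MyBook".
# Structure and field names are illustrative only.
social_graph = {
    "owner": "user:alice",
    "connections": [
        {"id": "user:bob", "relation": "friend", "since": "2014-06-01"},
        {"id": "page:acme-fans", "relation": "follows"},
    ],
    # Analogous to number portability: friends' messages get rerouted here.
    "routing": {"deliver_to": "https://mybook.example/inbox/alice"},
}

for c in social_graph["connections"]:
    print(f"{c['id']} -> reroute via {social_graph['routing']['deliver_to']}")
```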
While the data portability model sounds appealing in principle, a number of experts are skeptical about the extent to which it can have the level of impact on competition that the telecom industry’s number portability rules have had.
For example, Joshua Gans points out that “social graphs” are far more complex and dynamic than telephone numbers, making the “portability” process more challenging.
Some of my Facebook posts are public and I read many public posts from the media, fan groups and companies. That is all part of my social graph but how would we work all of that? That said, there may be solutions there. The larger issue is how these links work is constantly evolving yet having a consumer controlled social graph may make it difficult to be responsive. After all, think about how you manage the social graph that is your pre-programmed fast dial numbers on a phone (if you even do those things). They quickly go out of date and you can’t be bothered updating them.
Will Rinehart argues that the data portability model, as described by Zingales and Rolnik, misunderstands what makes data valuable and what gives the dominant online platforms their power.
Contrary to the claims of portability proponents…it isn’t data that gives Facebook power. The power rather lies in how this data is structured, processed, and contextualized. What Mark Zuckerberg has called the social graph is really his term for a much larger suite of interrelated databases and software tools that helps to analyze and understand the collected data…Requiring data portability does little to deal with the very real challenges that face the competitors of Facebook, Amazon, and Google. Entrants cannot merely compete by collecting the same kind of data. They need to build better sets of tools to understand information and make it useful for consumers.
MIT researchers Chelsea Barabas, Neha Narula and Ethan Zuckerman have also concluded that the practical challenges facing social network startups are substantial, multifaceted and extend well beyond the issue of data portability. In an article based on their study of “several of [the] most promising efforts to ‘re-decentralize’ the web,” they discuss the mix of challenges facing these startups.
While they cite a lack of interoperability and the “hoarding of user views and data” as factors that give dominant platforms a competitive advantage, the MIT researchers found these were only some of the challenges facing social network startups and driving the market toward monopolization.
We join [social networks] because our friends are there…And while existing social networks have perfected their interfaces based on feedback from millions of users, new social networks are often challenging for new users to navigate.
Other startup challenges cited by the MIT researchers include managing security threats and higher costs relative to larger incumbents that benefit from economies of scale in the acquisition of key resources like storage and bandwidth.
Though these experts’ comments point to the limits of data portability as a stimulant of successful platform competition, Gans suggests that moving in this direction can and should be part of a broader solution aimed at striking a healthier balance of data-related rights, power and benefits.
In terms of social graph, consumers surely have a right to share information they have provided Facebook with others, and Facebook should probably make that easy even if it falls short of some portability proposal.
######
As the above discussion hopefully makes clear, there are a number of promising approaches to achieving a healthier balance of rights, power and benefits related to the collection and use of data generated by and about citizens. Given recent events, it seems timely for policymakers in the U.S. and other countries to join with tech industry leaders and experts, and other digital economy stakeholders, in a serious and ongoing dialog about the relative strengths, weaknesses and compatibility of these approaches. This dialog should also take into account the lessons learned from Europe’s experience as it attempts to address these issues via the GDPR. It should also strive for some measure of consensus on how best to achieve this rebalancing of power and benefits.
In subsequent posts I’ll be switching gears from a focus on specifically data-related issues to a broader consideration of problems and potential remedies related to the power of digital platforms, the functions and outcomes of democracy in both the political and economic spheres, and the interactions between these two important issues.
********
Below is an outline, with links, to all the posts in this series. Unless otherwise noted, bolding in quotations is mine, added for emphasis.
- Digital Platforms & Democratic Governance: Standing at an Historic Crossroads
- The digital anthropocene: a pivotal & high-risk phase of human history
- Empathy + technology: a powerful recipe for shared prosperity & peace
- More (and more effective) democracy as part of the solution
- The tech sector can help lead the next phase in democracy’s evolution
- The Facebook F-Up as a Wake-Up Call
- A growing awareness of problems
- Where to look for solutions?
- Serving Users (to Advertisers to Benefit Shareholders)
- An IPO + mobile ads: 2012 as a turning point for Facebook
- Too busy driving growth to focus on privacy?
- Serving users or serving users to advertisers?
- Understanding & addressing social harms
- Data as Power: Approaches to Righting the Balance
- Our data is tracked & locked in a “black box” we don’t control or understand
- The EU tightens privacy protections amidst mixed signals in the U.S.
- Platforms as “information fiduciaries”
- Reallocating power & benefits when users share their data
- Shifting from an “Attention Economy” to a more efficient “Intention Economy”
- Who owns and controls the data used to develop AI?
- Data as labor that should be financially compensated
- Data as an infrastructural public good
- A “data tax” that generates a “data dividend” we all share
- Data portability as a means to enhance competition & consumer choice
- The Power of Dominant Platforms: It’s Not Just About “Bigness”
- New forms of concentrated power call for new remedies
- Platforms wield transmission, gatekeeping & scoring power
- Antitrust needs an updated framework to address platform power
- Creating a civic infrastructure of checks & balances for the digital economy
- Democracy & Corporate Governance: Challenging the Divine Right of Capital
- A “generative” or “extractive” business model?
- Dethroning kings & capital
- Moving beyond capitalism’s aristocratic form
- Embracing economic democracy as a next-step Enlightenment
- Platform Cooperativism: Acknowledging the Rights of “Produsers”
- Reclaiming the Internet’s sharing & democratizing potential
- Scaling a platform co-op: easier said than done
- The #BuyTwitter campaign as a call for change
- Encouraging the wisdom of crowds or the fears of mobs?
- Interactions Between Political & Platform Systems
- Feedback loops reinforce strengths & weaknesses, benefits & harms
- Facebook’s role in the election as an example
- If we don’t fix government, can government help fix Facebook?
- A Purpose-Built Platform to Strengthen Democracy
- Is Zuck’s lofty vision compatible with Facebook’s business model?
- Designed to bolster democracy, not shareholder returns
- Democratic Oversight of Platform Management by “Produsers”
- Facebook, community and democracy
- Is Facebook a community or a dictatorship?
- Giving users a vote in Facebook’s governance
- Technology can help users participate in FB governance
- Evolving from corporate dictatorship toward digital democracy