The Information Trade: How Big Tech Conquers Countries, Challenges Our Rights, and Transforms Our World

In this timely, provocative, and ultimately hopeful book, a widely respected government and tech expert reveals how Facebook, Google, Amazon, Tesla, and other tech giants are disrupting the way the world works, and outlines the growing risk they pose to our future if we do not act to contain them.

Today’s major technology companies—Google, Facebook, Amazon, Tesla, and others—wield more power than national governments. Because of their rising influence, Alexis Wichowski, a former press official for the State Department during the Obama administration, has rebranded these major tech companies as “net states.”

In this comprehensive, engaging, and prescriptive book, she considers their growing and unavoidable influence in our lives, showing in eye-opening detail how these net states are conquering countries, disrupting reality, and jeopardizing our future—and what we can do to regulate and reform the industry before it does irreparable harm to the way we think, how we act, and how we’re governed. Combining original reporting and insights drawn from more than 100 interviews with technology and government insiders, including Microsoft president Brad Smith, former Google CEO Eric Schmidt, the former Federal Trade Commission chair under President Obama, the co-founder of the Center for Humane Technology, and the managing director of Jigsaw—Google’s department of counterterrorism against extremism and cyber-attacks—The Information Trade explores what happens when we cede our power to them, willingly trading our personal freedom and individual autonomy for an easy, plugged-in existence.

Neither an industry apologist nor a fearmonger, Wichowski reminds us that we are not helpless victims; we still control our relationship with the technologies and the companies behind them. Most important, she shows us how we can curtail and control net states in practical, actionable ways—and makes urgently clear what’s at stake if we don’t.
Dedication


To Jonathan, for being mine.

To Gerome, Novi, & Leo, for being all your own.





Epigraph


Where, after all, do universal human rights begin?

In small places, close to home—so close and so small

that they cannot be seen on any maps of the world . . .

Unless these rights have meaning there,

they have little meaning anywhere.

Without concerned citizen action to uphold them close to home,

we shall look in vain for progress in the larger world.

—ELEANOR ROOSEVELT,

former first lady of the United States

We don’t completely blame Facebook.

The germs are ours, but Facebook is the wind,

you know?

—HARINDRA DISSANAYAKE,

Sri Lankan presidential adviser





Contents


Cover

Title Page

Dedication

Epigraph

Introduction

One: Rise of the Citizen-User

Two: Net States IRL

Three: Privacy Allies and Adversaries

Four: Information-Age Warfighters

Five: A Great Wall of Watchers

Six: The All-Knowing Internet of Things

Seven: The Mind, Immersed

Eight: A Declaration of Citizen-User Rights

Conclusion: The Net State Pattern

Acknowledgments

Appendix

Notes

About the Author

Copyright

About the Publisher





Introduction


On January 9, 2007, 45,000 software developers, computer engineers, and everyday tech enthusiasts gathered in Silicon Valley’s go-to conference spot, San Francisco’s Moscone Center, a three-story, glass-enclosed conference space that shared a block with the Yerba Buena Ice Skating and Bowling Center and the Dosa Brothers Indian restaurant. The occasion: the 22nd annual Macworld Expo. The highlight: bearing witness to their patron saint, Apple visionary Steve Jobs.1

Wearing his signature uniform—black turtleneck, wire-frame glasses, white sneakers, and blue jeans—Jobs took to the stage. A giant backlit Apple logo loomed on a wall-size screen behind him.

Twenty-two minutes into a speech sprinkled with updates about various Apple products, Jobs stopped. A moment of silence passed. “This is a day I’ve been looking forward to for the past two and a half years,” he announced.2 Scattered applause peppered the room, but Jobs waved it away.

With something like defiance, he declared, “The most advanced phones are called ‘smartphones,’ so they say.” The audience burst into laughter. In 2007, when most people still carried flip phones and PDAs, the very notion of such a thing seemed absurd. Jobs went on to blast then-current “smartphones”—BlackBerries and Nokias, namely—as being difficult to navigate, even for basic functions. “What we want to do is make a product that’s way smarter than any cell phone and that’s easy to use. This is what iPhone is.”

The iPhone launch is worth cherishing. It may very well have been our last mass-magical tech moment, a time when the entire world got truly excited over a technological breakthrough.

This was a time before tech got scary.

It was almost four years before WikiLeaks released 251,287 diplomatic cables to the press, which contributed to the bloody and largely unsuccessful Arab Spring and drove home the terrible power and scale of leaks now possible in the digital age.3

It was six years before Edward Snowden’s revelations shattered public trust in the US government by unveiling the National Security Agency’s (NSA) covert mass data collection program, which sought info on American citizens.

It was almost a decade before Russia’s Internet Research Agency infiltrated the 2016 US presidential election through misinformation warfare, peeling away the belief that our social networks consisted of our friends or, at the very least, our compatriots.

And it was eleven years before Facebook was outed for giving political consulting firm Cambridge Analytica access to 87 million users’ data, finally tipping the world’s wide-scale disillusionment with the tech industry into outright anger.4

In 2007, we still loved our tech and its keepers. The proof is in the purchases. Half a million people bought iPhones the first weekend they were available.5 Buyers lined up around the US—for days, in some places.

“I feel wonderful. It’s exhilarating,” reported 51-year-old engineer David Jackson as he finally held an iPhone in his hands, having waited in line more than 24 hours for the moment. “Man, that was cool. I was shaking at the counter. I couldn’t even sign my name.”6

With the iPhone, Apple gave us what seemed like one of the greatest godsends of the digital era: a keyboardless, full-color, internet-enabled, do-everything device—one that was pretty and sleek and fit in your pocket, to boot.

We may not have recognized it at the time, but Apple did more with the iPhone than create a next-generation personal computer. They created the first wearable computer: a device that you could keep on your body, in your pocket, at all times. In 2019, this was a reality for roughly 2 billion smartphone users, whether they carried an iPhone or its chief competitor, an Android (Google) phone.7 The smartphone didn’t just make life easier; it didn’t just make us, as Apple’s ’90s-era slogan urged, “think different.” It made life different.


ALMOST EXACTLY 10 YEARS AFTER JOBS INTRODUCED THE IPHONE TO the world, another tech luminary addressed a similarly massive audience at the Moscone Center—for quite different reasons.

On Valentine’s Day 2017, Brad Smith—the affable, sandy-haired president of Microsoft—took the stage at the annual RSA Conference, the tech industry’s premier security conference. “Cyberspace,” he declared, “is the new battlefield.”8

“The world of potential war,” he warned, “has migrated from land to sea to air and now cyberspace. As a global technology sector, we need to pledge that we will protect customers.” He paused. “We will focus on defense.”

Let’s take a moment to digest this. The president of Microsoft—Microsoft, the company whose products are virtually synonymous with corporate cubicle culture—announced to 40,000 of the tech industry’s frontline programmers that they were, for all intents and purposes, at war.

“Because when it comes to attacks in cyberspace, we not only are the plane of battle, we are the world’s first responders.” He continued, “Instead of nation-state attacks being met by responses from other nation-states, they are being met by us.”

Let’s see that again: “They are being met by us.”

Who is “us”?

Smith was talking about something new—some higher-order embodiment of digital power. These new entities are tech companies’ next stage of evolution, a giant technological leap from Jobs’s iPhone.

These tech entities are no longer simply making spreadsheet software and calendar apps and gadgets. They are battlefields. They are weapons. And, most important, in this speech Smith declared that these new entities should be—must be—a force for good.

The problem here is that no one knows what to call these new things. As I first proposed in a 2017 WIRED article, we should call them “net states.”9

Why not just keep calling them “the tech industry”? The short answer is that the tech industry is no monolith in which every company pursues the same goals with the same business practices.

As hard as it may be to think of the world’s newest industry as traditional in any way, a handful of “traditional” companies have undergone a metamorphosis. And, in the same way we don’t keep calling butterflies “caterpillars” once they’ve transformed, these particular companies—Amazon, Apple, Facebook, Google, Microsoft, and Tesla, specifically—have morphed into something altogether different from “the tech industry.”

They no longer only make products and offer services. They’re reaching beyond their core technologies to assert themselves in our physical world. They’re inserting digital services into our lived environments in ways both unseen and, at times, unknown to us. And, most important, they’re exerting formidable influence over the way our world works on individual, societal, and geopolitical levels. These tech companies are unlike anything we’ve encountered before.

Net states vary in size and structure but generally exhibit four key qualities: They enjoy an international reach. Their core work is based in technology. Their pursuits are influenced, to a meaningful degree, by beliefs, not just a bottom line. And, perhaps most significant, they’re actively working to expand into areas formerly the domain of governments, areas that fall outside their primary products and services—areas they pursue at times separate from and even above the law.

Simply put, net states are not just out to make widgets or get people hooked on a single product. (This is why Tesla and its world-building businesses are included in the book and Twitter, with its single, stand-alone platform, is not.) Net states are out to change the world—not just in theory, but in defense, diplomacy, public infrastructure, and citizen services.

Net states are tech entities that act like countries. By acting like countries, net states alter our experiences as citizens. And they alter countries’ experiences as geopolitical powers.

Two examples—Silk Road and Project Maven—show this in action.


“IT IS WITH A HEAVY HEART THAT I COME BEFORE YOU TODAY,” WROTE a user, code-named Libertas, in his farewell letter.10 “A heart filled with sadness for the infringements of our freedoms by government oppressors.” He wrote, “Silk Road has fallen.”

Libertas was the Roman embodiment of freedom and the inspiration for the Statue of Liberty. It was also the pseudonym Gary Davis used on Silk Road—not the historic trading route through Central Asia, but the illegal marketplace on the dark web that freely sold everything from drugs to hacking-for-hire services to humans, from its launch in January 2011 to October 2013, when the FBI shut it down.11

Davis worked as a low-level site administrator for Silk Road—the Mafia equivalent of a bookie. Admin though he was, Davis hardly looked like a stereotypical hacker, showing up at his trial sporting a trim suit, a well-groomed chinstrap beard, and a seemingly well-rehearsed thousand-yard stare.12 While Davis was a minor figure in the Silk Road case, Silk Road itself was a major problem that had dogged the FBI for years. It thrived in plain view of the authorities, selling illegal wares to consumers who spent more than $1.2 billion in its two-plus years of operation.13 Yet with all transactions conducted via Bitcoin, the authorities couldn’t figure out who was running it. When the FBI finally cracked the case, they scooped up everyone they could find associated with the site, including lowly admins like Davis.

On December 19, 2013, at 8 p.m., at the behest of the FBI, Irish authorities swooped into Davis’s hometown of Wicklow, a sleepy seaside town about an hour southwest of Dublin.14 Finally, after two long years of failed attempts to shut down the site, it looked as if the Silk Road case was under control: the FBI had found their suspects, and it was only a matter of time before they gathered the material evidence needed to put them away.

Then the investigation hit a brick wall. While Davis used the encrypted Tor browser for his Silk Road–related work, he preferred Microsoft for his personal emails, which meant that Microsoft, whether they were aware of it or not, had been safeguarding content for an international drug trafficker.

As a matter of routine, the FBI got its subpoena for the emails and handed it over to Microsoft. But then something unusual happened: Microsoft stalled. Because while Microsoft now knew they harbored content belonging to a probable felon, technically they were allowed to keep it: ever since Congress passed the Communications Decency Act in 1996, tech companies couldn’t be held legally responsible for the content on their platforms.15

The issue for Microsoft wasn’t the particular user the FBI was after, or even the potentially incriminating content of his emails. The problem was that the emails weren’t stored in United States territory. They were on a server in Dublin, Ireland. And whether an American subpoena had jurisdiction over data physically housed on machines in another country simply wasn’t clear.

So while Microsoft handed over the Davis emails physically stored in the United States, they declined to turn over the ones housed in Ireland.

In sum, Microsoft, a US-based tech firm owned and operated by American citizens, refused to comply with the American government’s subpoena. And amazingly enough, they weren’t breaking any laws, because none existed at the time that made clear what the appropriate course of action should be.

In 2013, the US Department of Justice sued Microsoft to retrieve the Dublin-stashed emails.16 That case turned into a fiasco. What might have been a simple paper chase became a many-year legal crusade. Because for Microsoft, it was never about Davis, or even the content of his emails. This was the case that would set precedent for the US government’s jurisdiction over global digital communications for years to come. As of this writing in 2019, the case is still making its way through appeals courts.

On one level, the Silk Road story is about citizens’ rights online: who gets to decide what happens to digital information, the tech companies who manage user data or the countries in which the users reside. But on another, it’s about citizens’ rights in real life: which entity gets to decide the fate of that user, a fate that might come with stakes as high as physical imprisonment.

Surprisingly, of the three players involved in this fight—two countries (the United States and Ireland) and one tech company (Microsoft)—the tech company, not the countries, took the lead on safeguarding citizens’ rights.

To be fair, the American government is legally bound by the Constitution to “pursue justice” on behalf of its citizens. Which they did, attempting to convict perpetrators whose illegal marketplace harmed untold numbers of victims. But on the other hand, the American government is also constitutionally prevented from conducting “unreasonable searches and seizures.” Microsoft could conceivably argue that this is what the US government was doing, as the desired objects of the searches were physically outside US territory.

The elephant in the room is, of course, that while this may technically be the case, this is not in any way practically the case. It’s not like Microsoft would have had to send a team of experts across the Atlantic to excavate documents with a trowel. It could have conjured up the Dublin emails with the mere click of a button, never having to leave its headquarters in Redmond, Washington.

The question is, then, why Microsoft went through the bother. Legal cases are extremely expensive, even for tech empires. And they’re time-consuming; this case has dragged on for over six years already. But most important, unlike governments that are constitutionally bound to look after their people, Microsoft—or any other tech company, for that matter—has no obligation to put up any sort of fight for citizens’ rights.

Then a rationale begins to emerge: Microsoft’s not actually protecting citizens; they’re protecting users. They’re not securing citizens’ physical belongings from “unreasonable search and seizure.” They’re protecting their users’ data.

In this way, “citizen” and “user” merge in some information-age mashup, becoming something new: the “citizen-user.” Because whether Microsoft is really protecting “citizens” and their rights or protecting “users” and their data is almost irrelevant. The end result is the same: the tech company is standing up for the individual. And in this particular case, the United States, the most powerful country on Earth, can’t do a thing about it.

This shows how major tech companies can outmaneuver countries—how they operate above the laws of individual countries. This is how tech companies become net states.


OUTMANEUVERING COUNTRIES ISN’T ALWAYS FUELED BY LEGAL MURKINESS or precedent-setting. Sometimes, it’s a matter of plain old principle: net states refusing to work with governments because it goes against their beliefs, even when it means losing money.

“It’s so exciting that we’re close to getting MAVEN!” wrote Google Cloud’s chief scientist for artificial intelligence (AI), Fei-Fei Li, in an email obtained by the Gizmodo Media Group.17 The “MAVEN” Li referred to was “Project Maven,” the plain-English moniker for the Department of Defense’s (DOD) exploratory artificial intelligence program.18 Maven’s mandate was essentially to put AI capabilities on drones.

“I think we should do a good PR on the story of DoD collaborating with GCP from a vanilla cloud technology angle (storage, network, security, etc.), but avoid at ALL COSTS any mention or implication of AI,” Li’s email continued, urging her colleagues to steer clear of what could be a public relations nightmare. She instead pitched Project Maven as a “vanilla”—in other words, harmless—cloud storage partnership. This shows how, even in those early planning days, Li was conscious of, and nervous about, what would happen if the public thought Google was collaborating with the Department of Defense on anything to do with artificial intelligence. As her email suggested, it wouldn’t be difficult for this to quickly spiral into “killer robots” headlines splattered across the news.

Turns out the media wasn’t Google’s main problem. Rank-and-file Google employees would prove to be the project’s undoing.

Before getting to Google employee protests, it’s important to dig beyond the perception of Project Maven and look at what Project Maven actually aimed to do. Maven’s AI was supposed to help human operators at DOD sift through massive troves of information, “to help a workforce increasingly overwhelmed by incoming data, including millions of hours of video.” And no one within DOD had developed AI to the point where it could be deployed in this way. As Colonel Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team responsible for Project Maven, announced at the Defense One Tech Summit in 2017, “You don’t buy AI like you buy ammunition. This effort is an announcement . . . that we’re going to invest for real here. The only way to do that is with commercial partners alongside us.”19

It started as a tiny project, in Google terms. The $9 million contract, which launched in 2017, involved just 10 employees, a minuscule allocation of resources from Google’s 88,000-person workforce.20 But as word about Project Maven spread throughout the company over the next several months, outrage ensued. About a dozen AI researchers at Google resigned in protest, the first mass resignation over a matter of principle in Google’s history.21 This was shortly followed by a petition signed by 4,000 staffers demanding that Google cease its AI contract with the military immediately.

“We believe that Google should not be in the business of war,” began the one-page letter to Google’s CEO, Sundar Pichai.22 “Building this technology to assist the US Government in military surveillance—and potentially lethal outcomes—is not acceptable.”

Most notable, however, was not the fact of the petition or even that it demanded an end to Google’s partnership with the Pentagon; it was the workers’ rationale for the protest. “We cannot outsource the moral responsibility of our technologies to third parties,” the letter stated. “Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever.”

The resignations and petition worked. Despite the potential financial windfall future military contracts could bring the company, on June 1, 2018, Google announced that it would not be continuing the Project Maven contract once it expired at the end of the year.


WE’RE IN A WORLD STILL DOMINATED BY NATION-STATES, BUT INCREASINGLY influenced by the actions of net states. Nation-states continue to own the physical territories within their borders, but net states wield significant power both within and across country space, guiding events that affect us both on an individual and on a global level. Therefore, we need to get smart about what net state power really looks like, and quick.

One country that’s excelling in its efforts to do so is Denmark. In 2017, it opened a door that has the potential to radically alter our existing geopolitical order: it appointed a new ambassador to capital-T Tech itself. Ambassador Casper Klynge is the world’s first-ever tech ambassador. His mandate: to establish diplomatic relations between Copenhagen and Tech. And what exactly that looks like is all fresh territory, yet to be discovered. Fittingly, his office operates as a virtual embassy, with three physical manifestations: one in his home base of Copenhagen and two in the most powerful tech hubs on Earth—Silicon Valley, California, and Beijing, China.

I arranged to interview Ambassador Klynge from his Silicon Valley office in late April 2019. I’d given his communications director my cell number and sat on the sofa in my basement home office in Brooklyn, waiting for the call. The lights were off, the only source of illumination being the gray-white glow from my computer screen. (This was deliberate; frankly, I didn’t want to be distracted looking at laundry piles in the corner during the interview.)

Suddenly, my phone rang. But in addition to ringing, the screen lit up, serving up an image of my own face as well.

Oh. We’re having a FaceTime interview, I realized. I should have expected it—tech ambassador, after all. Because I didn’t want to miss the call, I answered before I had time to turn on the light. Two tanned, cheerful people greeted me on what appeared to be a sunny day in a naturally lit office somewhere in Silicon Valley, their friendly faces framed by a whiteboard with vague scrawl in the background.

“Are you . . . um, is this still a good time?” Ambassador Klynge asked encouragingly. I could see in the tiny window FaceTime provides of one’s own reflection the eerie computer light cast on my face, giving me a ghostlike appearance. Despite this less than ideal setup, it had taken quite a bit of work to get on the ambassador’s calendar, and I didn’t want to miss my chance. So, there we were—I in my dark Brooklyn basement and he in his sunny California diplomatic outpost—talking, smiling through our cell phones.

I went straight for my most pressing question first: He’s the world’s first ambassador to the tech sector—how had that sector received him?

Ambassador Klynge’s expression made it seem as if this question brought a story to mind. Having worked with diplomats in my past, I doubted I would hear it in an on-the-record conversation like this, though, and didn’t press. After a moment, he said that “some companies” had been “very forward-leaning.” He paused. “Then you have the other side of the spectrum,” he said, “where some companies have been enormously difficult to deal with. . . . We deliberately say we want more or less to come in at the top level. That means C-suite level, and they sort of offer the oldest intern. . . .”

The ambassador trailed off for a second. On the screen, I could see Klynge lean slightly forward at his conference room table, pinning his index finger to some invisible spot on it. “I think reluctant is a very diplomatic term.”

I wanted to ask which companies sent an intern to greet him—an ambassador!—but didn’t think he would be at liberty to disclose. I figured I could probably guess for myself anyway: for this book, Microsoft and Google happily accepted my interview requests; Facebook, on the other hand, was a vault.

What about governments? I asked. Were they reluctant as well?

Total opposite, he said without hesitation.

“There has been enormous—I would almost say unprecedented—interest from other capitals in basically learning from us, getting our experiences from dealing with the tech industry.” He said, “They tell us, ‘We would love to do something similar [to your tech embassy], but our bureaucracies are so large and so difficult that we would never be able to do it; . . . the distance from flash to bang is simply too big.’”

Then Klynge’s whole face lit up; he was clearly pleased to reflect on what his country had been able to accomplish. “That’s one of the areas where being small is a little bit of an advantage.”

This comment revealed a distinct advantage that Denmark and the two other countries that have since appointed their own tech ambassadors—Estonia and Australia—have over their larger nation-state counterparts. When it comes to tech, the smaller nations have proven to be like speedboats amid a sea of ocean liners. Nation-state behemoths may still have more firepower and financial might, but, essentially moored in place by their unwieldy mass and unforgiving bureaucracies, they can’t seem to keep up with net states nearly as well as countries like Denmark and Estonia.

I had time for one last question. How should we—governments, societies, people like you and me—be thinking about technology? How do you, Mr. Tech Ambassador, see it?

He took just a moment, barely a beat—it could have been a hiccup in our connection, really. But then he said, “The freight train is coming.”

Klynge continued, “It might not be everybody who’s seeing the massive impact of technology also on international relations, but one of the reasons why we gather . . . countries [is] to try and help shape thinking in capitals all over the world.”

Governments need to understand, Klynge explained, that technology is much more than “an add-on.” “It’s not the IT office that needs to deal with technology; it’s mainstream foreign and security policy.”

I invited him to explain why he thought so, and this time he responded immediately. “Technology will have a massive impact on international relations. It will have a massive impact on the convening power of the West. It will have a massive impact on the balance of power in the future,” he said.

Then he added one last thought. “For that lesson, it’s high noon for many, many countries all over the world.”

Casper Klynge and his fellow tech ambassadors exist because at least three of the world’s most forward-leaning countries have come to recognize, in the most formal and official way a country can, that net states occupy a substantial role in our geopolitical, social, and personal worlds. This book describes the various ways in which net states exert influence on those worlds and each of us in them, as well as what we can—and must—do to ensure that they do so responsibly.

This book shows us our tech in a new light: not just as services we access or devices we use but as forces of personal, social, and geopolitical power. From this new vantage point, we gain additional ground for exploration: the capacity to ask questions about tech’s impact that we’ve yet to even consider.

This book is not an exposé of any individual net state, though the major tech companies serve as our main characters. It’s the story of what net states are up to, both as they engage with us—their citizen-users—and as they expand out of the digital and into the physical world.


THIS BOOK STARTED WITH AN ARTICLE I WROTE IN 2015 AND LEFT IN A file for two years. I wrote it to try to make sense of what had happened following the November 2015 terrorist attacks in Paris. The day after those attacks, the hacker collective Anonymous launched a campaign, Operation ISIS, in which they claimed to have taken down upwards of 20,000 ISIS-related social media accounts in a single day.23 By comparison, the social media companies themselves had taken down only around 800 ISIS accounts over the prior 18 months.24

It occurred to me that the social media giants and Anonymous both had a bigger role to play in fighting terrorism than I’d seen discussed. But I ran into a problem: how to discuss it. What was Facebook? And Google? And Anonymous? And the other major tech companies and movements? They clearly weren’t nation-states, like the US and France. But they weren’t nonstate actors, like ISIS or al-Qaeda, either. We simply didn’t have the language at the time to categorize them. Despite that, it was becoming increasingly evident that these . . . somethings . . . were forces to be reckoned with—not just as commercial entities but as significant players in defense, diplomacy, and other geopolitical arenas.

When I shared the draft article with my most trusted readers, it elicited raised eyebrows—one Anonymous campaign did not seem quite sufficient to support a new theory. So I shelved the piece but kept collecting evidence: examples of incidents in which tech companies had reached beyond their core services and into governmental areas. By 2017, I felt I had gathered enough data to warrant dusting off the article and making the public case for net states. WIRED magazine agreed: in November 2017, it published “Net States Rule the World: Ignore Them at Your Peril,” introducing the term “net states” to the lexicon.

Since 2017, evidence that tech companies are acting like countries has only continued to amass. In June 2019, Facebook announced the launch of its own monetary project: a cryptocurrency, Libra. As technologist Micah Sifry observed in his newsletter, Civicist, “If you’re going to be a country, you might as well have a currency, right?”25

The Information Trade is both a near history and a profile of what it means to live a tech-enabled life. It celebrates how technology enables us to share the stuff of life—information, data, stories, knowledge, sorrows, silliness, and ephemera. It informs us about what happens to that data when we do share, both with and without our knowledge. And it cautions us against giving up our ability to influence the balance of power between ourselves, our governments, and our net states.


“NET STATES” IS KIND OF LIKE “SEA CREATURES” OR “THE EUROPEAN Union”: it’s a label that represents a larger group. Its members share features, but there’s a lot of variation among them. In the same way you wouldn’t want to read a book about sea creatures without learning about sharks and whales, or a book on the EU without touching on Germany or France, this book is structured around five major net states—Amazon, Apple, Facebook, Google, and Microsoft—as well as the net state activity exhibited by Elon Musk’s Tesla and its sister projects, and the political movement represented by independent Pirate Parties in various countries.

Chapter 1 looks at how we transformed from audience members to computer users to citizen-users, starting with the launch of Microsoft’s Windows 95. While widely associated with office drudgery now, Microsoft broke onto the scene more than twenty years ago with a revolutionary suite of tools that radically transformed what was possible for the average computer user with no technical knowledge: the ability to navigate a computer (via Windows) and get work done (via Office), thus contributing to the information-sharing norms in which we operate today. But information-sharing, for some people, is not simply a feature; it’s a right, something to believe in—and a cause to fight for. And, over the past twenty years, we have become more than simple recipients of content. We’ve morphed into something altogether new: citizen-users. Chapter 1 shows how this came to be, how we became citizen-users for whom technology is not just a tool we use but an ideology that dictates how we engage with—and take part in—our governments.

While citizen-users engage with digital content, they’re still grounded in a physical landscape. Chapter 2 situates citizen-users in their physical landscape, examining what net states are doing “IRL”—in real life—starting with Tesla’s and Google’s interventions in Puerto Rico after Hurricane Maria in 2017. It grounds the ethereal internet, “the cloud,” in the physical world, tracing how our data is bound to Earth through undersea cables and data centers. The chapter then moves to net state activity in key areas of the physical world. By tracking net state activity IRL, this chapter lays the foundation for a new way of looking at power: distributed not according to borders on a map, but through information flows, investments, and physical assets.

Chapter 3 looks at the battle over our privacy. Privacy is no longer a given. We’re engaged in a global battle over who gets to determine the degree of privacy we retain over our content and activities via tech. This chapter explores how our understanding of privacy has evolved and the possibility that its current iteration may be an “anomaly.” It also considers how net state partnerships with data brokers create profiles that “know” us, and how some countries are fighting back against these practices. It traces how Europe is leading the way for wielding government regulations over tech companies in defense of citizen-user rights and considers what options Americans have in our currently unregulated landscape.

Chapter 4 considers our physical security, showing how net states like Google became integral to the fight against modern-day enemies. Exploring the differences between the tech ethos and the military ethos, this chapter examines how the expertise so prized by the security agencies has become a kind of disadvantage in the fight against terrorism. It shows how net states are uniquely capable of engaging in security issues, through counterterrorism activities via Google’s think tank, Jigsaw. The chapter then turns to the possibilities for net state/nation-state cooperation through acts of diplomacy and looks at how net states, led by Microsoft, have begun to forge ahead in this domain.

Chapter 5 examines how we use net state tools to curate idealized versions of ourselves—our profiles and activities online—and how nation-states may enact real-world consequences on what we are or are not permitted to access based on them. Using China’s Social Credit Score system as an example, the chapter considers what the networks of connections we create, as well as the carefully managed personas that we upload, say about our needs as individuals, citizens, and citizen-users.

Chapter 6 delves further into our daily life by examining the tech we use in our homes and public spaces: the Internet of Things (IOT). This chapter explores the ways that the IOT currently influences, and is likely to affect, our daily existence. It then examines developments in user profiling, starting with Amazon’s recommendation systems and “smart” technologies that gather data on our health, our environment, the information we seek, the music we play, and even our sleep habits.

Chapter 7 moves the focus from our actions to our cognition, examining what happens to our minds as we interact with net state technologies. It explores the uniquely immersive qualities of this tech and how they shape our thinking, our behaviors, and, ultimately, our awareness of ourselves and the world around us. Starting with the most ubiquitous net state tech—the smartphone—this chapter looks at how the increasingly immersive properties of our technology affect how we learn, what we remember, and how we perceive the world. From the current tools at our disposal to emerging tech like augmented reality, the chapter considers what an increasingly personalized view of the world might mean for people and societies.

Finally, chapter 8 takes a hard look at where we’ve been in recent history and where we are now, with staggering rates of depression, addiction, and—unique to America—acts of gun violence. It gives us options for how to reconcile our feelings of empowerment via tech with our sense of powerlessness in the face of life-altering challenges. This chapter explores what we as citizen-users must do to ensure that we remain actively engaged in the development of net states, their relationship with our nation-states, and how they relate to our own lives, offering a citizen-user pact with net states.

The book concludes with an assessment of how net states have begun to engage with one another and recommendations for governments to join with them or face irrelevance. It argues that citizenship, whether in a nation-state or a net state, requires engagement, and the consequences for failing to engage are dire. In a democracy, failing to vote means losing out on the chance to be represented by those who protect our interests. In net states, failing to engage means losing out on personal privacy, the implications of which are only starting to be understood.


PUBLIC OPINION ABOUT THE ROLE OF TECHNOLOGY IN OUR LIVES SWAYS. In the first 15 years of the new millennium, tech was going to save us all, make the world more democratic, level the playing field, and provide a platform for the disenfranchised to make their voices heard. Then the elections of 2016 hit; Americans were manipulated en masse by Russian misinformation campaigns. Facebook gave 87 million users’ data over to the political consulting firm Cambridge Analytica. “Technology addiction” and “Facebook depression” became well-known conditions. Tech suddenly seemed dangerous.

But public opinion on tech will likely swing again. Tech just does too much good for people not to notice eventually. For example, in 2018, in New Delhi, India, a police department instituted a pilot project using experimental facial recognition software. Within four days, more than 3,000 missing children were located.26 That same year in Australia, two teenage boys caught in rough waters 2,500 feet offshore were rescued when a remote-controlled drone delivered an inflatable rescue pod to the swimmers within 70 seconds of launch. Traversing the same distance would have taken a lifeguard up to six minutes, during which time the boys could have drowned.27 And back in the United States, a young woman was warned by her Apple Watch that her heart rate had skyrocketed to 190 beats per minute. This prompted her to get to an emergency room, where she was immediately diagnosed with what would otherwise have been fatal kidney failure.28

And so on. Tech can literally save lives. Even beyond its lifesaving capacity, its presence in our daily realm can facilitate better living, with faster answers to our queries, better suggestions for what will be our most beloved book or film, and easier access to aids of all kinds, from maps to encyclopedias, from cookbooks to cameras.

We are not victims here. We’ve invited tech into our lives for a reason. It makes life easier. It makes life more convenient. Sometimes, it makes life safer. Sometimes, it makes life better. And since tech isn’t going anywhere, we owe it to ourselves to know what it means that it’s here: what our data is worth and how it’s used. Just as being a responsible citizen of a nation-state requires paying attention to who’s in charge and what they’re up to, we need to become responsible citizen-users of our net states, paying attention to who’s in charge and what they’re up to.

Eventually, public opinion will settle somewhere in the middle regarding tech. We will no longer be starry-eyed about its promises or frightened by its possibilities. But until then, we should harness our outrage and our passions to demand that net states take great care with us, their citizen-users. Because while there may be only one Facebook and one Google and one Apple now, those will not always be our only options. Remember, not long ago, there was only one Myspace and one Napster.

Henry David Thoreau once noted, “Is a democracy, such as we know it, the last improvement possible in government? There will never be a really free and enlightened State until the State comes to recognize the individual as a higher and independent power.”29 Net states create tools that elevate the individual, and—just as in our political system—it’s up to the individual then to leverage or leave idle that power.

We users have more power over net states than we’ve yet claimed. The Information Trade shows how to be present in the midst of technology, aware of the new ways it controls our world, and able to manage its impact on our lives. We do not need to stop technology from evolving to ensure that it does so responsibly. The Information Trade explores what it means to be a responsible citizen-user, engaged with and unafraid of the world that we’re building with our tech—and that tech is building for us.





One


Rise of the Citizen-User


In the 1983 film WarGames, a baby-faced Matthew Broderick plays an underachieving teen who develops his hacker chops by altering grades in his high school’s mainframe. Trying to impress the doe-eyed Ally Sheedy, he accidentally hacks into a live military operation at NORAD, suddenly finding himself engaged in a computer-simulated war exercise to prevent World War III.

The movie was a huge success. It was the fifth-highest-grossing film of the year and garnered three Academy Award nominations. But its biggest impact was felt by the computer industry, which desperately needed the boost. In the early 1980s, tech still seemed mystifying and cultish to mainstream America; people didn’t really know what to make of computers or the rare few who tinkered with them. In 1983, only 8 percent of Americans owned a computer. Apple’s first Macintosh, which didn’t go on sale until 1984, cost an eye-popping $2,500—a third of the price of a brand-new car at the time.1 Cell phones were clunky, ugly affairs, also prohibitively expensive at about $4,000 a pop, or about $9,520 in 2019 terms.2 For the average American in the 1980s, “technology” consisted of TVs, cassette tapes, and Ataris. Until WarGames came along, personal computers were, by and large, a curious luxury.

But the American imagination had now gotten a taste of computers as tools worth their attention, and pop culture responded accordingly. The ultimate manifestation of this moment of tech awakening was when Apple barreled into mainstream American consciousness with their now famous 1984 Super Bowl commercial, directed by Blade Runner’s Ridley Scott. In the commercial, an athletic heroine races past the dull-eyed masses as she wields a sledgehammer. She launches the hammer at a massive television screen that had enraptured its audience, symbolically destroying the means of control over passive television consumers and introducing them to a tool designed to reinvigorate and empower the individual: the Apple home computer.

In addition to boosting sales for home computers, another gift WarGames gave the ’80s was the stereotype of the hacker: the image of the obsessive, scrawny teen squirreled away in his parents’ basement conducting virtual break-ins for personal gain or juvenile kicks.3

This depiction was almost an affront to actual hackers—originally a term reserved for self-motivated technology tinkerers. In reality, most hackers were serious computer scientists, gainfully employed by prestigious research universities like MIT and Stanford. Hackers had been around since the 1950s in a loosely connected community of like-minded programmers. And they changed history: hackers built the US Department of Defense’s ARPANET, the predecessor of the internet, and mainframes at IBM. Hardly goofballs digitally breaking and entering classified data warehouses, hackers were among the early architects of today’s technological infrastructure.

The group of original hackers included Sir Tim Berners-Lee, aka TimBL, the British computer scientist who invented the World Wide Web, its first web browser, and HTML, the dominant markup language for websites. While studying physics at Oxford in 1976, Berners-Lee cobbled together a computer using an old TV and a soldering iron—the very portrait of a hacker in action.4 His contemporary Richard Stallman (aka rms), a software engineer and digital activist, launched the Free Software Movement and the GNU operating system, which would later become a part of Linux, the most widely used operating system on the planet. Android phones all run on a version of Linux: that’s 88 percent of internet-enabled mobile devices—approximately 4.4 billion worldwide.5 Stallman, who on his personal website lists among his hobbies “affection,” “international folk dance,” and “puns,” is known as much for his philosophical intensity as his programming chops. His Free Software Movement gave rise to open source software—that is, software whose inner workings aren’t proprietarily protected, like the web browser Firefox and the website builder WordPress—and he’s arguably one of the forefathers of the very concept of information-sharing as an ideal, not just a practice.

Over the years, dozens of hackers—almost all of them university professors or professionally employed engineers, with the exception of Bill Gates in the early years of Microsoft—contributed to the rise of the web as we know it today. It wasn’t until the 2000s that the teen hacker college-dropout trope would become a reality, with Facebook founder Mark Zuckerberg as poster child. And these programmers were serious about their work and serious about their culture: the “hacker ethos” was a code to live by, a topic of debate and deliberation, and, most of all, a point of pride. To be a hacker was to uphold a set of values and a way of life.

The hacker ethos, which was both pragmatic and idealistic, consisted of six basic tenets.6 First, hackers believed that access to computers should be universal, regardless of skill level or intent for use. Because they viewed computers as tools for empowering the individual, they believed that every individual should have access to one. Second, they fervently adhered to the notion that information should be free. This is reflected in the early days of the internet, when, indeed, all information online was free. It was only after the World Wide Web was commercialized in the mid-1990s that websites began charging for content—a move that was anathema to hacker ideals.

Third, hackers held a deep mistrust of centralized authority of any kind. This is also reflected in the way the internet works: it’s a decentralized system, running on millions of computers across the globe. There’s no one person or organization who can “turn off” the internet—redundancy is a safeguard built into the very foundation of the web. Fourth, hackers believed that they should be judged by skills and abilities, not official credentials. Being a college dropout is worn almost as a badge of pride in the software industry, and over 50 percent of employed programmers in 2015 didn’t have a computer science degree.7

The final two tenets are the most idealistic: that one can create art and beauty with code, and that computers should be used to change life for the better. While there is no shortage of malevolent hackers now, nor was there in the early days of the web, the hacker ethos took seriously the idea that tech is a tool that can be used for good or ill, and it is up to the coder him- or herself to make the moral choice to apply programming skills for creative, artful, and positive ends.

As universities across the country began to gain access to ARPANET, hackers started collaborating with one another virtually, sharing code and problem-solving tactics. And so by the ’80s, serious hackers had started to band together, resulting in a frenzy of invention and innovation. This energy and the hacker ethic were captured by journalist Steven Levy when he published his 1984 book, Hackers: Heroes of the Computer Revolution. Thirty-five years later, this book is still lauded as the manifesto of its era. But when it was published, critics viewed the “hacker ethic” as a historical anomaly, an oddball set of ideals that died before they even got a chance to get going. The New York Times review recoiled at the book’s account of programmers plying their skills on games like Frogger. Christopher Lehmann-Haupt concluded that “if the point of the entire computer revolution was to try to get a frog across a road . . . then it’s not only unsurprising that the hacker ethic died; it isn’t even sad.”8

Hackers themselves disagreed. Far from seeing the hacker ethic as dead, they took the book Hackers as a catalyst that inspired them to, for the first time ever, physically come together, bringing the hacker ethic to the table for discussion and celebration. This took the form of the first-ever “Hackers Conference,” organized by publisher-activist Stewart Brand, Apple cofounder Steve Wozniak, and others. On November 1, 1984, 150 of the most talented programmers, engineers, and designers gathered at the Headlands campus of the Yosemite National Institutes in Sausalito, California, to meet face-to-face and discuss their craft.9

Most people have at least heard of Apple and its cofounder Steve Wozniak; Stewart Brand is less well known, but worth knowing about. Brand wasn’t a hacker. He didn’t even know how to code back then. But he’d launched something called the Whole Earth Catalog in 1968, and in its way it epitomized the hacker ethic.

It was, on one level, a traditional catalog; you could mail-order things from it just as you could from the Sears or JC Penney (or any other) catalog. But it stood out from others of its kind in key ways—first, for what it sold. For a world Brand described as needing to go “back to basics,” his catalog offered, fittingly, a range of back-to-the-land type stuff. Wares had to meet four criteria: they had to be useful as tools, relevant to independent education, high quality or low cost, and easily available by mail. Under this umbrella, Whole Earth sold materials for and published articles on everything from “earthworm technology” (for aerating farm soil) to “cooking with fire” (for outdoor and off-grid living); it also offered—under the “useful as a tool” and “relevant to independent education” categories—ads for the first Apple computer.

It was really the articles accompanying the goods it sold that defined Whole Earth as the start of a movement that empowered individuals—not just as part of collectives or communes, but as people capable of existing wholly and fully on their own two feet. Whole Earth promoted the individual on every level: logistically, with teachings on how to build fires and yurts; physically, with articles on DIY agriculture and hydration devices; and intellectually, with essays from the most forward-leaning and controversial thinkers of the day, from Buckminster Fuller and Carl Sagan to the Dalai Lama and members of the Black Panther Party. Whole Earth was recognized as revolutionary in its time: for example, it’s the only catalog to ever win the National Book Award.10 Brand’s publication elevated the citizen not as a consumer, but as a vessel of power, a being capable of shedding the trappings of “modern” (1960s) life and finding fulfillment by going back to basics. A reflection on our place in the universe, Whole Earth acknowledged how even 1960s technology was emerging as the next force of nature we would be forced to reckon with.

So it’s not surprising that Brand, of all people, came up with the mantra for a generation of citizen-users. At that 1984 Hackers Conference, Brand, clad in a tan leather vest over a black-and-white gingham button-down shirt, made an offhand comment in a panel discussion with Wozniak that perfectly put into words an idea whose time had come.

“On the one hand,” Brand said, “information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life.” And then he added, “On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time.”11

The comment was casual, a nod to an audience member who had just voiced frustration over the rise of proprietary software shutting down collaboration opportunities. But the words themselves—“information wants to be free”—struck a collective nerve. That statement would go on to become the rallying cry for a generation.12

At the time, Brand was talking about how it would be increasingly difficult to charge money for information once it was digitized and thus easily copied. But as global networked computing became a reality, tech activists adopted the idea as a literal one.

The reasoning behind it goes something like this: Information, once digitized, is easy to share. And digitized information is also easy to manipulate and search, from basic everyday Google queries to sophisticated data mining. This digital information searching reveals all kinds of valuable things, and shockingly fast—from patterns and research material to regular old know-how on how to do things. Since digitized information can be shared with many people simultaneously, and since it can reveal useful and beneficial things, many people should be able to benefit from it as a kind of public good. As such, information should be free, and freely shared. Thus, information wants to be free.13

Not everyone agrees with this, least of all net states whose business models today rely on monetizing user content. Ironically, though, it was the forefathers of those same net states who first promoted the “information wants to be free” ethos and hacker code. You can see that ethos in Steve Jobs’s 1980 Apple mission statement: “To make a contribution to the world by making tools for the mind that advance humankind.”14 It’s reflected in Google founders Sergey Brin and Larry Page’s early motto: “Don’t be evil.”15 It’s in Mark Zuckerberg’s “Move fast and break things” motto, which he adopted for Facebook in 2004.16 Notably, all three companies have since moved on to more conservative versions of their mission: Google’s is now “Do the right thing”; Facebook’s is, only partly jokingly, “Move fast with stable infrastructure”; and Apple’s has plummeted from inspiring to anodyne, now reading, “Apple designs Macs, the best personal computers in the world, along with OS X, iLife, iWork and professional software.”

Even Stewart Brand himself has tempered his early antiestablishmentarianism. Reflecting on the Whole Earth Catalog and its associated movement from a 2018 vantage, he was quick to qualify that it was very much a reflection of its time. “‘Whole Earth Catalog’ was very libertarian, but that’s because it was about people in their twenties,” he said in a New Yorker interview.17 “Everybody then was reading Robert Heinlein and asserting themselves and all that stuff. We didn’t know what government did. The whole government apparatus is quite wonderful, and quite crucial. [It] makes me frantic, that it’s being taken away.”

The hacker ethos matters because it inspired the generation of computer programmers and technologists who would go on to found the net states we interact with today. It is the reason that their companies—Google, Facebook, Amazon, Apple, Microsoft, and Tesla, to name the biggest among them—are driven not solely by their bottom lines but also by the belief that their products and services create some form of good in the world. While these firms may not adhere to the hacker ethos in all of their business decisions, it remains an influential force for many of those who work at net states, and one that perhaps, at times, still occupies the minds of their founders.


BEFORE INFORMATION COULD BE FREE, HOWEVER, PEOPLE NEEDED DEVICES to process it.

The problem was, even by the late 1980s, computers had yet to become commonplace home products. Gradually, they were becoming more affordable and more interesting. But still, only a small percentage of the population picked them up. By 1989, 15 percent of American households owned a computer.18

Perhaps that’s because, without the internet, computers didn’t do all that much. You could type, edit, and store documents—a huge improvement over the typewriter—but that was only marginally exciting. You could play games, of course, but those weren’t terribly sophisticated just yet. Computer use at home didn’t really take off until the World Wide Web landed in the early 1990s. But even then, uptake started slowly.

“If people are to be expected to put up with turning on a computer to read a screen,” mused Microsoft founder Bill Gates in a 1996 essay, “they must be rewarded with deep and extremely up-to-date information that they can explore at will.”19 But even that wouldn’t be enough to keep users happy, he wrote. Imagining what a future internet might look like, Gates went on to suggest, “They need to have audio, and possibly video.”

Keep in mind that in 1996, “turning on a computer to read a screen” was pretty much the most you could look forward to. Even then, with the World Wide Web just a few years old, it wasn’t something everyone was eager to experience.

Even for those who had web access then—about 18 percent of US households—going online was a huge pain. Dial-up internet connections seemed to take forever: a single web page took roughly 30 seconds to load, even with a 56K modem, then the state of the art.20

However, the biggest problem wasn’t getting online. The problem was that there wasn’t much to do once you got there. In 1996, there were only about 100,000 websites, most of which featured text and text only. Some offered a few low-resolution graphics, but not too many, as that would have caused the pages to take even longer to load. Without much of interest to keep people online, it’s not surprising that the average American in 1996 didn’t bother with the internet much, spending about 30 minutes online a month—an average of a minute a day.

Given this state of affairs, it makes sense that tech pioneers like Gates spent a lot of time worrying about how to get more people to “put up with” turning on their computers. That phrase summed up most people’s relationship with technology back then. It wasn’t yet the touch-of-a-button pocket device we enjoy today. With the exception of enthusiasts—20 years ago, the most likely demographic online was white men over the age of 50; only 14 percent of women under 30 used the internet on a regular basis—tech simply wasn’t a big part of people’s lives. Circa 1996, tech was the Motorola StarTAC flip phone, with its green pixelated text and black screen. Tech was Tamagotchis: virtual “pets” attached to keychains that activated themselves to demand “feeding”—which involved pushing one of three identically mundane-looking buttons—so that they wouldn’t die on you. Tech was Hollywood fiction and teenage toys. In sum, tech didn’t matter yet; it hadn’t yet graduated from mild distraction to grown-up necessity.

Before the turn of the century, what self-respecting grown-ups really focused on was TV. This was the height of the Friends era, the years of The X-Files and ER and Law & Order. TV was king, and audiences ate it up: the average household tuned in for more than seven hours a day.21 Oprah reigned supreme, launching her now-legendary book club in 1996. And people still read actual books. TIME magazine praised Amazon, which was founded in 1994 and at the time sold only books, as one of the top websites of 1996, primarily because you could search by nifty features such as “author” or even “subject or title”—or, best yet, you could “read reviews written by other Amazon readers and even write your own” (italics added).22

And that ability—to “write your own” review—is why this is where the web starts getting interesting: interactivity arrives. As noted, most 1996-era websites were little more than digital brochures. Interactive features that allowed users to shape their experience didn’t become common until “Web 2.0” emerged almost a decade later.23 So the option of submitting your own review—putting your own voice in front of anyone who happened to be on the World Wide Web—was something totally novel. All of a sudden, a person didn’t have to be famous or a news producer to get their opinions in front of the masses; they just had to go online.

While several variables influenced how tech changed for the average user, one of the biggest contributing factors came down to a single product: Microsoft’s Windows 95 operating system, released in 1995. Pre–Windows 95, your computer probably had a black or dark-green background with yellow or bright-green text or, worse, an oversaturation of hyper-rich colors (“pretty” not being the forte of ’90s-era computer engineers—see figure 1.1 below). Windows 95 radically transformed this by bringing a sane-looking design to computing (figure 1.2). It also introduced key features that made the user feel in control, like the task bar along the bottom of the screen and the now-well-known “Start” menu button on the lower left-hand side.

FIGURE 1.1. Microsoft MS-DOS Interface, 1985



FIGURE 1.2. Microsoft Windows Interface, 1995



Windows 95 and its accompanying internet browser, Internet Explorer, catapulted technology to the next level. It made computing much easier for the average user. With the launch of Windows 95, the cultural attitude toward technology in the United States transformed. All of a sudden, instead of only weird or nerdy types using computers, everyone could be a computer user. Not only was it no big deal; it started to become the norm.24

By simplifying the browsing experience on your computer and the World Wide Web for the masses, Windows 95 democratized computing. As one reporter reflected, even the introduction of what seems like a simple feature, the “Start” button, brought about transformative change: “In 1995, computers were still mostly for the office and productivity. Windows 95 brought with it a word that consumers understood: ‘Start.’ Start what? Start anything.”25

One of the things this new feature started was the idea of the computer user as a person with power. Compare computer use to television-watching, for instance. In contrast to how solitary TV-viewing may be today, it used to be a communal experience, a reason for families to gather together. Broadcast networks scheduled set times for shows, and families sat around the TV, together, at exactly the same time, watching the nightly news or prime-time programs. And with just three major channels to choose from, it was likely that your neighbors were watching the same shows you were—extending the community experience of television-watching from your own home to your broader network. More important, television-watching in the 1990s required consensus: you and your brother and sister and parents all had to agree on what you’d watch. TVs had audiences, groups—we as individuals were just some subset of a larger body.

Computers, on the other hand, had users. The internet offered us all the gift of personalized choice. We didn’t have to confer with our siblings over what website to go to; we just went, by ourselves. It was like hogging the remote control, every time we logged on. In the early days, with one computer in the house, people still had to take turns going online, which necessitated some level of interpersonal interaction. But once online, our experiences were our own; we were the masters of our browsing, the drivers of our own curiosity.

If after Windows 95 people became computer users, then with the explosion of websites people became computer citizens, empowered entities interacting with other empowered entities. We weren’t just website audience members; we were “visitors,” each one singular and unique. It’s in the language itself: on websites, each set of watching eyeballs is measured as a “unique visitor.” We may not have known it yet, but we began to matter to content producers not just as part of a larger audience, but as independent units whose actions could be tracked and monitored and learned from. In less than a decade, we went from television audiences to computer users to website visitor, singular.

Through our interactions with early operating systems like Windows 95, during the AOL/GeoCities/Myspace days of the web, we were not only deepening our identities as computer users. We were testing the waters of being citizen-users.

And so Microsoft itself, for a moment, was king, not only in how widely its products were used, but in popular culture. People, briefly, loved Microsoft.

Things went south fast.

By any measure, Windows 95 was a smashing success;26 it sold 7 million copies in its first five weeks.27 By 1998, industry experts estimated that 90 percent of computers ran on some version of Windows; by 2018, Windows ran on over 1 billion devices.28 And Microsoft’s internet browser, Internet Explorer (IE), which users installed, perhaps unwittingly, along with Windows, was so successful that it essentially killed the other browsers. Mosaic, the first full-color web browser, was created by the team that went on to build the other dominant browser of the era, Netscape Navigator.29 Netscape enjoyed market dominance for a hot minute, but once the masses got Windows 95 with its bundled IE, Netscape started to tank. That browser’s user base declined almost in lockstep with IE’s rise.30 AOL, which had acquired Netscape for a massive $4.2 billion in 1998, was forced to shut it down just five years later.31

The meteoric rise of Microsoft and its college-dropout boy-genius founder Bill Gates—he was crowned by Forbes magazine the richest man in the world by 1995, a title he would hold for 13 of the next 17 years32—got the attention of just about everyone, especially government regulators. On March 3, 1998, the Senate Judiciary Committee called several tech industry leaders to a hearing, including Gates.

Senator Orrin Hatch asked Gates the question at the heart of the hearing—mirroring questions still being asked 20 years later to newer quasi-monopolies like Facebook and Google. Hatch asked, “Is there a danger that monopoly power is or could be used to stifle innovation in the software industry today or, perhaps more importantly, looking forward?”33

Turns out the hearing just laid the groundwork for what was to come. Two months later, Microsoft got slapped with an antitrust lawsuit by the Department of Justice and 20 state attorneys general for, in effect, holding a monopoly and engaging in anticompetitive practices.

While companies as large as Microsoft get sued with some regularity—by conservative estimates, Microsoft has been sued over 50 times (for patent infringement, by its competitors, by the US government alone at least five times, and even by companies it’s invested money in)34—an antitrust suit is a big one. Antitrust cases have a history of taking down giants: they’re what forced the breakup of the telecommunications behemoth AT&T (“Ma Bell”) in 198235 and oil industry titan John D. Rockefeller’s Standard Oil in 1911.36 In short, while some lawsuits are regarded by massive corporations as flies to be swatted, or, more to the point, settlements to be paid out (Microsoft has paid out an estimated $9 billion in settlements over the years), an antitrust case is rightly regarded as a potential bear on your doorstep, with the power to take down even a colossus like Microsoft.

Microsoft fought this case for 19 years and, if you ignore the hundreds of millions of dollars in lawyers’ fees, actually ended up getting little more than a legal slap on the wrist.37 But in the eyes of the public, the Gates/Microsoft trial was damning. While their products were as popular as ever, Microsoft and Gates went from being held aloft as America’s ideal for innovation to just another big bad business out to bilk the American consumer.38 Microsoft simply wasn’t cool anymore.

But that didn’t actually matter. America was already hooked.

By the early 2000s, the internet, though still accessible to only 43 percent of Americans and just 5.8 percent of the global population, was well established as the place to be. Web 2.0—websites that permitted interactive engagement versus just being digital brochures—had finally arrived. But within this same decade, a new trend emerged that threatened to upend the order we’d just begun to get used to: a collection of content producers such as newspapers, magazines, and music services started experimenting with charging money for their content.

This did not sit well with “information wants to be free” believers. So in June 2003, a group of friends in Sweden became internet activists in earnest when they launched Piratbyrån—“the Bureau of Piracy.” Originally, Piratbyrån was little more than a protest response to the establishment of Antipiratbyrån (“the Anti-Piracy Bureau”), the copyright industry’s enforcement group in Sweden.39 The protest organization’s mission was not to organize internet piracy per se, but rather to encourage the spread of information regardless of intellectual property rights.

Its mantra was strikingly similar to Brand’s “information wants to be free” movement: “Use what you know for good. Spread it further. Sow what you want. Add, delete, change and improve.”40 Self-described as a “loosely organized think-tank, a website, a philosophical greenhouse or FAQ guide to digitization,” Piratbyrån pioneered what would become one of the most popular activities online in the early 2000s: free file-sharing.41

Twenty-five years into the internet era, it might be difficult to fathom how much work went into sharing intellectual property such as music, movies, TV shows, and games before the web. You had to physically go to a store, buy the original whatever, then take however many hours were needed to copy it onto a videotape or audio cassette or, eventually, a CD or DVD, all in order to share a single copy with one friend. Most Americans who were online at the time will likely remember the 1999 rollout of Napster as world-changing: here was a music-sharing service that allowed, for the first time ever, massive and free file-sharing online with strangers from all over the world, eliminating the need to physically store copies of your favorite albums and playlists on anything but your computer.42 Shortly thereafter, The Pirate Bay, or TPB, was born—the largest BitTorrent site in history (BitTorrent being the name of the protocol that permits the transfer of massive files such as movies and albums). With 300 million users and counting, and despite having been taken down multiple times over the years for copyright infringement, TPB is still operational to this day.43

As might be expected for an organization that blatantly encourages what is technically intellectual property theft, TPB has had its ups and downs with the law. More than 60 police officers raided TPB’s main Stockholm data center in 2006, prompting hundreds of protesters to take to the streets in Stockholm and Göteborg.44 The raids successfully took the site offline—but only for three days. Its devoted followers moved it to another reserved domain name to get it up and running again (one of the roughly 70 domain names that TPB reported having reserved for just such contingencies).45 Then came the real crackdown: in 2009, Swedish authorities went after TPB for copyright infringement.46

Undeterred, the movement created first by Piratbyrån and advanced by The Pirate Bay—an organized expression of the “information wants to be free” principles set forth by Stewart Brand decades earlier—went on to do something almost unprecedented in modern social movements: it made the leap from online activism to inspiring a set of international political parties that have won hundreds of elections worldwide.

As might be expected with a group of antiauthority activists, these parties don’t all coordinate with each other. And some actively disavow the others. But they all operate under the same umbrella: the Pirate Parties International.

Since the first Pirate Party officially formed in Sweden in 2006 on a platform of the “protection of human rights and fundamental freedoms in the digital age,”47 the movement has spread to 68 other countries. And these parties have actually managed to insert themselves into the traditional political establishment. To date, the Pirate Party has racked up 547 separate electoral victories across the globe at the local, state, national, and even international levels. At the time of this writing, it even holds four seats in the European Parliament, the elected legislative body of the European Union.48

Thus far, the Pirate Party’s biggest victory has been in Iceland. In 2016, the so-called Panama Papers revealed that the family of the prime minister of Iceland had apparently been hiding millions of dollars in offshore accounts, triggering public accusations that they were dodging Iceland’s substantial personal income tax rate of up to 46 percent of one’s earnings. The prime minister resigned as a result of the public uproar.49 Shortly thereafter, running on an antiestablishment platform, Iceland’s Pirate Party won 15 percent of the vote, which was sufficient for an invitation to form a government. (For context, seven parties ran in that election, making a 15 percent win a substantial victory.)50

“Information wants to be free” had clearly transcended the hacker ethos to become an organizing principle for citizens around the globe who wanted to make the leap from protesting government to becoming a part of it.

The hackers were now in charge.


AT THE SECOND ANNUAL WASHINGTON IDEAS FORUM ON OCTOBER 1, 2010, Eric Schmidt, then Google’s CEO, reminded an audience of journalists, policy-makers, and politicians what was going on. “With your permission,” he said, “you give us more information about you, about your friends. And we can improve the quality of our searches.”51

“We don’t need you to type at all,” Schmidt continued. “Because—with your permission—we know where you are. With your permission—we know where you’ve been. And—with your permission—we can more or less know what you’re thinking about.”

Nervous laughter broke out across the room, prompting Schmidt to quickly interject, “Now, was that over the line? Is that right over the line?”

The “line” Schmidt referenced harked back to something he had said earlier in his remarks. “There’s what I call ‘the creepy line,’” he had said. “And the Google policy about a lot of these things,” he went on, referring to far-future technologies like brain implants, “is to get right up to the creepy line, but not cross it.”

Back in October 2010, Google had not quite crossed the creepy line. None of the tech giants had: in the fall of 2010, the global love affair with Facebook—still primarily a friends-connection network—was in full effect. In the fall of 2010, half a billion people logged on to Facebook to play FarmVille and Mafia Wars and to make use of the “Like” feature (introduced only the previous year) on each other’s posts, photos, and comments.52 In the fall of 2010, Facebook still felt innocent and hopeful, epitomized by the outbreak of the Arab Spring in December 2010. That movement’s early protests were largely organized via Facebook, which Atlantic writer Rebecca Rosen referred to as “the GPS for this revolution.”53

In 2010, technology was still exhilarating. We were enamored of our smartphones, Apple’s iPhone being less than four years old and still in the category of craved-for tech, owned by just 33 percent of Americans.54 The notion that personal technology use might be bad for us—as suggested by early research into “internet addiction” and “Facebook depression”—was, back then, still a novel and academic debate, not yet something taken seriously in popular culture.55 Social media, especially once accessible through personal devices like smartphones, was going to be a democratizing force, we thought, with the promise that it comprised “long-term tools that can strengthen civil society and the public sphere.”56

In 2010, our technology—our iPhones and Facebook and Google—was still going to empower us. Amazon, which had morphed from bookseller to everything-seller, was just going to make it easier for us to buy anything we wanted. Microsoft was just going to power our office work. And then there were Elon Musk’s ventures, Tesla and SpaceX, bursting onto the scene with moonshot projects to get us into space and across the Earth at near-supersonic speeds—the Jetsons of the pack, futuristic and sexy and exciting.

In 2010, technology had yet to become creepy. It was glorious. We were blissfully unaware of the complications it would bring.

By 2019, we’ve become well aware. We’re aware that we are more than net states’ user bases; we’ve become their populations as well. Our real lives are becoming more integrated with our digital ones. “With our permission,” we’ve allowed our lives to become reliant on net states in certain areas, trusting them to manage our data rights, defend us from cyberattacks, and sign on to diplomatic treaties for our protection. Our relationship with net states comes with unenumerated benefits and unexpected responsibilities.

As net states “know” us more—as Schmidt said, knowing where we’ve been, where we are, and what we’re thinking about—we are increasingly dependent on them in ways we couldn’t have anticipated. Thus, our roles as citizen and user are merging.

To understand how citizen-users engage with their net states, it’s helpful to first look at how citizens engage with their nation-states. In our social media–fueled age, we commonly hear how citizens make up “the public sphere.” This phrase in its current usage can be traced back to German sociologist Jürgen Habermas,57 who coined it in his dissertation, which was published in German in 1962 and translated into English in 1989.58 In this work, Habermas described the history of how citizens came to emerge as a real check against government.

The story goes that by the mid-1800s, middle-class educated citizens in Europe (“the public,” as opposed to the aristocracy) began to engage in discussions about not just their daily lives, but also subjects relating to the broader public good: what Habermas called “rational-critical debate.” En masse, these conversations would emerge as what we generally refer to as “public opinion”: the broad set of ideas and sentiments that a nation holds about political issues. Public opinion would, in theory, serve as a check on government. With legislators informed and influenced by what the public thought, they would legislate in such a way as to reflect what the public wanted. And so goes the theory of democratic societies in general: the public expresses its opinion and then—critical step here—votes for people who will make laws in accordance with those opinions, resulting in a happy, healthy society.

There are some problems with Habermas’s version of the public sphere, not least of which is the issue of inclusion. His 1960s manuscript about activities in the 1830s considered “the public” to be, frankly, wealthy white men. But the issue I want to direct your attention to is not who is part of the public sphere, but what we do as members of the public sphere.

Electorally speaking, Americans are notoriously bad at taking action. Only 61 percent of Americans voted during the 2016 presidential election,59 a 20-year low for that type of election.60 Worse, only 36 percent of Americans voted during the 2014 midterm election, an abysmal 72-year low.61 Put another way, when asked whom we want to represent us nationally, 4 out of 10 opt out. When asked whom we want to represent us locally—these are the representatives who are ostensibly members of our communities, our cities, our states—7 out of 10 of us don’t bother to weigh in.

These turnout rates have serious real-world implications, among the largest being that our elected leadership is decided by a handful of people, statistically speaking. For instance, Donald Trump’s victory in the 2016 presidential election is credited to approximately 80,000 votes in three states: that’s smaller than the population of an average three-square-mile neighborhood in Brooklyn, New York.62 While the uniquely American quirks of the electoral college influence these outcomes as well, if every eligible citizen voted, the political landscape both locally and nationally would likely look very different.

It may seem an odd comparison to make, but contrast these voter turnout rates to cell phone purchase rates. As of 2019, 96 percent of Americans over the age of 18 own cell phones, and 81 percent own smartphones.63 We go through the bother of upgrading our smartphones every 21 months, or about once every two years.64 Compare this to the act of voting in presidential elections, which takes place once every four years.

The difference between going through the trouble of getting a better phone and going through the trouble of getting a better elected representative is pretty basic: one is tangible, the other abstract. Our phones fulfill many in-the-moment purposes: they’re navigation devices, music players, cameras, internet access points, and, of course, actual telephones. On the other hand, voting doesn’t feel connected to most of our lives—on any basis, let alone a daily one. As a culture, our country is increasingly less connected to other people in general: a third of Americans haven’t even met the neighbors who live directly next door to them.65 If we don’t even share words with the people who live right next to us, how many of us, then, interact with our congressional representatives, who represent roughly 700,000 people in a district? In short, we are very much in touch with our technology. We are far less so with our democracy.

One obvious follow-up question about citizen engagement is whether merging elections with technology might help. There’s no obvious answer, however, and understanding why requires considering what it takes to be a citizen and what it takes to be a tech user.


WE USE OUR PHONES FOR MANY REASONS, BUT MOST OF THOSE REASONS boil down to trying to improve something about our lives: to check the weather, to find information we need, to reach out to someone we care about, to read articles or books or the news, or to discover something we didn’t know before. We also use our phones for less lofty reasons, such as to avoid boredom (93 percent of 18- to 29-year-olds) or even specifically to avoid having to interact with the people around us (47 percent).66 Given how much we already use our phones, then, the big question is this: If we could vote in elections on our phones, would we?

It turns out that the question of why people do or don’t vote is complex, and introducing technology into the mix only makes it more so. According to a RAND Corporation report in March 2018, plenty of countries around the world already have e-voting, but that doesn’t necessarily translate into better voter turnout.67 Research tells us that people who vote generally do so not because it’s convenient, but out of a sense of civic duty.68 Conversely, people who don’t vote aren’t generally deterred by having to physically go to a polling site. Rather, they don’t vote because they don’t feel as if their vote matters.

In other words, whether we vote comes down to power—specifically, whether we feel that we have the power to effect change. Some of us do feel powerful with respect to our votes, as though by voting we’re acting as civically engaged participants in our democracy. Others among us feel the opposite—powerless—as if we as individuals are ultimately irrelevant to the outcome of the vote and so there’s no point in even bothering.

Power matters, because if there’s anything net states give users, it’s a sense of power. Look no further than Apple’s branding of its wildly successful series of tech gadgets: the iPhone that kicked off the smartphone revolution in 2007, the iPad, iTunes, and so on. There’s a key indicator there: “I.” Me. You. Unlike political representation, tech is not an abstract concept. No, it’s tech for you and you alone. You are the center of your digital universe, and you’ve got the products that make it so.

This is a massive shift from how technology was experienced by users in the past. As recently as 2007, when the iPhone hit the market, 90 percent of American homes shared landlines. A caller phoned, and anyone in the household could answer. Nowadays, people sharing a household—the most intimate social unit we have—generally still have to divvy up their physical space and everything in it. We have to share the contents of our refrigerator—literally, our food supply—with other humans. But not our tech. Not anymore.

Next to our clothing and shoes, tech is the sole area of the home that is, in 2019, entirely personalized. And, like our clothing and shoes, we increasingly wear our tech on our person—in our pockets or, more often than not, simply in our hands: 50 percent of millennials report holding their phone in their hands not just when in use, but throughout the entire day.69

Which brings us back to voting and citizenship. For most of our recent history, we were always citizens, regardless of whether we thought about it and regardless of whether we exercised our rights. Conversely, we were only occasionally tech users. To use tech, we had to take some sort of action to engage with a digital device.

That all changed in the past decade. Increasingly, even when we’re not taking action we are tech users. Google tracks our location through our cell phone even when it’s in our pocket (and, in some cases, even when we’ve opted out of location-tracking or our phone is turned off).70 The apps we’ve downloaded collect our data, even when we’re not using them. “The more data [tech companies] get, the more useful it is,” said Abhay Edlabadkar, founder of Redmorph, a company that develops apps to block trackers on your phone. “Within the limits that your app has asked for, it can collect and scoop up as much data as it can.”71

All of this happens in the background, without needing our action to initiate use or even requiring us to pay attention. Information about where we are, who we’re with (by our physical proximity to other users), the information we seek (through our searches), and even our mood (by the nature of the content we post) is streamed, tagged, and, more often than not, bought and sold by the tech companies we’ve invited into our lives: 7 out of 10 smartphone apps share our personal data with third-party services (more on that in chapter 3).72

Here is the key: we did this. We bought the devices. We signed up. We logged on. We signed terms of service or user agreements with every bit of tech that we own.

Net states have their own version of our Constitution’s Bill of Rights: the terms of service. We just don’t generally bother to read them—and for good reason. According to a study done by Norway’s Consumer Council, it would take, on average, 31 hours to read all the terms of service on an average person’s smartphone73—more time than it would take to read the New Testament of the Bible.

What’s more, terms of service and user agreements change, often and unseen. Even if we paid attention to such things—and we historically have not—we may not have known they were changing.74 Until the internet is subject to some sort of regulation by the US Congress and international equivalents, this is unlikely to change—unless we, as citizen-users, make a change.

The good news is, we’ve already overcome one of the biggest hurdles to effecting change with respect to net states—a hurdle we’ve not yet overcome with our nation-states: engagement. Americans may not be engaged with our political process, but we are very much engaged with our tech.

We are the masters of our universe of tech. We don’t have to rely on some proxy to represent our interests—we are the keepers of our relationship with our technology. It is specifically ours, after all: like clothes and shoes, our particular profiles are molded to our highly personalized habits and preferences.

What’s more, engagement doesn’t mean taking any special action. We don’t have to vote to effect change with our net states. We—as members of the citizen-user public sphere—create public opinion through every post, every “like,” every tweet, every search, every website we visit and shop from and access. Our habits are our votes. Individually, changing our habits has an enormous impact on our relationship with net states in daily life. A collective change of our habits can make or break the very existence of a tech company, for they are only as strong as their user base. Their population. Their citizen-users. Us.

Inventor Buckminster Fuller frequently contributed to Stewart Brand’s Whole Earth Catalog, sharing his musings on everything from life on Earth to an interplanetary future. In one essay, he wrote, “Whether humanity will pass its final exams for . . . a future is dependent on you and me, not on somebody we elect or who elects themselves to represent us. We will have to make each decision both tiny and great with critical self-examination—‘Is this truly for the many or just for me?’”75

In our net state citizenship, our every decision is simultaneously for ourselves and for the many. We simply need to remember that we are more than ourselves. We are part of a public sphere, influencing, in this case, the net states that govern our existence digitally and the ways that the digital world extends into real life.

We’re no longer a people “putting up with” computers. We’re wearing them and inhabiting them. This gives the keepers of our digital lives great power over us. It is only fitting that we, in turn, demand that this power be used judiciously. Content may still be king for users, but as citizen-users, we owe it to ourselves to pay attention to more than content. In some cases, net states like Microsoft are taking action to protect us. If we keep watch, we can also take note of when they fail to.

Like the citizen, the citizen-user has responsibilities as well as rights—to keep informed, to keep engaged, and to vote—in this case, with our actions and with which tech we use. As Adlai Stevenson once said, “As citizens of this democracy, you are the rulers and the ruled, the law-givers and the law-abiding, the beginning and the end.” With so much of our lives played out in the digital sphere, we must remember that we are both in charge and overseen: the rulers and the ruled. In our hyper-individualized existences, we’d have no one to blame but ourselves if we didn’t keep our rights as well as our responsibilities in mind.





Two


Net States IRL


Citizen-users may engage with digital content, but they’re still grounded in a physical landscape. This chapter explores what net states are doing “IRL”—in real life—situating the ethereal internet, “the cloud,” in the physical world and tracing how our data is tethered to Earth through undersea cables and data centers. By tracking net state activity IRL, this chapter lays the foundation for a new way of looking at power: distributed not according to borders on a map, but through information flows, investments, and physical assets.

Never has Puerto Rico’s ambiguous status in the American experience appeared in sharper relief than in the aftermath of Hurricane Maria. From 6:15 a.m. on Wednesday, September 20, 2017, when the category 4 hurricane made landfall in what would prove to be the worst storm the island had ever seen—and the fifth-most-powerful storm ever to hit the United States—the island suddenly seemed to be on its own.1 The storm ravaged Puerto Rico, battering its 3.4 million residents with winds of 155 miles per hour and more than 30 inches of rain in a single day. As a point of comparison, Hurricane Katrina’s rainfall maxed out at nine inches after making landfall on the Gulf Coast; its major source of damage came from floodwaters.2

“It was as if a 50- to 60-mile-wide tornado raged across Puerto Rico like a buzz saw,” reported meteorologist Jeff Weber from the National Center for Atmospheric Research. “It’s almost as strong as a hurricane can get in a direct hit.”3

Just three weeks earlier, in another part of America, Houston, Texas, had been hit by Hurricane Harvey, an equally powerful storm in its own way. Though less severe in intensity (it had been downgraded to a tropical storm by the time it hit Houston), its rains were relentless, dumping 40 to 60 inches on the 2.3 million residents of America’s fourth-largest city over the course of 117 hours. Harvey flooded 40 percent of the city’s buildings and residences and broke the all-time record for hurricane rainfall in the United States.4 An estimated 82 people perished in the storm.5

The federal government’s response to the flood damage in Texas was swift and massive. The Federal Emergency Management Agency (FEMA) coordinated and deployed over 31,000 personnel from multiple agencies and organizations to the city even before the storm made landfall.6 President Trump personally toured Houston four days after the storm hit.

To Puerto Rico, home to more than a million more people than Houston, FEMA sent fewer than 500 staffers.7 The president didn’t appear for almost two weeks. And yet the damage was far more severe than what had befallen Texas. Immediately after landfall, the entire island lost electricity. More than 95 percent of cell service went out. The chief executive of the government-owned Puerto Rico Electric Power Authority, Ricardo Ramos, told CNN, “The island’s power infrastructure had essentially been destroyed.”8 Hurricane Maria’s death toll from the storm and its aftermath is estimated to be 4,645 people—more than 50 times higher than the loss of life in Texas following Hurricane Harvey.9

Time went by. Things got worse. A week after the storm, almost half the population still lacked access to drinking water. Ten days later, that number increased to 55 percent.10

Reporters covered the disaster from every angle imaginable—mostly doom and gloom: the loss of life, the dramatic absence of the federal government, the potential looming food shortages, and the consequences of a long-term lack of electricity on a population of over 3 million.11 But one reporter took a different tack, identifying a rare opportunity. At 2:45 p.m. on October 4, 2017, Brian Kahn, a reporter with the environmental news website Earther, filed a story titled “Puerto Rico Has a Once in a Lifetime Opportunity to Rethink How It Gets Electricity.”12 A Twitter user with about 9,000 followers then posted the story with the comment “Could @elonmusk go in and rebuild #PuertoRico’s electricity system with independent solar & battery systems?”

Elon Musk read that tweet and responded. Given that he had 23.7 million followers at the time, it was rather extraordinary that Musk personally replied to the post. He tweeted, “The Tesla team has done this for many smaller islands around the world, but there is no scalability limit, so it can be done for Puerto Rico too. Such a decision would be in the hands of the PR govt, PUC, any commercial stakeholders and, most importantly, the people of PR.”

Approximately eight hours later, word of Musk’s tweet had reached Puerto Rico’s governor, Ricardo Rosselló. Rosselló tweeted back, “@elonmusk, let’s talk.”

Tech entrepreneur Elon Musk had long made headlines for his almost preternatural ability to plant a flag in future-leaning endeavors. In the late 1990s, when much of the world was only just discovering the internet, Musk cofounded what would become PayPal, the online payment system.13 He designed a proposed “Hyperloop” for high-speed mass transit between San Francisco and Los Angeles, with plans resembling schematics from Star Trek.14 He created SpaceX, which has successfully completed multiple restocking trips to the International Space Station, cornering the rocket launch market.15 And he formed Tesla, whose electric cars can travel upwards of 400 miles on a single charge.16 Musk is one of those rare people who can vow to “make life multiplanetary” within his lifetime and be taken seriously.17

In 2015, Tesla broke yet more new ground when it plowed into the energy business, launching a line of rechargeable lithium-ion batteries that store solar power: the Powerwall, a kit sized for homeowners, and its larger, commercial-scale sibling, the Powerpack. The kits can be bought outright (or with loan financing) or leased.18 “We have this handy fusion reactor in the sky called the sun,” Musk noted at Powerwall’s inaugural press conference.19 But existing batteries, he added, “suck.” Powerwall, Musk promised, was going to make traditional batteries obsolete.

In the two years following the Powerwall launch, Musk lobbied hard to get consumers, businesses, and governments alike to adopt solar energy storage via his Powerwall system. He achieved only moderate initial success. In 2015—launch year—Queensland, Australia, which already had one of the world’s highest rates of household solar panel adoption (more than 88,000 such systems), entered into a year-long trial to test how Powerwall could be integrated with the state’s energy infrastructure.20 Gradually, Powerwall installations began to make gains beyond Queensland and into the rest of Australia, albeit at the consumer rather than the governmental level. In short, despite receiving a positive reception conceptually, Powerwall had yet to really gain a foothold in a large-scale energy grid. Tesla’s first big foray into a public grid came in 2016, when its batteries began powering the entire island of Ta’u in American Samoa. But benefiting a population of fewer than 600 residents, this hardly gave Musk the large-scale proof of concept he craved.21

And for Musk, Powerwall wasn’t just some do-gooder side project to save the environment. His work on solar energy was part of a larger plan, one that Musk began when he bought the solar energy company SolarCity. During a joint SolarCity-Tesla product launch in 2017, Musk spoke to a crowd of approximately 200 people. “This,” he said, gesturing to solar panels on the roofs of nearby houses, “is the integrated future. You’ve got an electric car, a Powerwall, and a Solar Roof.” As if to shoo away any naysayers in the crowd, he concluded, “It’s pretty straightforward, really.”22

With Powerwall, Musk wasn’t just launching another business. He was adding to his vision of a future in which energy and transportation would be fundamentally altered from the systems our governments have traditionally relied upon. As noted above, Musk is already transforming transportation: with Tesla (electric cars), the Hyperloop (high-speed urban transit), and SpaceX (space transit). He’s transforming energy: with SolarCity (solar panels) and Powerwall (solar energy storage). Combined, these endeavors form the puzzle pieces of an interconnected infrastructure, one that controls how future humans will physically move from place to place and gain access to energy. In so doing, Musk is building a pseudo-public utility. It’s not “public,” in that government won’t own it; Musk will. But these projects in many ways act like public utilities, in that they theoretically supply “the public” with basic needs: energy and transportation.

With this backdrop in mind, it makes sense that when Hurricane Maria came along, Musk saw an opportunity, the chance he’d been waiting for to bring Powerwall to scale. Weeks after FEMA’s much-criticized response to Puerto Rico’s island-wide power outage, Musk stepped up with a tantalizing offer: not only would he donate Powerpack systems to Puerto Rico free of charge, but they could be used as a first step in rebuilding the entire energy infrastructure of the island.

The US federal government—historically responsible, at the bare minimum, for providing basic infrastructure such as power, roads, and water to its people—barely showed up to aid 3 million of its citizens. A tech company—historically responsible for nothing but its bottom line—stepped in with an offer not only to donate equipment but to assume management of the island’s energy, an essential piece of critical infrastructure.

Just five days after the Twitter exchange between Elon Musk and Governor Rosselló, Tesla shipped hundreds (no exact number could be confirmed by reporters) of Powerpacks to the island, each of which stores up to 210 kilowatt-hours (kWh).23 For context, according to the US Energy Information Administration, the average American home uses about 900 kWh of energy per month.24 In other words, one Powerpack could provide electricity to one home for about a week—assuming it was never recharged, which, in a sunny environment such as Puerto Rico, was unlikely to be the case.
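The back-of-the-envelope arithmetic behind that one-week estimate, using the two figures above and assuming a thirty-day month, runs as follows:

\[
\frac{900 \text{ kWh/month}}{30 \text{ days/month}} = 30 \text{ kWh/day}, \qquad \frac{210 \text{ kWh}}{30 \text{ kWh/day}} = 7 \text{ days}
\]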

And Musk’s Powerpacks continued to deliver benefits to Puerto Ricans well beyond the immediate aftermath of Hurricane Maria. During an island-wide blackout seven months later, in April 2018, the Powerpack batteries supplied electricity at 662 sites across the island.25 The Powerpacks may not yet have turned into the island-wide energy infrastructure transformation Musk had hoped for. But at the very least, they put Tesla on governments’ radar.


VERMONT IS PERHAPS THE MOST STRIKING EXAMPLE OF HOW “PUBLIC” infrastructure is being transformed by net states. With just over 600,000 people, Vermont is a small state, population-wise. But it’s got a national reputation for punching above its weight when it comes to enacting progressive change. Back in 2000, Vermont became the first state to grant legal recognition to same-sex couples through civil unions, years before any state legalized same-sex marriage.26 In 2015, Burlington, Vermont, became the first city in the nation to run completely on renewable energy.27 The state has one of the most technologically advanced energy grids in the country. In 2016, nearly all of its in-state electricity was generated from renewable sources, including hydroelectric, biomass, wind, and solar.28 In fact, the state’s campaign to get residents to install renewable energy systems has been so successful that the utility commission recently cut back on awarding financial compensation for doing so.29 “Renewable energy is flourishing in Vermont,” said the commission, “and has reached a level of maturity where it can continue to be deployed with lower incentives.”

Perhaps one of the reasons Vermonters take their energy supply so seriously is that they’ll quite literally freeze if they don’t: with about 81 inches per year, Vermont’s snowfall is among the highest of any state in the US.30 While this may be great for ski slopes, it also means that most Vermonters have to repeatedly weather winter storms accompanied by