Science fiction often presents two nightmare visions of the future: one in which the world has ended, and one in which it hasn’t, where “progress”—technological, typically—determinedly continues, for better but mostly worse. Which you find less frightening probably depends on your faith that humanity will make the right decisions when it has to, like deciding not to implant computers inside our brains or outsource our fate to algorithms. Sometimes, the world ending isn’t the worst thing in the, well, world.
The world ends twice in Nick Clark Windo’s new dystopian science fiction novel, The Feed. It ends first as society destroys itself, anonymous hackers wreaking havoc with the titular technology—the Feeds implanted inside (mostly) everyone’s brains. It ends again as society destroys the planet, exorbitant energy consumption—to power the Feeds—turning Earth into an inhospitable oven. Windo is most interested in these doomsdays and their aftermaths; how we get there, less so.
The Feed opens on the precipice of the first apocalypse. The Feed, for a moment, remains functional. Essentially a smartphone embedded inside the skull, the Feed provides an intracranial stream of information, content, and communication. Targeted advertisements appear behind your eyes based on fleeting emotions and the people around you. The Feed measures everything from your hormone levels to the revolutions of a looter’s heaved rock, and presents the world in an onslaught of personalized metrics. Memories are stored in the cloud and shared with others as virtual reality-esque sensory experiences. People let out “unintended real laughs” instead of LOLs. It’s the Internet, mainlined directly into a person’s consciousness.
Tom and Kate Hatfield, The Feed’s protagonists, sit in an upscale restaurant, going “slow.” They’ve turned off their Feeds for the evening, fighting addiction to the technology in their brains. A short while into dinner, the restaurant’s patrons panic as their Feeds are bombarded with videos of the U.S. president’s assassination—along with a helpful link showing where to buy the sweater he’s wearing, hopefully without the bloodstains. Chaos ensues as people are “taken” in their sleep. With the Feed, getting hacked means being totally erased from your own body, like a wiped hard drive. No one knows whether the people inside their brothers’, sisters’, and fathers’ bodies are actually who they claim to be. More assassinations follow, perpetrated by the presumed taken. Society and the Feed quickly collapse.
Six years later, the remnants of humanity—those who didn’t kill themselves in the void left by the Feed’s absence—live in camps. Food is scarce, and electricity is nonexistent. Without the instantaneous access to information provided by the Feed, people have forgotten common words and rudimentary concepts of how the world works. They have to relearn how to use their brains, how to create memories, and Windo illustrates the struggle with computer terminology: brains “stall” and “judder,” feeling for the Feed like a phantom limb.
Eventually, the plot of The Feed stirs into motion. Tom and Kate’s daughter is kidnapped, and they must traverse their hollowed-out world to find her. What follows is fairly typical post-apocalyptic fare. There are cars overgrown with vegetation and foxes chewing on bones, run-ins with mysterious fellow survivors. Time travel is brought into the mix, and, subsequently, parallel dimensions. The Feed provides a rather literal deus ex machina.
Windo’s novel focuses most of its attention on this apocalypse the technology has wrought, rather than the technology itself. In The Feed, the nightmare is in the ending: the destruction of the Feed, those left grappling with the gaping hole it has left in their minds. The central question of The Feed is how to survive without it.
In a different novel with the same technological premise, M.T. Anderson’s Feed, published in 2002 with a Sean Parker-approved title, the question is how to survive with it. The technology is generally the same as Windo’s—ceaseless ads and lowest common denominator content beamed into the brain—presented in a satirical young adult novel narrated by a teenager named Titus, because Anderson understands that a generation’s technology is best experienced with teens as tour guides. The world of Feed is ruthlessly commercial: one character is described as the “little economy model” of another, suburban chain restaurants have made it all the way to the moon, and Schools™ have been taken over by corporations because it “sounds completely like, Nazi, to have the government running the schools.” In Feed, unlike in Windo’s novel, the Feed is very much working as intended.
Anderson’s satire is not always the most nuanced, but then again, neither is the world he is roasting—even less so 16 years later. In flashes of news stories intermixed with commercials, environmental disaster is ignored while the president defends big business and—as if Anderson had a premonition of the current POTUS—publicly debates the meaning of the term “shithead.” What makes Feed compelling, however, is when it reveals the future our data-addicted present portends.
We see that present in Weapons of Math Destruction by Cathy O’Neil, a bestseller in 2016 that details how big data and its algorithmic tendrils already “encod[e] human prejudice, misunderstanding, and bias” and “punish the poor and the oppressed in our society, while making the rich richer.” O’Neil works industry by industry to reveal how digital advertisers see consumers begging for targeted ads, how for-profit colleges use data to set up the most desperate students with predatory loans, and how companies use unregulated “e-scores” as a stand-in for job applicants’ employability. The data-driven programs and products O’Neil investigates “deliver unflinching verdicts, and the human beings employing them can only shrug.” Just take a look at the social credit scores already introduced in China that rank individuals’ eligibility for loans and dating apps. In 2018, your data is your fate.
Titus is the narrator of Feed, but the novel’s true protagonist is his girlfriend, Violet. She spends the book torn between resisting the Feed’s influence and succumbing to its pleasures, and ultimately suffers a fate that O’Neil shows us is not far-fetched science fiction. Violet was implanted with the Feed as a young child—still late for her generation—because her reluctant father knew it was an economic necessity, that she would need to be a consumer to get a job. But her father could only afford the cheaper model, and so when Violet and the others are hacked (in a nightclub on the moon), it’s Violet’s feed that begins to malfunction, taking her brain with it. Feed follows as Violet deteriorates, losing control of her body.
Violet is left with one option: appeal to the megacorporations that dominate the world of Feed for surgical repairs that will save her life. But she has spent the novel “trying to create a customer profile that’s so screwed, no one can market to it.” She attempts, as John Herrman recently suggests in The New York Times Magazine, to “fool our algorithmic spies.” Herrman writes of the “intentionally deceptive,” antagonistic actions we take prompted by the knowledge of big data’s omniscience. This knowledge prompts Violet to browse for a disparate collection of products, never purchasing. She purposely eludes the Feed’s gaze, and this rebellion condemns her to death. Her request to the Feed’s conglomerates is denied, as explained by the Feed’s bot-like avatar, Nina:
Unfortunately FeedTech and other investors reviewed your purchasing history, and we don’t feel that you would be a reliable investment at this time. No one could get what we call a ‘handle’ on your shopping habits.
Instead, Nina suggests checking out “some of the great bargains available to you through the feednet,” telling Violet, “we might be able to create a consumer portrait of you that would interest our investment team.” The Feed’s answer for Violet is to become a better kind of consumer to survive.
It sounds a bit maudlin, but is it so different from the GoFundMe version of healthcare that has become so popular, where life-saving care depends on your virality? Only for Anderson, it’s faceless corporations we have to impress for medical care, rather than our fellow Internet users. This is the future Feed presents to us and O’Neil sees breathing down our necks: consumerism not as self-actualization, but as self-preservation.
Like the world of Anderson’s novel, The Feed is most interesting when providing glimpses of its pre-apocalyptic backstory. We see a world populated by QR codes that personalize every product to its buyer, alongside references to continental water wars and environmental decay. Post-collapse, Tom laments how people let technology “evolve faster than their morals could keep up,” their belief that “when self-preservation kicked in, when business and survival meshed…[that it] was just a question of when that solution would be found” to keep the Feed in check. In an early scene, Tom comes upon what was once a warehouse of memory servers, and another character reminds him, “these are people”—with a person’s essence stored on a microchip, Windo literalizes the idea of people as the sum of their data. The Feed ultimately hinges on whether this is true: whether identity is the corporeal form we inhabit or the data—emotional, behavioral—we produce. In Weapons, O’Neil shows us how big data has largely already resolved that question in the affirmative.
In both Feed and The Feed, big data offers the possibility of salvation: a chance to rewrite the past, treatment for an otherwise terminal fate. In both novels, salvation is denied and devastation comes while waiting for the benevolence of the corporations behind the algorithms and content. The lesson, then, is clear: Don’t wait. (Although it’s perhaps too late for Windo to follow his own advice, as The Feed has already been sold to Amazon as a television series.) But Windo makes the drama of The Feed intensely personal. Tom’s father invented the Feed, and so our narrator is the son of Zuckerberg or Bezos; he’s resisting his family legacy. Tom plays a personal part in the time-shattering twist that comes mid-book. The characters of Feed, in contrast, are commonplace. They’re any of Facebook’s billion users, any of the millions of Amazon Prime subscribers—people just trying to be the right kind of consumers. In this way, Feed is more immediate. It’s easier to imagine ourselves binge-buying pants online as consolation than ransacking our way to the penthouse apartment of the Feed’s CEO in search of time-travel-assisted resolution.
Neil Postman’s Amusing Ourselves to Death outlines two ways in which, as he puts it, “culture may be shriveled.” There’s the fate of George Orwell’s 1984, where “culture becomes a prison,” or the fate of Aldous Huxley’s Brave New World, where “culture becomes a burlesque.” Postman published Amusing in 1985, before technology remade our cultural landscape yet again, into the world O’Neil’s Weapons shows us. Now, there seems to be a third option: culture as math problem, stores of data and finely tuned algorithms dictating what we see, what we do, and what we are. “The only thing worse than the thought it may all come tumbling down is the thought that we may go on like this forever,” Violet says in Feed. It’s both teenage melodrama and science fiction dichotomy: The Feed finds horror in the former, Feed the latter. As for the rest of us—well, I’m sure there’s an algorithm to sort that out.
When I was in graduate school, a professor introduced me to a documentary called The Century of the Self. Directed by BBC journalist Adam Curtis, it follows the rise of modern public relations, whose Austrian inventor, Edward Bernays, exploited Americans’ innate self-centeredness to sell us on everything from psychoanalysis to cigarettes. It’s an eye-opening piece of work, and one I used to rewatch once or twice a year. Last time I did though, it occurred to me that it might not be all that relevant. Because we aren’t living in the century of the self at all anymore, but the century of the crowd.
It would be easy, I guess, to argue that the self is still ascendant since social media gives people more ways to think about themselves than ever. But a hashtag can’t go viral with just one user, nobody cares about an Instagram photo no one likes, and does a YouTube video that doesn’t get watched even exist? Even as users do the self-focused work of updating LinkedIn profiles and posting on Twitter and Facebook, they do it in the service of belonging; at the back of everyone’s mind sits an ever-present audience whose attention they need if their efforts aren’t to be wasted.
In his new book World Without Mind: The Existential Threat of Big Tech, Franklin Foer argues that this shift from individual to collective thinking is nowhere more evident than in the way we create and consume media on the Internet. Because tech companies like Facebook and Google make money off the sale of our personal data to advertisers, they depend on the attention of the masses to survive. And because their algorithms shape much of what we see online, it’s to their benefit to coerce us into thinking of ourselves not as individuals but as members of groups. “The big tech companies,” Foer writes, “propel us to join the crowd—they provide us with the trending topics and their algorithms suggest that we read the same articles, tweets, and posts as the rest of the world.”
Foer started his journalism career in the late ’90s as a writer for Slate when it was still owned by Microsoft. He edited The New Republic twice, from 2006 to 2010 and later, in 2012, after it was purchased by millennial billionaire and Facebook co-founder Chris Hughes. The year Foer first joined TNR, only college students could have Facebook accounts, the iPhone hadn’t yet been released, and the Internet still represented an opportunity for democratization, where a small website could attract a self-selecting group of readers simply by producing well-written articles about interesting things.
Today, there are two billion people on Facebook, which is also where most people get their news. Media organizations have adjusted accordingly, prioritizing stories that will travel widely online. Foer resigned from TNR shortly after Hughes announced he wanted to run the magazine like a startup. He uses the contentious end of his tenure there to make the case that news organizations desperate for traffic have given in too easily to the demands of big tech, selling out their readership in the pursuit of clicks and advertising dollars. The end result of this kind of corruption is sitting in the White House right now. “Trump,” the subject of thousands of sensationalist headlines notable primarily for their clickability, “began as Cecil the Lion, and then ended up president of the United States.”
Foer, of course, is writing about this topic from a position of relative privilege. He rose in his field before journalists relied on Twitter to promote their work. His career-making job was at a publication that has, more than once, made headlines for fostering an environment of racism and misogyny and a system of exclusion that may have facilitated his own way to the top. In late 2017, news about the sexual misconduct of his influential friend, TNR culture editor Leon Wieseltier, spread widely and quickly on Twitter and Facebook. Perhaps aware even as he was writing that he was not in the position to launch an unbiased critique, Foer chooses to aim his polemic at the people who run large online platforms and not at the platforms themselves.
Foer doesn’t want Facebook to stop existing, but he does want greater government regulation and better antitrust legislation. He wants a Data Protection Authority, like the Consumer Financial Protection Bureau, to manage big tech’s sale of our personal data. He wants increased awareness on the part of the public of the monopolies Facebook, Apple, Amazon, and Google represent. He wants everyone to start reading novels again. And he wants news organizations to implement paywalls to protect their integrity, rather than depending on traffic for revenue.
While I agree that reading fiction is one of the few ways any of us are going to survive this era with our minds intact, implementing subscription fees to save journalism sounds like suggesting everyone go back to horse-drawn carriages to end climate change. Foer dismisses the dictum “Information Wants to be Free” as “a bit of nineties pabulum,” but he’s wrong: if we put up blocks against information online in the form of paywalls, it’s going to find a way around them like a river around a poorly built dam.
We’re not going to go back to the way things were before, and if anything, the Internet’s information economy is going to carve out a wider and wider swath in our brains. Subscriptions work for The New Yorker and The New York Times in part because they come with built-in audiences old enough to remember when paying for information was the best way to get it. People might pay monthly fees for subscriptions to Stitch Fix and Netflix, but this model won’t sustain itself in a world full of readers who expect good writing not to cost anything.
Foer also has a higher opinion of human willpower in the face of massively well-funded efforts to dismantle, divert, and repurpose it than I do. I don’t know if the founders of Google and the big social media platforms always knew it would be possible to turn their user bases into billions of individual nodes ready to transmit information—via tweets, texts, posts, and status updates—at the expense of all their free time, but they do now. Our phones and our brains exist in a symbiotic relationship that is only going to intensify as time passes. As Foer himself notes, “We’ve all become a bit cyborg.”
The more addicted we are, the longer we spend online, the more data we give big technology companies to sell, the less incentive they have to change. We aren’t in a position to go offline, because online is where our families, our friends, and our jobs are. Tech companies have the lobbying power, financial means, and captive audiences necessary to ensure that the stimulus-reward loops they offer never have to stop. Media organizations that take advantage of these weaknesses will grow, while those that put up paywalls, adding friction to the user experience, will wither and die.
As a writer at Slate and an editor at The New Republic, Foer was a member of the generation that helped put in place the framework of a media industry whose flaws he’s railing against. He’s unlikely to be the person who fixes it. And just as Foer can’t solve the problems inherent in an industry he helped build, big tech companies aren’t going to remedy the problems they’ve brought about. Not because they don’t want to (though, why would they?) but because for the most part, the people running these companies can’t see the full picture of what they’ve done.
In an interview with Axios’s Mike Allen, Facebook COO Sheryl Sandberg demonstrated little remorse over the role Facebook played in facilitating Russia’s interference in the 2016 presidential election via fake campaign ads. “A lot of what we allow on Facebook is people expressing themselves,” Sandberg said. “When you allow free expression, you allow free expression, and that means you allow people to say things that you don’t like and that go against your core beliefs. And it’s not just content—it’s ads. Because when you’re thinking about political speech, ads are really important.” In the universe where Sandberg lives, our problems—which include a president poised to start a war for the sake of his ego—are her problems only in so much as they damage her company’s ability to accept money from whomever it wants.
In late 2017, Twitter, Facebook, and Google were all called upon to testify before the Senate Intelligence Committee. Some members of Congress want a bill requiring big tech companies to disclose the funding source for political ads. Facebook and Twitter announced new internal policies regulating transparency. But how well these regulations will be enforced is unknown, and, frankly, it’s difficult to imagine a world in which insanely well-capitalized companies steeped in the libertarian ethos of Silicon Valley let rules get in the way of “innovation.”
One of the best chapters in World Without Mind involves the coming of what Foer calls the Big One, “the inevitable mega-hack that will rumble society to its core.” Foer writes that the Big One will have the potential to bring down our financial infrastructure, deleting fortunes and 401(k)s in the blink of an eye and causing the kind of damage to our material infrastructure that could lead to death. Big tech can see the Big One coming, and is bracing for it, taking lessons from the example set by the banks during the economic collapse of 2008. They’re lawyering up and harnessing resources to make sure they’ll make it through. We, the users whose fortunes will have been lost, whose data will have been mishandled, and who will have potentially suffered grave bodily harm as the result of this mega-hack, won’t fare so well.
This prediction brings to mind another recent book about the current state of technology, Ellen Ullman’s Life in Code: A Personal History of Technology. Ullman also denounces the dismantling of journalism as we know it by social media. “Now,” she writes, “without leaving home, from the comfort of your easy chair, you can divorce yourself from the consensus on what constitutes ‘truth.’” Ullman, like Foer, blames these platforms for the election of President Trump, calling Twitter the perfect agent of disintermediation, “designed so that every utterance could be sent to everyone, going over the heads of anyone in between.”
But she veers from Foer’s pronouncement that unregulated tech companies are going to be the death of intellectual culture as we know it. Describing San Francisco, where she lives, she notes the failure of more and more startups, the financial struggles of LinkedIn before its sale to Microsoft, the mass exodus of investors from Twitter, and Uber’s chronic struggles to reach profitability. Life in Code was written before Snapchat went public, but Ullman predicts correctly that it won’t go very well.
“The privileged millennial generation has bet its future on the Internet,” Ullman writes. “I wonder if they know the peril and folly of that bet.” Ullman, a programmer, lived through the first tech collapse. Now, she writes, conditions are ripe for a second fall. “The general public has been left standing on the sidelines, watching the valuations soar into the multibillions of dollars, their appetites whetted: they, too, want to get into the game. I fear that on the IPOs, the public will rush in to buy, as was true in 2000.”
These two dark visions of America’s future—one in which big tech brings about the end of society as we know it, and one in which it crashes under its own weight—both lead to similar outcomes: CEOs take their payouts and head to their secret underground bunkers in the desert while those on the outside get left holding the bag. Both potential endings also point to a precipice that we, as a society, are fast approaching, a sense that the ground is ready to fall out from beneath us at any time.
“There has never been an epoch that did not feel itself to be ‘modern,’” Walter Benjamin writes in the Arcades Project, “and did not believe itself to be standing directly before an abyss.” Thanks to climate change, Donald Trump’s perpetual nonsense, the rise of white supremacist hate groups, and the mass shootings and terror attacks that pepper the headlines each day, it’s hard not to feel that we’re all living at the dawn of an apocalypse. And maybe that’s because we are. But the end that’s coming will not be all-encompassing. “The ‘modern,’” Benjamin also wrote, “is as varied in its meaning as the different aspects of one and the same kaleidoscope.”
In his book Homo Deus: A Brief History of Tomorrow, historian Yuval Noah Harari outlines the Dataist hypothesis that human beings are algorithms, elements of a massive global data processing system, the output of which was always destined to be a better, more efficient data processing system. “Human experiences are not sacred and Homo Sapiens isn’t the apex of creation,” Harari writes. “Humans are merely tools.” The endpoint of our current evolutionary trajectory, some researchers hypothesize, could look like a series of non-biological networks capable of communicating, rebuilding, repairing, and reproducing new versions of themselves without us. Harari points to theories that suggest we’ve always been headed towards that point, that it has always been what was supposed to happen, that we’re only one step in a process longer and more far-reaching than we can possibly imagine. It’s those enterprising entrepreneurs willing to exploit our connectivity-inclined, data-processing inner natures who stand to profit most from humanity’s current evolutionary moment.
In a recent feature in New York about Facebook, Mark Zuckerberg’s former ghostwriter Kate Losse, trying to remember Facebook’s “first statement of purpose,” recalls her boss saying often, “I just want to create information flow.” Where Curtis’s PR executives in Century of the Self exploited our innate selfishness for their own gain, the Zuckerbergs of the world cash in on our uncontrollable impulse to share information. An impulse that, according to Harari, might be leading, even now, to the development of an entity that, in its pursuit of greater networking capability, will absorb human biology then leave it behind. It sounds, I know, like science fiction. But, 15 years ago, so did Snapchat, Facebook, and the iPhone.
Right now, the tide seems to be turning against tech. Last year, New York Times writer Farhad Manjoo wrote a series of articles on the monopoly power of Facebook, Apple, Google, Amazon, and Microsoft. The Guardian published a story about Facebook and Google employees safeguarding themselves against the addictive properties of the platforms they helped create. Cathy O’Neil’s look at the algorithms that shape the Internet, Weapons of Math Destruction, was shortlisted in 2016 for a National Book Award. Following closely on these reports, of course, have come the inevitable accusations of alarmism from technologists and the people who love them. Where that discourse will lead is hard to say.
One of the central questions authors like Foer, O’Neil, Ullman, and Manjoo seem to want to raise is this: What will our legacy be? Will we be known for having set in place the foundations for a tech industry beholden to its users’ well-being? Or will we be a blip, the last people who believed in an Internet capable of bringing about a brave new world, before everything changed? Benjamin is correct that all generations harbor an anxiety that theirs will be the last to grace the Earth before the world ends. But no generation has been so far, and ours probably won’t be either. And so to this question, I would add another: What will rise from whatever we build then leave behind? Because for better or for worse, something will.