When I was in graduate school, a professor introduced me to a documentary called The Century of the Self. Directed by BBC journalist Adam Curtis, it follows the rise of modern public relations, whose Austrian inventor, Edward Bernays, exploited Americans’ innate self-centeredness to sell us on everything from psychoanalysis to cigarettes. It’s an eye-opening piece of work, and one I used to rewatch once or twice a year. The last time I did, though, it occurred to me that it might not be all that relevant. Because we aren’t living in the century of the self at all anymore, but the century of the crowd.
It would be easy, I guess, to argue that the self is still ascendant since social media gives people more ways to think about themselves than ever. But a hashtag can’t go viral with just one user, nobody cares about an Instagram photo no one likes, and does a YouTube video that doesn’t get watched even exist? Even as users do the self-focused work of updating LinkedIn profiles and posting on Twitter and Facebook, they do it in the service of belonging, with an ever-present audience at the back of their minds whose attention they need if their efforts aren’t to be wasted.
In his new book World Without Mind: The Existential Threat of Big Tech, Franklin Foer argues that this shift from individual to collective thinking is nowhere more evident than in the way we create and consume media on the Internet. Because tech companies like Facebook and Google make money off the sale of our personal data to advertisers, they depend on the attention of the masses to survive. And because their algorithms shape much of what we see online, it’s to their benefit to coerce us into thinking of ourselves not as individuals but as members of groups. “The big tech companies,” Foer writes, “propel us to join the crowd—they provide us with the trending topics and their algorithms suggest that we read the same articles, tweets, and posts as the rest of the world.”
Foer started his journalism career in the late ’90s as a writer for Slate when it was still owned by Microsoft. He edited The New Republic twice: from 2006 to 2010, and again beginning in 2012, after the magazine was purchased by Facebook co-founder and millennial multimillionaire Chris Hughes. The year Foer first took the helm at TNR, Facebook was still limited mainly to students, the iPhone hadn’t yet been released, and the Internet still represented an opportunity for democratization, where a small website could attract a self-selecting group of readers simply by producing well-written articles about interesting things.
Today, there are two billion people on Facebook, which is also where a large share of people now get their news. Media organizations have adjusted accordingly, prioritizing stories that will travel widely online. Foer resigned from TNR shortly after Hughes announced he wanted to run the magazine like a startup. He uses the contentious end of his tenure there to make the case that news organizations desperate for traffic have given in too easily to the demands of big tech, selling out their readership in the pursuit of clicks and advertising dollars. The end result of this kind of corruption is sitting in the White House right now. As Foer puts it, “Trump,” the subject of thousands of sensationalist headlines notable primarily for their clickability, “began as Cecil the Lion, and then ended up president of the United States.”
Foer, of course, is writing about this topic from a position of relative privilege. He rose in his field before journalists relied on Twitter to promote their work. His career-making job was at a publication that has, more than once, made headlines for fostering an environment of racism and misogyny and a system of exclusion that may have facilitated his own way to the top. In late 2017, news about the sexual misconduct of his influential friend, longtime TNR literary editor Leon Wieseltier, spread widely and quickly on Twitter and Facebook. Perhaps aware even as he was writing that he was not in a position to launch an unbiased critique, Foer chooses to aim his polemic at the people who run large online platforms and not at the platforms themselves.
Foer doesn’t want Facebook to stop existing, but he does want greater government regulation and better antitrust legislation. He wants a Data Protection Authority, modeled on the Consumer Financial Protection Bureau, to manage big tech’s sale of our personal data. He wants the public to become more aware of the monopolies that Facebook, Apple, Amazon, and Google represent. He wants everyone to start reading novels again. And he wants news organizations to implement paywalls to protect their integrity, rather than depending on traffic for revenue.
While I agree that reading fiction is one of the only ways any of us are going to survive this era with our minds intact, implementing subscription fees to save journalism sounds like suggesting everyone go back to horse-drawn carriages to end climate change. Foer dismisses the dictum “Information Wants to Be Free” as “a bit of nineties pabulum,” but he’s wrong; if we put up blocks against information online in the form of paywalls, it’s going to find a way around them like a river around a poorly built dam.
We’re not going to go back to the way things were before, and if anything, the Internet’s information economy is going to carve out a wider and wider swath in our brains. Subscriptions work for The New Yorker and The New York Times in part because they come with built-in audiences old enough to remember when paying for information was the best way to get it. People might pay monthly fees for subscriptions to Stitch Fix and Netflix, but this model won’t sustain itself in a world full of readers who expect good writing not to cost anything.
Foer also has a higher opinion of human willpower in the face of massively well-funded efforts to dismantle, divert, and repurpose it than I do. I don’t know if the founders of Google and the big social media platforms always knew it would be possible to turn their user bases into billions of individual nodes ready to transmit information—via tweets, texts, posts, and status updates—at the expense of all their free time, but they do now. Our phones and our brains exist in a symbiotic relationship that is only going to intensify as time passes. As Foer himself notes, “We’ve all become a bit cyborg.”
The more addicted we are, the longer we spend online, the more data we give big technology companies to sell, and the less incentive they have to change. We aren’t in a position to go offline, because online is where our families, our friends, and our jobs are. Tech companies have the lobbying power, financial means, and captive audiences necessary to ensure that the stimulus-reward loops they offer never have to stop. Media organizations that take advantage of these weaknesses will grow, while those that put up paywalls, adding friction to the user experience, will wither and die.
As a writer at Slate and an editor at The New Republic, Foer was a member of the generation that helped put in place the framework of a media industry whose flaws he’s railing against. He’s unlikely to be the person who fixes it. And just as Foer can’t solve the problems inherent in an industry he helped build, big tech companies aren’t going to remedy the problems they’ve brought about. Not because they don’t want to (though why would they?) but because, for the most part, the people running these companies can’t see the full picture of what they’ve done.
In an interview with Axios’s Mike Allen, Facebook COO Sheryl Sandberg demonstrated little remorse over the role Facebook played in facilitating Russia’s interference in the 2016 presidential election via fake campaign ads. “A lot of what we allow on Facebook is people expressing themselves,” Sandberg said. “When you allow free expression, you allow free expression, and that means you allow people to say things that you don’t like and that go against your core beliefs. And it’s not just content—it’s ads. Because when you’re thinking about political speech, ads are really important.” In the universe where Sandberg lives, our problems—which include a president poised to start a war for the sake of his ego—are her problems only insofar as they damage her company’s ability to accept money from whomever it wants.
In late 2017, Twitter, Facebook, and Google were all called upon to testify before the Senate Intelligence Committee. Some members of Congress want a bill requiring big tech companies to disclose the funding sources for political ads. Facebook and Twitter have announced new internal policies meant to increase ad transparency. But how well these policies will be enforced is unknown, and, frankly, it’s difficult to imagine a world in which insanely well-capitalized companies steeped in the libertarian ethos of Silicon Valley let rules get in the way of “innovation.”
One of the best chapters in World Without Mind involves the coming of what Foer calls the Big One, “the inevitable mega-hack that will rumble society to its core.” Foer writes that the Big One will have the potential to bring down our financial infrastructure, deleting fortunes and 401(k)s in the blink of an eye and causing the kind of damage to our material infrastructure that could lead to death. The big tech companies can see the Big One coming and are bracing for it, taking lessons from the example set by the banks during the economic collapse of 2008. They’re lawyering up and harnessing resources to make sure they’ll make it through. We, the users whose fortunes will have been lost, whose data will have been mishandled, and who will have potentially suffered grave bodily harm as a result of this mega-hack, won’t fare so well.
This prediction brings to mind another recent book about the current state of technology, Ellen Ullman’s Life in Code: A Personal History of Technology. Ullman also denounces social media’s dismantling of journalism as we know it. “Now,” she writes, “without leaving home, from the comfort of your easy chair, you can divorce yourself from the consensus on what constitutes ‘truth.’” Ullman, like Foer, blames these platforms for the election of President Trump, calling Twitter the perfect agent of disintermediation, “designed so that every utterance could be sent to everyone, going over the heads of anyone in between.”
But she diverges from Foer’s pronouncement that unregulated tech companies are going to be the death of intellectual culture as we know it. Describing San Francisco, where she lives, she notes the failure of more and more startups, the financial struggles of LinkedIn before its sale to Microsoft, the mass exodus of investors from Twitter, and Uber’s chronic struggles to reach profitability. Life in Code was written before Snapchat went public, but Ullman correctly predicts that the offering won’t go well.
“The privileged millennial generation has bet its future on the Internet,” Ullman writes. “I wonder if they know the peril and folly of that bet.” Ullman, a programmer, lived through the first tech collapse. Now, she writes, conditions are ripe for a second fall. “The general public has been left standing on the sidelines, watching the valuations soar into the multibillions of dollars, their appetites whetted: they, too, want to get into the game. I fear that on the IPOs, the public will rush in to buy, as was true in 2000.”
These two dark visions of America’s future—one in which big tech brings about the end of society as we know it, and one in which it crashes under its own weight—lead to similar outcomes: CEOs take their payouts and head to their secret underground bunkers in the desert while those on the outside get left holding the bag. Both potential endings also point to a precipice that we, as a society, are fast approaching, a sense that the ground is ready to fall out from beneath us at any time.
“There has never been an epoch that did not feel itself to be ‘modern,’” Walter Benjamin writes in The Arcades Project, “and did not believe itself to be standing directly before an abyss.” Thanks to climate change, Donald Trump’s perpetual nonsense, the rise of white supremacist hate groups, and the mass shootings and terror attacks that pepper the headlines each day, it’s hard not to feel as if we’re all alive at the dawn of an apocalypse. And maybe that’s because we are. But the end that’s coming will not be all-encompassing. “The ‘modern,’” Benjamin also wrote, “is as varied in its meaning as the different aspects of one and the same kaleidoscope.”
In his book Homo Deus: A Brief History of Tomorrow, historian Yuval Noah Harari outlines the Dataist hypothesis that human beings are algorithms, elements of a massive global data processing system, the output of which was always destined to be a better, more efficient data processing system. “Human experiences are not sacred and Homo Sapiens isn’t the apex of creation,” Harari writes. “Humans are merely tools.” The endpoint of our current evolutionary trajectory, some researchers hypothesize, could look like a series of non-biological networks capable of communicating, rebuilding, repairing, and reproducing new versions of themselves without us. Harari points to theories that suggest we’ve always been headed towards that point, that it has always been what was supposed to happen, that we’re only one step in a process longer and more far-reaching than we can possibly imagine. It’s those enterprising entrepreneurs willing to exploit our connectivity-inclined, data-processing inner natures who stand to profit most from humanity’s current evolutionary moment.
In a recent feature in New York about Facebook, Mark Zuckerberg’s former ghostwriter Kate Losse, trying to remember Facebook’s “first statement of purpose,” recalls her boss saying often, “I just want to create information flow.” Where Curtis’s PR executives in The Century of the Self exploited our innate selfishness for their own gain, the Zuckerbergs of the world cash in on our uncontrollable impulse to share information, an impulse that, according to Harari, might be leading, even now, to the development of an entity that, in its pursuit of greater networking capability, will absorb human biology and then leave it behind. It sounds, I know, like science fiction. But, 15 years ago, so did Snapchat, Facebook, and the iPhone.
Right now, the tide seems to be turning against tech. Last year, New York Times writer Farhad Manjoo published a series of articles on the monopoly power of Facebook, Apple, Google, Amazon, and Microsoft. The Guardian ran a story about Facebook and Google employees safeguarding themselves against the addictive properties of the platforms they helped create. Cathy O’Neil’s look at the algorithms that shape the Internet, Weapons of Math Destruction, was longlisted for the 2016 National Book Award. Following closely on these reports, of course, have come the inevitable accusations of alarmism from technologists and the people who love them. Where that discourse will lead is hard to say.
One of the central questions authors like Foer, O’Neil, Ullman, and Manjoo seem to want to raise is this: What will our legacy be? Will we be known for having set in place the foundations for a tech industry beholden to its users’ well-being? Or will we be a blip, the last people who believed in an Internet capable of bringing about a brave new world, before everything changed? Benjamin is correct that all generations harbor an anxiety that theirs will be the last to grace the Earth before the world ends. But no generation has been so far, and ours probably won’t be either. And so to this question, I would add another: What will rise from whatever we build and then leave behind? Because for better or for worse, something will.