Homo Deus: A Brief History of Tomorrow

Mentioned in:

Will the Internet Destroy Us All? On Franklin Foer’s ‘World Without Mind’

When I was in graduate school, a professor introduced me to a documentary called The Century of the Self. Directed by BBC journalist Adam Curtis, it follows the rise of modern public relations, whose Austrian inventor, Edward Bernays, exploited Americans’ innate self-centeredness to sell us on everything from psychoanalysis to cigarettes. It’s an eye-opening piece of work, and one I used to rewatch once or twice a year. The last time I did, though, it occurred to me that it might not be all that relevant. Because we aren’t living in the century of the self anymore, but in the century of the crowd.

It would be easy, I guess, to argue that the self is still ascendant, since social media gives people more ways to think about themselves than ever. But a hashtag can’t go viral with just one user, nobody cares about an Instagram photo no one likes, and does a YouTube video that doesn’t get watched even exist? Even as users do the self-focused work of updating LinkedIn profiles and posting on Twitter and Facebook, they do it in the service of belonging; at the back of everyone’s mind sits an ever-present audience whose attention they need if their efforts aren’t to be wasted.

In his new book World Without Mind: The Existential Threat of Big Tech, Franklin Foer argues that this shift from individual to collective thinking is nowhere more evident than in the way we create and consume media on the Internet. Because tech companies like Facebook and Google make money off the sale of our personal data to advertisers, they depend on the attention of the masses to survive. And because their algorithms shape much of what we see online, it’s to their benefit to coerce us into thinking of ourselves not as individuals but as members of groups. “The big tech companies,” Foer writes, “propel us to join the crowd—they provide us with the trending topics and their algorithms suggest that we read the same articles, tweets, and posts as the rest of the world.”

Foer started his journalism career in the late ’90s as a writer for Slate when it was still owned by Microsoft. He edited The New Republic twice: first from 2006 to 2010, and again beginning in 2012, after the magazine was purchased by millennial billionaire and Facebook co-founder Chris Hughes. The year Foer first joined TNR, only college students could have Facebook accounts, the iPhone hadn’t yet been released, and the Internet still represented an opportunity for democratization, where a small website could attract a self-selecting group of readers simply by producing well-written articles about interesting things.

Today, there are two billion people on Facebook, which is also where an enormous share of people now get their news. Media organizations have adjusted accordingly, prioritizing stories that will travel widely online. Foer resigned from TNR shortly after Hughes announced he wanted to run the magazine like a startup. He uses the contentious end of his tenure there to make the case that news organizations desperate for traffic have given in too easily to the demands of big tech, selling out their readership in the pursuit of clicks and advertising dollars. The end result of this kind of corruption is sitting in the White House right now. “Trump,” the subject of thousands of sensationalist headlines notable primarily for their clickability, “began as Cecil the Lion, and then ended up president of the United States.”

Foer, of course, is writing about this topic from a position of relative privilege. He rose in his field before journalists relied on Twitter to promote their work. His career-making job was at a publication that has, more than once, made headlines for fostering an environment of racism and misogyny, and a system of exclusion that may have eased his own way to the top. In late 2017, news about the sexual misconduct of his influential friend, TNR culture editor Leon Wieseltier, spread widely and quickly on Twitter and Facebook. Perhaps aware even as he was writing that he was not in a position to launch an unbiased critique, Foer chooses to aim his polemic at the people who run large online platforms and not at the platforms themselves.

Foer doesn’t want Facebook to stop existing, but he does want greater government regulation and better antitrust legislation. He wants a Data Protection Authority, like the Consumer Financial Protection Bureau, to manage big tech’s sale of our personal data. He wants increased awareness on the part of the public of the monopolies Facebook, Apple, Amazon, and Google represent. He wants everyone to start reading novels again. And he wants news organizations to implement paywalls to protect their integrity, rather than depending on traffic for revenue.

While I agree that reading fiction is one of the only ways any of us are going to survive this era with our minds intact, implementing subscription fees to save journalism sounds like suggesting everyone go back to horse-drawn carriages to end climate change. Foer dismisses the dictum “Information Wants to be Free” as “a bit of nineties pabulum,” but he’s wrong: if we put up blocks against information online in the form of paywalls, it’s going to find a way around them like a river around a poorly built dam.

We’re not going to go back to the way things were before, and if anything, the Internet’s information economy is going to carve out a wider and wider swath in our brains. Subscriptions work for The New Yorker and The New York Times in part because they come with built-in audiences old enough to remember when paying for information was the best way to get it. People might pay monthly fees for subscriptions to Stitch Fix and Netflix, but this model won’t sustain itself in a world full of readers who expect good writing not to cost anything.

Foer also has a higher opinion of human willpower in the face of massively well-funded efforts to dismantle, divert, and repurpose it than I do. I don’t know if the founders of Google and the big social media platforms always knew it would be possible to turn their user bases into billions of individual nodes ready to transmit information—via tweets, texts, posts, and status updates—at the expense of all their free time, but they do now. Our phones and our brains exist in a symbiotic relationship that is only going to intensify as time passes. As Foer himself notes, “We’ve all become a bit cyborg.”

The more addicted we are, the longer we spend online, the more data we give big technology companies to sell, the less incentive they have to change. We aren’t in a position to go offline, because online is where our families, our friends, and our jobs are. Tech companies have the lobbying power, financial means, and captive audiences necessary to ensure that the stimulus-reward loops they offer never have to stop. Media organizations that take advantage of these weaknesses will grow, while those that put up paywalls, adding friction to the user experience, will wither and die.

As a writer at Slate and an editor at The New Republic, Foer was a member of the generation that helped put in place the framework of the media industry whose flaws he’s railing against. He’s unlikely to be the person who fixes it. And just as Foer can’t solve the problems inherent in an industry he helped build, big tech companies aren’t going to remedy the problems they’ve brought about. Not because they don’t want to (though why would they?) but because, for the most part, the people running these companies can’t see the full picture of what they’ve done.

In an interview with Axios’s Mike Allen, Facebook COO Sheryl Sandberg demonstrated little remorse about the role Facebook played in facilitating Russia’s interference in the 2016 presidential election via fake campaign ads. “A lot of what we allow on Facebook is people expressing themselves,” Sandberg said. “When you allow free expression, you allow free expression, and that means you allow people to say things that you don’t like and that go against your core beliefs. And it’s not just content—it’s ads. Because when you’re thinking about political speech, ads are really important.” In the universe where Sandberg lives, our problems—which include a president poised to start a war for the sake of his ego—are her problems only insofar as they damage her company’s ability to accept money from whomever it wants.

In late 2017, Twitter, Facebook, and Google were all called upon to testify before the Senate Intelligence Committee. Some members of Congress want a bill requiring big tech companies to disclose the funding source for political ads. Facebook and Twitter announced new internal policies regulating transparency. But how well these regulations will be enforced is unknown, and, frankly, it’s difficult to imagine a world in which insanely well-capitalized companies steeped in the libertarian ethos of Silicon Valley let rules get in the way of “innovation.”

One of the best chapters in World Without Mind involves the coming of what Foer calls the Big One, “the inevitable mega-hack that will rumble society to its core.” Foer writes that the Big One will have the potential to bring down our financial infrastructure, deleting fortunes and 401(k)s in the blink of an eye and causing the kind of damage to our material infrastructure that could lead to death. Big tech can see the Big One coming and is bracing for it, taking lessons from the example the banks set during the economic collapse of 2008. They’re lawyering up and harnessing resources to make sure they’ll make it through. We, the users whose fortunes will have been lost, whose data will have been mishandled, and who will have potentially suffered grave bodily harm as the result of this mega-hack, won’t fare so well.

This prediction brings to mind another recent book about the current state of technology, Ellen Ullman’s Life in Code: A Personal History of Technology. Ullman also denounces social media’s dismantling of journalism as we know it. “Now,” she writes, “without leaving home, from the comfort of your easy chair, you can divorce yourself from the consensus on what constitutes ‘truth.’” Ullman, like Foer, blames these platforms for the election of President Trump, calling Twitter the perfect agent of disintermediation, “designed so that every utterance could be sent to everyone, going over the heads of anyone in between.”

But she diverges from Foer’s pronouncement that unregulated tech companies are going to be the death of intellectual culture as we know it. Describing San Francisco, where she lives, she notes the failure of more and more startups, the financial struggles of LinkedIn before its sale to Microsoft, the mass exodus of investors from Twitter, and Uber’s chronic struggles to reach profitability. Life in Code was written before Snapchat went public, but Ullman correctly predicts that the IPO won’t go well.

“The privileged millennial generation has bet its future on the Internet,” Ullman writes. “I wonder if they know the peril and folly of that bet.” Ullman, a programmer, lived through the first tech collapse. Now, she writes, conditions are ripe for a second fall. “The general public has been left standing on the sidelines, watching the valuations soar into the multibillions of dollars, their appetites whetted: they, too, want to get into the game. I fear that on the IPOs, the public will rush in to buy, as was true in 2000.”

These two dark visions of America’s future, one in which big tech brings about the end of society as we know it and one in which it crashes under its own weight, lead to similar outcomes: CEOs take their payouts and head to their secret underground bunkers in the desert while those on the outside get left holding the bag. Both endings also point to a precipice that we, as a society, are fast approaching, a sense that the ground is ready to fall out from beneath us at any time.

“There has never been an epoch that did not feel itself to be ‘modern,’” Walter Benjamin writes in The Arcades Project, “and did not believe itself to be standing directly before an abyss.” Thanks to climate change, Donald Trump’s perpetual nonsense, the rise of white supremacist hate groups, and the mass shootings and terror attacks that pepper the headlines each day, it’s hard not to feel as if we’re all alive at the dawn of an apocalypse. And maybe that’s because we are. But the end that’s coming will not be all-encompassing. “The ‘modern,’” Benjamin also wrote, “is as varied in its meaning as the different aspects of one and the same kaleidoscope.”

In his book Homo Deus: A Brief History of Tomorrow, historian Yuval Noah Harari outlines the Dataist hypothesis that human beings are algorithms, elements of a massive global data-processing system, the output of which was always destined to be a better, more efficient data-processing system. “Human experiences are not sacred and Homo Sapiens isn’t the apex of creation,” Harari writes. “Humans are merely tools.” The endpoint of our current evolutionary trajectory, some researchers hypothesize, could look like a series of non-biological networks capable of communicating, rebuilding, repairing, and reproducing new versions of themselves without us. Harari points to theories that suggest we’ve always been headed towards that point, that it has always been what was supposed to happen, that we’re only one step in a process longer and more far-reaching than we can possibly imagine. It’s those enterprising entrepreneurs willing to exploit our connectivity-inclined, data-processing inner natures who stand to profit most from humanity’s current evolutionary moment.

In a recent feature in New York about Facebook, Mark Zuckerberg’s former ghostwriter Kate Losse, trying to remember Facebook’s “first statement of purpose,” recalls her boss saying often, “I just want to create information flow.” Where Curtis’s PR executives in The Century of the Self exploited our innate selfishness for their own gain, the Zuckerbergs of the world cash in on our uncontrollable impulse to share information. An impulse that, according to Harari, might be leading, even now, to the development of an entity that, in its pursuit of greater networking capability, will absorb human biology and then leave it behind. It sounds, I know, like science fiction. But, 15 years ago, so did Snapchat, Facebook, and the iPhone.

Right now, the tide seems to be turning against tech. Last year, New York Times writer Farhad Manjoo wrote a series of articles on the monopoly power of Facebook, Apple, Google, Amazon, and Microsoft. The Guardian published a story about Facebook and Google employees safeguarding themselves against the addictive properties of the platforms they helped create. Cathy O’Neil’s look at the algorithms that shape the Internet, Weapons of Math Destruction, was longlisted for the 2016 National Book Award. Following closely on these reports, of course, have come the inevitable accusations of alarmism from technologists and the people who love them. Where that discourse will lead is hard to say.

One of the central questions authors like Foer, O’Neil, Ullman, and Manjoo seem to want to raise is this: What will our legacy be? Will we be known for having set in place the foundations for a tech industry beholden to its users’ well-being? Or will we be a blip, the last people who believed in an Internet capable of bringing about a brave new world, before everything changed? Benjamin is correct that all generations harbor an anxiety that theirs will be the last to grace the Earth before the world ends. But so far, no generation has been, and ours probably won’t be either. And so to this question, I would add another: What will rise from whatever we build and then leave behind? Because for better or for worse, something will.

Five Ways ‘Homo Deus’ by Yuval Noah Harari Changed My Life

Historian Yuval Noah Harari first published Sapiens: A Brief History of Humankind in Hebrew in 2011. It went on to international acclaim, with translations in more than 30 countries, and first appeared in English in 2014. The book has been climbing the bestseller lists ever since. From the Stone Age to the modern day, Sapiens explains how modern humans have come to dominate our environment through our unique ability to collaborate: we can organize flexibly around imaginary concepts, like nations, gods, and money.

Harari’s new book, Homo Deus, picks up where Sapiens left off and projects forward to imagine what we might become. In addition to nations, gods, and money, the self is also an imaginary concept. As self-made gods, what new world should we create? In an age where making sense of the world feels something like trying to take a sip of water from a fire hose, Harari has a unique ability to construct a captivating narrative while drawing from many disciplines. Those who have read Sapiens may feel that some of the ideas in the second part of Homo Deus are familiar, but I urge you not to skip ahead. The review of core ideas, like the value of money and humanism, is a necessary setup for the thrilling third part of the book.

And it’s this third part of Homo Deus, where Harari plays a sort of proviso-prophet, that is especially fascinating. The kind of criticism so often aimed at Malcolm Gladwell will undoubtedly be leveled at a historian who has dared to pluck examples from philosophy, biology, psychology, and other disciplines in order to peer into the future. But much like Gladwell’s “Revisionist History” podcast, the point of Homo Deus is not to make predictions, but to loosen the grip of our past so that we can ask better questions and be more imaginative about the future. This is a brave book for a brave new world.

Should you read Homo Deus? I did and it changed my life in a number of ways. Here are the top five:

5) I Feel Assured My History Degree Was Not a Waste of Time

Instead of shrugging when my capitalist uncle asks, again, why I bothered to study history, I will tell him this: History can liberate us from the past. If you want people to gain rights or build equality in the world, Harari makes the case that the first step is to retell the history. The new story will explain that, “our present situation is neither natural nor eternal.” Our lives become the stories we decide to tell ourselves. History majors may yet inherit the earth.

4) I Understand Why I Went Through the Torture of Childbirth, Nearly Died, and Then Willingly Had Sex and Did It All Over Again

Before the pain became so excruciating that I couldn’t think, I spent much of my second labor wondering: why? If my life is a story that I tell myself, then shouldn’t the pain I experienced during my first labor have stopped me from doing this again? Within a larger conversation about how we think and make decisions, Harari explains the peak-end rule. The narrating self, the one that tells the story, has a tendency to remember the most painful moment and the end moment, rather than a detailed play-by-play of what actually happened. Some part of me did hold a memory of the pain, but I’d since allowed my love of my baby to override the more accurate memories.

This peak-end rule also goes some way toward explaining the multiple occurrences of teething and toddlers in my life. No need to re-read Gone Girl; I am my own unreliable narrator.

3) When I Next Get in Trouble, I Will Say My Algorithm Made Me Do It

When I started this article, I stole two squares from my cousin’s chocolate bar. Since then, I’ve circled back for two more. Do I have evil in my soul? Why does my mind crave chocolate? Have I evolved in a way that makes chocolate essential to my being? Perhaps my cousin will accept one of these reasons as an excuse. Or, I could tell her that I am an avalanche of electric signals fired by billions of neurons that stimulate glands to secrete hormones and make my muscles contract in such a way that the chocolate found its way to my mouth.

Rather than a holistic chocolate-stealing entity, I am a combination of smaller parts that have come together and evolved gradually. These parts include a wad of tissue in my head that I call my brain. As a whole, my organism works like any other algorithm. Put an input, like a search query, into Google’s algorithm and you will arrive at a given result, depending on how the search engine functions that day. Put chocolate in the house and you may find that a chain of electric signals and secretions works the two squares through my algorithm, with the result that the chocolate ceases to exist in its previous form.

2) I Can Now Articulate Another Thing That Has Been Bugging Me About the U.S. Election

Since reading Homo Deus, I am no longer a practicing devotee of liberalism.

Many of us might agree that Donald Trump talks nonsense, and that he has gained popularity by pointing out that his opponents don’t make any sense either. There are times when none of it feels rational. Regardless of what side you are on, our current political imagination is dominated by liberalism. Since we don’t believe in a god or politician who is all-knowing, we believe that the best way to find truth, and the best leader, is at the level of the individual. Our new authority, a clear and consistent inner voice, guides our most important decisions.

Or does it? I don’t need to do much self-reflection to realize that my inner voice is often conflicted and inconsistent. I dredge up old memories and inflate their importance to justify my current decisions. I say I believe one thing and then do another. So do the candidates.

At the core of liberalism is a belief that we are individuals with defined senses of self. But for all our advances in science, we have never found any evidence of a “self.” Put Trump’s policies under a microscope and all you will find is a chain reaction of biochemical events. At its core, liberalism is an act of faith. In order to believe in it, you need to put rational thought aside, and that, I now realize, is exactly what has been bugging me.

1) I Will Kick Ass When I Next Play Settlers of Catan

I’m not going to tell you the winning strategy that Harari lays out for Settlers of Catan, but I am convinced that it will give me a competitive edge. There’s a problem, though. If you read the book, we might both start using the same strategy, which will significantly lessen my advantage. A third player might also read the book and know both our tactics in advance. Once Harari’s book is widely read, all my advantage is lost.

As we live in a data-driven world, my best bet might be to build an app that can crunch the strategies and variables and recommend my next best move. If the app becomes widely used, I can go on to build a networked algorithm that collects information and becomes all-knowing about the strategies used in Settlers of Catan. With my one winning strategy, I will be just one point of input, whereas my algorithm, drawing on data from a broad range of inputs, will quickly become a far superior player. At that point, do I cease to be the player and become the one who is played?

If I sound like a character from the pages of a Jeff VanderMeer novel, it’s for good reason. We currently use technology to overcome our limitations, but its capabilities may contain the seeds of our irrelevance or downfall.

In the more immediate future, however, I plan to kick ass at Settlers of Catan.
