Jaron Lanier wants you to take a break from social media. Not forever—but for a significant period of time. He suggests six months. He understands that this will be hard. And he gets that it could be professionally difficult and personally isolating. But he thinks it will be worth the sacrifice, because it will help make the internet better. It might also make you feel better—more free, happier, calmer.
If you’re intrigued by this challenge, I encourage you to pick up his book-length essay: Ten Arguments for Deleting Your Social Media Accounts Now. You won’t be scolded or labeled a screen addict; instead, you’ll be asked to take a closer look at the business models behind Facebook, Twitter, and Google, which Lanier labels BUMMER, an acronym for “Behaviors of Users Modified, and Made into Empires for Rent.” In a recent TED Talk, Lanier explained it this way: “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them.”
Lanier is a Silicon Valley insider, best known for pioneering virtual reality but also for his books, especially You Are Not a Gadget (2010) and Who Owns the Future? (2013). Both argue (in part) for a more thoughtful, economically fair version of the Internet. Ten Arguments is shorter than his previous books, and it’s more urgent, with frequent references to current events. In his acknowledgments, Lanier explains that he wrote it because his 2017 book, Dawn of the New Everything, a memoir about his work with virtual reality, was so overshadowed by the news that he ended up speaking to interviewers about how “social media was playing a role in making the world newly dark and crazy.”
The Millions: I loved your book and I came to it in kind of a funny way. I read In Search of Lost Time last year, which is a novel that really makes you think about your habits, and when I finished reading it, I looked at my social media use and decided I didn’t want to be on social media anymore. I’ve been trying to convince other people to get off of social media, and when I saw your book of 10 arguments, I thought I could find some good ones.
TM: I felt like that was the main argument at the end of the book. It seems to be oriented toward individual action: Quit social media for a month and see how you feel about it.
JL: We’ve shifted the whole framework of society into this one where corporate algorithms are what know you, but I’m much more interested in the process of people knowing themselves and inventing themselves. It’s almost as if people have forgotten that. It’s very strange to me; people who are very addicted to the system will say, oh, well, I’m letting it know me, but in order for there to be something to know, you have to invent yourself, and in order to invent yourself, you have to spend some time with yourself. There’s a real quality of absurdity to me in the way we’re thinking about it.
TM: I did wonder if you’re also looking for collective action. Do you think a large group of people should do this in order to change the landscape?
JL: I think it would be a tremendously positive thing for the world if there were a massive group of people to delete their accounts all at once; however, I believe that it’s a very unlikely thing to happen. The truth is that companies like Facebook, but Facebook in particular, genuinely have been able to leverage addiction. The very definition of addiction is that it’s hard to quit. And then, on top of that, they have a digital large-scale version of addiction, which is called network effect. There’s something very reasonable that people want—which is what the internet was for—which is, they want to be able to reach each other, and they would like to be able to do things like share family pictures and all that. And as long as there’s a single company that has such a monopoly on that stuff and also actually owns all the data, in order for everyone to get off of it, they’d all actually have to do it at once and then get onto something else, and that coordination problem is impossible. Therefore, even if they weren’t personally addicted, it’s inconceivable that everyone could get off.
I understand that the ideal of everyone just leaving the stuff is hard to the point of near impossibility, but I feel I have to ask for it because you have to be able to ask for the right thing to happen. Even if the ideal is unattainable in a given era, you have to at least be able to articulate it. If you can’t do that, then you’re precluding hope for the future. In the immediate term, the fact that so many people have sympathy with the argument I’m making, combined with the fact that those same people have a hard time acting on it, will reinforce the idea that the current situation is really not democratic, not fair, not sustainable in the longer term.
I think that in, let’s say, the last half century or so, we’ve seen a few cases of massive societal change that were brought about by people who were trying to promote good ideas. For instance, littering used to be completely overwhelming, and now it’s rare; smoking used to be overwhelming, and now it’s rare; driving while drunk is not as rare as it needs to be but is certainly less common than it used to be. Those are three examples of very commonsense ideas getting implemented through effort and good intentions. So on one level, it’s like that. All of those degraded the lives of people in an immediate way; this one kind of hides the damages it’s doing until there’s an election that seems to be counter to what the majority of the people wanted. Or until a rise of horrible ethnic violence in parts of the world, or until waves of bullying, waves of teen suicide—all these kinds of things.
It’s a moral imperative to at least state what everybody should do even though it’s so hard. And then we’ll have to kind of gradually muddle our way toward something better.
TM: Given the situation now, how does an organization cope? For example, the website I write for now is an online magazine, and we use social media to post links to articles. And we need social media because it’s the way people find out about what we’re writing. Considering a site like ours, or even a larger site, like the New York Times, would you argue for them to just quit for six months or a predetermined amount of time, just to see what would happen?
JL: This is a tricky area. If people ask me for advice—and people do, even though I don’t advertise myself as an advice-giver on any personal level!—what I always say is: If you really think that using social media is vital to your career or to whatever you do, then you need to make the decision to make your career or your own efforts successful. It doesn’t do any good for anyone if you ruin your own life process. I feel very strongly about that. But there’s two things that have to be said in addition to that. One is that it’s possible that in some cases, this feeling of the necessity of social media is a bit of an illusion, and until we test it, it’s a little hard to say. I’m told that what I do should be impossible without social media accounts. I’m not the bestselling author in the world, but I have been the bestselling author in different countries at different times, and I have had bestselling books, and I somehow seem to get by without social media. I’m told, well, but you’re an exception in this way or that way. I might be, but why couldn’t there be others?
I think of a social media company, in particular Facebook and to a degree Google, as an existential mafia. They’re saying, you have to work with us or you effectively won’t exist. You’ll become invisible to everybody. Your very corporeality is in our hands, so give us a cut of your being. It’s a very strange moment. Ultimately, the power of a protection racket does rest with their ability to keep a community in fear. If only people could lose the fear, then their power would evaporate, but this gets us back to the problem of cooperation. Can I share one fantasy I’ve had?
JL: The number of websites that would say approximately what you just said—that we’re trying to reach people with this sincere, high-quality work we do, and we feel we need social media just to let the people know—the number of such sites in the world is not gigantic. Let’s say it’s less than 10,000. I’m not sure what it really is, but it’s something like that. There’s not a huge number of places where there’s multiple people working together to consistently put out good material online. If it’s really in the thousands, or even in the low tens of thousands, why can’t all those sites just get together? And come up with their own thing, with really great policies? Genuine privacy policies, no advertising—or at least, no advertising that’s personalized. I don’t object to advertising, I just object to behavior modification, which means there’s a feedback loop to your personal data. No Cambridge Analytica. No Putin. No information warfare. No bizarre, calculated creepy stuff. It would just be a thing to function for what you need, which would be a way to let people know what you have. Give people a way to manage the amount of complexity that exists online so they can find the things they care about. That could be done by a coalition of a relatively small number of people. I’ve heard of people trying that in the past, and usually what happens is Facebook treats that kind of like how Trump treats some person he has an affair with: This massive machine comes into play to try to shut it down. But I don’t know—I think something like that could happen.
TM: I wish it would. Right now, we’re starting to rely on a subscription model in part because it feels too dangerous to rely so much on the giant social media companies. So I like that idea, but it seems like the New York Times or some big site like that would have to get behind it.
JL: I think they might. I haven’t personally ever tried to have a conversation. This is the first time I’ve brought this up with someone in an interview. But I’m just really struck that for every organization that interviews me, whether it’s a really big one like the Times or a smaller one like yours, everyone has the same story: that we feel beholden to this weird company that stands between us and our readers and seems to be able to dictate to us how we can be in the world. And it’s a great shame that this is happening.
TM: How do you see this book fitting in with your other books?
JL: I feel like it’s a little different. The other books I’ve written were perhaps in a way more original. This one has a very different quality—it’s short! The other ones are big. This one is trying to organize some ideas and observations and information that really have been out there quite a bit. It does bring new ideas to the picture. In a lot of cases, it’s trying to create a focused way to think about so much information that’s already been out there, some of it from me, but a lot of it from other people.
One argument in it that is new is the reason that social networks emphasize negativity so much. And there are a few other little things, like the reason cats are more popular than dogs online. In a sense, it’s a more popular book because it’s working within what’s already out there. The other books, when they came out, I think were very different from anything else that was on the same topic.
TM: Did you have a different process for writing this book?
JL: Actually, I did. The other ones were super hard to write. This one was a little different. When I was doing the press interviews for the last one, which was about virtual reality and my life, people kept asking me questions about what had happened with technology and politics—the Trump election and Brexit. This was before the Cambridge Analytica scandal was known, but there was still a lot of tension about it. It was in responding to journalists that I realized there was a need to pull everything together in one overarching argument. It arose much more conversationally than the previous ones. And so I thank all the journalists who asked me questions in the back of the book, because it really was the prompt from them that got me to pull this thing together.
TM: It seems like you revised it up to the last minute because there are references to #MeToo and other pretty recent events—how did you decide when to stop? Were you driving your editors crazy?
JL: It was kind of a comedy, actually. I turned the thing in around New Year’s, and then something would happen that would be relevant, and I’d call up my poor editor and say wait, wait, wait, I have to add a few more things. And then I’d get this email back saying well, OK, but remember we have to get it to the copy editor. And I’d say, just a few things! And then the next week something else would happen. And I’d say stop, stop, stop, I just have to add a little bit more! And we finally had to have a conversation saying, look, this could go on forever—you have to stop.
The thing is, there’s this tension: I really believe in the book as a media form, because what a book does is almost like an encapsulation of personhood. It has a definite authorship, and you put enough in it so that you present a whole worldview and not just a response, not just a countertweet or something, and you’re committed to it enough that you believe it will be able to stick around for a few years and still mean something. But at the same time, the very thing that makes it so important makes it hard to connect to fast-changing events and a very dynamic situation. You have to find the compromise between them that will work. And so at a certain point, we just had to cut it off. And even after that, Cambridge Analytica! It was at the printer, as I say in the book, and I said, we have to acknowledge it, and so I had to make it fit on the blank space on a page so that nothing would have to be repaginated.
TM: Is there something that’s happening now that you wish you could have gotten in the book?
JL: You know, it’s every day. There’s a new study today on the correlation between smartphone use and suicide. It’s just devastating. I was just reading a report on it in the Guardian; I haven’t read the original research yet. It’s not something new, but it’s more detailed research than there had been. There’s this extraordinary filing today, in San Mateo, between two extremely unattractive companies that are fighting each other. It’s this little company that wanted to sell people’s bikini pictures on an on-demand basis. They sued Facebook for shutting it down. They claim to have uncovered this extraordinary evidence that Facebook may be even more dickish than we knew they’d been. It’s sort of like a lawsuit between Rudy Giuliani and Harvey Weinstein. You can’t really root for either of them. But there they are. It’s just been happening every day—actually, the thing that was extraordinary today was James Clapper saying that Russian information warfare would be more sophisticated, harder to notice, and perhaps more effective for the midterms than it had been in the last election, which is not a pleasant thing to read or contemplate. It’s plausible.
TM: Before you go, I want to ask you about your reading habits and what kinds of things you like to read.
JL: Lately, I’ve been kind of feeling retro. I’ve been trying to reread old stuff that meant a lot to me when I was younger to try to understand what it was that got to me—a single sentence of Nabokov. I want to try to understand why that could be so heartbreaking and amazing. And also, I have this 11-year-old who likes to read adult books. She loves the prose of Bukowski, so we have to go through and kind of edit it out, to give her readable versions of Bukowski. I have friends who write wonderfully, and it’s strange to read somebody you know, because you don’t have that distance that is good for literature, but I just gotta say, Zadie Smith continues to amaze me with everything she does, and my buddy Dave Eggers.
I try to go through at least a dozen sites every day, just to keep up, and there’s some amazing writers online, but it just goes by so fast. I don’t really get to know individual writers, and that kind of bothers me. I wish there were some way to make it easier to get to know a single person’s writing when they’re writing in a lot of different places.
TM: It is hard; there’s not a good way. Sometimes you can go to people’s websites, but people don’t update them as much as they used to.
JL: Right. This whole world would be better if it weren’t for the domination of social media companies that are bent on behavior modification. The internet was supposed to be good about this stuff, and it still could be. I think it still will be someday.
In high school I had to read a lot of William Faulkner. An ambitious literature teacher fresh from Davidson College introduced us to The Sound and the Fury, As I Lay Dying, and Light in August in a single semester. Of course it was torture, subjecting the linear teenage mind to such non-linear narration, but something about Faulkner stuck, and one day on winter break, as a storm dropped a thin blanket of snow on Atlanta, I picked up The Reivers.
Suddenly Faulkner changed. So accessible. So clear. So page-turning. I would later read critics who breezily called the Pulitzer Prize-winning book lighthearted, narratively simple, and, for these reasons, atypical Faulkner (“affectingly wistful,” Jonathan Yardley wrote). It was, as they say today, a fun read, maybe (it was implied) too much so for a heavyweight such as the bard from Oxford.
But later in life I returned to Faulkner much in the way you return to the music of your youth. And on closer inspection it struck me that nothing about The Reivers was simple. In fact, the book, a thematic wolf in sheep’s clothing, was (and remains) one of the weightiest road-trip novels ever written. The Reivers, in essence, gets very meta about movement.
The Odyssey, On the Road, Zen and the Art of Motorcycle Maintenance — these books capture long-duration mobility as a backdrop to drama. But in The Reivers, movement itself is the drama, not to mention the quickening pulse of Yoknapatawpha, a place where, the closer you look, the more the characters materialize by gathering moss.
The book opens with a mobility upgrade. Boon Hoggenbeck steals (reives — it’s a Scottish term) Lucius Priest’s grandfather’s car so he can drive from Jefferson to Memphis to visit a prostitute named Miss Corrie. Before Boon departs, Lucius, aged 11, persuades Boon to bring him along for the ride. En route, they discover that Ned McCaslin, a black man who tends to Lucius’s grandfather’s horses, is hiding in the back seat. As the car fills with characters, The Reivers indeed becomes affectingly wistful, with Huck Finn-ish coming-of-age excitement leavening the trip.
Matters become a little heavier in Memphis. Boon drops Lucius at Miss Reba’s brothel and goes searching for his “girlfriend.” Ned, in the plot’s pivotal scene, secretly barters the stolen car — the first car in Yoknapatawpha County (where it’s 1905) — for a horse — “Coppermine” — he plans to train up and race hard at a local track (under the new nom de guerre “Lightening”). With the proceeds, Ned vows to buy back the vehicle and allow the winnings to speak to his considerable equine expertise.
Critics have long characterized The Reivers as a soft critique of modernization. It’s certainly that. Horses and mules haul so many themes around Faulkner’s novels that it seems appropriate for him to grant the beasts an 11th-hour paean (this was his last novel), which he does by juxtaposing the car’s defects with the horse’s reliability.
One example stands out. Midway to Memphis, Priest’s hijacked car gets stuck in a mud hole. The men struggle to wedge it out with iron bars and a plank of wood, but the vehicle — “so huge and so immobile” — proves to be “too fixed and foundational.” Defeated, Boon pays the mud hole’s owners a few bucks to have the car dislodged by a couple of mules, animals he later describes as “already obsolete before they were born.”
What follows is as arresting as anything Faulkner ever wrote. In an instant, the car morphs from an icon of progress into a “mechanical toy rated in power and strength by the dozens of horses.” It’s no longer a shiny symbol of a modernizing South, but an instant fossil, something you’d discover in layers of bedrock, an object that’s “helpless and impotent in the almost infantile clutch of a few inches of the temporary confederation of two mild and specific elements — earth and water.” The horse, an animal Faulkner deeply understood, triumphs over the car.
But Faulkner is hunting more substantial game here. He’s after the very morality of movement itself. In Western thought, the link between movement and morality is by no means self-evident or routinely explored. But to migrate, by definition, is to go astray. And to go astray is to err — to be errant — and, in turn, to be flawed, or at least radically open to its possibilities. The Reivers honors this definition, allowing movement to constitute error — personal, historical, collective error — as well as make possible its upshot: redemption.
But error comes first. After the travelers are disengaged from the mud hole, they eat fried chicken and ham and assess the near future. “When we crossed Hell Creek,” Boon explains, “we crossed Rubicon” and “set the bridge on fire.” They feel the frisson of liberation: “the very land itself seemed to have changed…the air was very urban.” Only automotive power — such a novelty in 1905 — allows them to barter the past for a future characterized by “the mechanized, the mobilized, the inescapable destiny of America.”
But such liberation comes at a cost. When the trio eventually finds the main road to Memphis — “running string straight into distance” — the world they once knew blurs into confusion. The geography outside the gunmetal doors — “the Sabbath afternoon, workless, the cotton and corn growing unvexed now, the mules themselves sabbatical and idle in the pastures” — becomes lost to Lucius, who recalls, “I couldn’t look at it…I was too busy, too concentrated.” Hurtling through space in metallic containment quietly erodes a sense of place and the integrity such a feeling nurtures. “It was Virtue who had given up, relinquished us to Non-virtue,” Lucius remembers thinking as the car kicked up dust. “The country itself was gone.”
And then they stop at Miss Reba’s. “You’ll like it,” Boon tells Lucius.
Lucius doesn’t like it. Lucius is horrified. His experiences at the brothel culminate in a coming-of-age sequence that includes a badly cut hand, copious tears, and the tectonic realization that “I knew too much, had seen too much; I was a child no longer now; innocence and childhood were forever lost, forever gone from me.”
But what never leaves Lucius is the potential for redemption. Redemption in The Reivers is embodied in the noble form of the horse. The relationship that Lucius and Ned develop with Lightening — the bartered horse that Lucius eventually rides in two mile-long circles — restores “the country itself” to a non-automotive pace and routine. It’s on the sweaty back of Lightening — a horse maintained with mechanical precision by Ned — that Lucius transcends his fate and recovers his virtue.
The Reivers ends with this moving restoration. On the way to the race, Ned and Lucius must load Lightening onto a train car. Once in the container, the “horse’s hot ammoniac reek…and the steady murmur of Ned’s voice” blend into something “concentrated” and ineffable. Lucius, a nervous wreck about the race, says he “actually realized not only how Lightening’s and my fate were now one, but that the two of us together carried that of the rest of us, too, certainly Boon’s and Ned’s, since on us depended under what conditions they could go back home.”
Lucius and Lightening, when the first ride begins, careen down the track “as though bolted together.” With that unification, all characters return home the wiser, knowing, as Grandpa Priest would soon tell Lucius, “nothing is forgotten.”
Today, more than 50 years after The Reivers was published, a cottage industry exists to teach us to slow down and simplify the hectic pace of contemporary life. Think Shop Class as Soulcraft, You Are Not a Gadget, or Last Child in the Woods. It’s easy to dismiss this genre of literature as a wistful — that word — blend of nostalgia and self-help. Reading The Reivers, though, saps the impulse to mock. Although Boon is quick to note that “if all the human race ever stops moving at the same instant, the surface of the earth will seize,” he also learns that slowing life down enough to watch mules on sabbatical can save your soul from the perils of speed.
That yowl of pain you heard coming from the nation’s capital on Monday afternoon was the death cry of the print newspaper business as it was cut to the heart by the buyout of the Washington Post by Amazon CEO Jeff Bezos. Just forty years ago, during the Watergate scandal, the Post was an economic and cultural force potent enough to help take down a sitting president, and now it has itself been taken down by a guy who less than twenty years ago was working out of his garage selling books on the Internet.
The surprise $250 million deal has coup de grâce written all over it, and in newsrooms across the country, reporters and editors – those who still have jobs – will be grousing over the rich symbolism of one of the crown jewels of American journalism being snapped up by a mogul made rich by the very technology that killed the print business model. But once the grumbling dies down, those of us who care about journalism and culture may have to concede two obvious points. First, we damn well better hope Bezos succeeds where others have failed in figuring out how to produce professional-quality content in the digital age, not just for the sake of the Washington Post or even of the news business, but for the sake of cultural and artistic production in general. Second, of all the billionaires with the means to buy a major cultural institution like the Washington Post, Jeff Bezos might just be the one who can reinvent it for the 21st century.
The details of the Post deal are curious – and telling. For one thing, the newspaper’s parent company, which owns a diverse group of media and educational businesses including the Kaplan test prep company, is selling only its newspapers and holding onto its other businesses. For another, Bezos is buying the Post on his own, and won’t merge it into Amazon, the online retailing behemoth he still runs. Finally, Bezos has said he doesn’t intend to lay off employees and will keep Katharine Weymouth, granddaughter of legendary Post publisher and Washington powerbroker Katharine Graham, on as the newspaper’s publisher.
In other words, the Washington Post Company sees its signature property as a money-loser, and Bezos appears to be stepping in as a white knight to save a cultural institution from falling into disrepair. The history of these sorts of deals is mixed at best. After all, just three years ago the Post Company sold once-mighty Newsweek for a dollar to yet another billionaire, 92-year-old Sidney Harman, who brought in former New Yorker editor Tina Brown, with a plan to merge the aging print weekly with the website The Daily Beast. Harman, however, promptly died, and with the magazine hemorrhaging readers, Brown just last week severed ties between the website and the print magazine, selling Newsweek to start-up International Business Times.
As its purchase price suggests, the Washington Post is in far better shape than Newsweek was, but the paper’s core business of covering inside the Beltway news is under threat from websites like Politico. More importantly, the paper faces the same problem all legacy news organizations face, which is how to scale back its news operation to a level that is economically sustainable in a post-print era without doing fatal damage to the news gathering itself. To do that, though, requires a nuanced understanding of why the old business model failed in the first place.
There are two versions of the story of why print newspapers bit the dust, one a tech-geek fantasy, the other a more prosaic business tale. In the tech-geek fantasy version, spread by the likes of digivangelist Clay Shirky in his 2008 book Here Comes Everybody, newspapers were beaten at their own game by bloggers and regular citizens armed with iPhones and laptops who were able to deliver news faster and more cheaply than the old print warhorses. Ironically, Shirky and others who advance this theory are laboring under the same misconception that has plagued news executives for the last twenty years – namely, the assumption that the principal business of a newspaper is gathering news. Newspapers don’t sell news. Rather, they give news away for free in order to maintain a distribution system for business information, most of which takes the form of paid ads. Newspapers remained as lucrative as they were for as long as they did because until the introduction of the web browser in 1994, nothing else offered cheap access to the millions of ordinary consumers who picked up the paper that landed on their front curb every morning.
Understanding this distinction helps explain why television hurt but did not kill newspapers. Television long ago entered more homes than newspapers ever did, and in many ways TV, which includes moving pictures and sound, is a better delivery device for news. But again, news isn’t the product for sale; advertising is, and by its nature, television can only effectively sell broad conceptual ideas that can be communicated visually in thirty seconds. You can use television to convince millions of Americans to shop at Safeway, but you can’t very well use TV to tell Americans about everything that’s on sale that week at their neighborhood Safeway. And if you are trying to find a roommate or selling some old furniture, you can’t afford the thousands of dollars it would cost to run even a fifteen-second spot on a local station. For those kinds of tasks, you called up your local paper and bought a classified ad – until, that is, Craigslist and eBay came along and let people post those ads essentially for free.
To repeat: newspapers aren’t dying because they’re getting beat on news reporting. Newspapers are dying because the Internet separated the news content from the advertising revenue stream. For generations news executives thought they were selling news, while in fact they were selling a pipeline to consumers that companies and individuals paid to use. Now, the Internet itself is that pipeline, and we’re watching a wild scramble to see who will control it and the rafts of dollars flowing down its many tributaries. So far, tech giants like Apple, Facebook, Google, and, yes, Amazon, are winning that battle hands down.
This same battle, meanwhile, has been playing out across all forms of cultural and artistic expression. Twenty years ago, if a rock band wanted to find a wide audience for its music, it signed with a record label, which then recorded the band’s music and distributed it to stores across the country. Now, thanks to the ease of digital distribution, young musicians can bypass record labels and post their songs online for free. But if they want to make any real money from their work, they will almost certainly have to turn to Apple’s iTunes site and its direct pipeline to America’s ears.
A similar story has played out in the movie business, which only a few years ago could depend on DVD rentals to make up revenue lost at the box office. Now, thanks to streaming services like Netflix, DVD rental fees have dried up and movie studios are madly turning out special-effects-laden comic-book serials in the hopes of winning over American teenagers and Chinese moviegoers, the last groups still consistently willing to pay to watch a movie in a theater (as long as it’s loud, violent, and not overly dependent on the subtleties of spoken English).
Books have been somewhat insulated from these disruptions because, so far, most readers still prefer physical books over e-books, but the terms of the battle are the same. Publishing firms, which have for generations paid writers to produce and editors to curate books, are fighting tech giants Apple and Amazon, which view books primarily as loss leaders they can use to attract customers to their e-readers. So long as most readers continue to prefer printed books, publishing will limp along in its wounded state, rather like the news business after television but before the Internet. But there is a tipping point at which e-readers, and the recommendation engines controlled by the tech giants, could take over the curating role now played by publishing houses, thereby killing the publishing industry as we know it.
All of which brings us back to Jeff Bezos and the Washington Post. Over the last twenty years, much of the money and power once held by content producers – newspapers, record labels, movie studios, publishing houses, etc. – has transferred to the tech giants that now control the digital pipelines to consumers. This means that it’s much easier for any individual artist or journalist to reach an audience, which is a great and good thing, but it also means that the tech giants controlling the pipelines are taking ever increasing shares of profits.
For the past decade or so, we have been enjoying a strange hangover period of the pre-digital age. A generation of journalists and artists trained in the dead-tree era, who have few other marketable skills, have continued producing art and journalism even though they are getting paid far less for their work than they used to. But every year more of these content producers are retiring or moving on, and we are entering a new period dominated by the first truly digital generation of bloggers and artists who are faced with the task of rebuilding the culture industries out of the ashes of the tech explosion.
I and many others have argued that, so far at least, this generation has relied too heavily on memes and information derived from the legacy content producers. In journalism, this has meant hordes of bloggers feasting on an ever-shrinking supply of reported news from print-based news organizations. In film, this has meant kajillions of kids with camera phones riffing on existing story worlds, like Star Wars and Harry Potter, and uploading the results onto YouTube. As Jaron Lanier, a digital pioneer recently turned Internet skeptic, puts it in his 2010 book You Are Not a Gadget, we are a culture in danger of “effectively eating its own seed stock.”
Obviously, this cannot go on forever, but thus far the most powerful technological disrupters have shown little interest in investing in the content carried along their digital pipelines. Apple, with its market-making iPhone and iPad devices, sparked a creative revolution in the world of apps, but when it comes to cultural content like books, movies, and news, all the tech companies have done is made it cheaper and easier to get what you want, cutting deeply into the profit margins for the content producers in the process.
Amazon, which now controls a quarter of the book business, has of course played a huge role in this devaluing of cultural content, but in recent years Amazon has also quietly begun investing in content of its own. Since 2009, Amazon has launched imprints focusing on romance (Montlake Romance), thrillers (Thomas & Mercer), and sci-fi (47North), and now even general adult titles (New Harvest) and literary fiction (Little A). Compared with Amazon itself, these ventures are tiny, and they have run into trouble with rival booksellers like Barnes & Noble, which have refused to stock their titles. But whether these imprints succeed or fail, they demonstrate that Bezos has begun to wrap his mind around what it would mean if his company squeezed so much value out of the book business that publishing became in effect one long amateur hour.
So, is the Washington Post purchase a step in the same direction, an effort on Bezos’s part to invest directly in the content that fuels his billion-dollar pipelines? The short answer is nobody knows. By all accounts the deal came together quickly, and it may well be that Bezos himself is unsure just what he wants to do with the Post. For a man worth $25.2 billion, as Bezos is, a $250 million newspaper truly can qualify as an impulse buy. Perhaps this is simply the billionaire’s answer to collecting old-fashioned typewriters. Let’s hope that’s not the case because whatever you may think of Bezos and others who broke the pre-Internet business model, the fact is it’s broken – and who better to fix it than the man who helped break it in the first place?
Bezos, who has never worked in the news business, may be less attached to the dying print model than most print-news lifers and thus more willing to embrace digital-only innovations. As a man who has made his living tapping the powers of the interwebs, he may be better able to see that strict paywalls, which limit linking and bring in few dollars, are a dead end for most news organizations. As a CEO who recently acquired the reader hub Goodreads, he may be more open to recasting the newspaper as a community gathering spot, a sort of localized wiki combining conversation, community news, and event listings with ad revenues supporting a small, professional news staff. Most important, as a manager who has excelled at the long game, spending years investing in infrastructure for Amazon rather than diverting profits to shareholders, Bezos might be more willing to lose some money while figuring out how to marry news quality with profitability.
Or maybe not. Maybe the guy just wants a $250 million toy. But let’s hope not, because if that’s the case we stand to lose a lot more than a grand old newspaper that once helped take down a president.
Today, April 30, marks the twentieth anniversary of my last day in the newsroom of a daily newspaper. In truth, my newspaper career was neither long nor particularly illustrious. For about four years in my early twenties I worked at two small newspapers: the Mill Valley Record, the decades-old weekly newspaper in my hometown that died a few years after I left; and the Aspen Daily News, which, miraculously, remains in business today. Still, I loved the newspaper business. I have never worked with better people than I did in that crazy little newsroom in Aspen, and I probably never will. I quit because it dawned on me that, while I was a good reporter, I had neither the skills nor the intestinal fortitude to follow in the footsteps of my heroes, investigative reporters like Bob Woodward and David Halberstam. What I couldn’t know the day I left the Daily News and began the long trek that led first to graduate school and then to college teaching was the sheer destructive power of the bullet I was dodging.
The Pew Research Center’s “State of the News Media 2012” report offers a sobering portrait of what has happened to print journalism in the twenty years since I left. After a small bump during the Clinton Boom of the 1990s, advertising revenue for America’s newspapers has fallen off a cliff in the past decade, dropping by more than half from a peak of $48.7 billion in 2000 to $23.9 billion in 2011. Thus far at least, online advertising isn’t saving the business as some hoped it might. Online advertising for newspapers was up $207 million between 2010 and 2011, but in that same period, print advertising was down $2.1 billion, meaning print losses outweighed online gains by a factor of roughly ten to one.
But as troubling as the death of print journalism may be for our collective civic and political lives, it may have an even more lasting impact on our literary culture. For more than a century, newspaper jobs provided vital early paychecks, and even more vital training grounds, for generations of American writers as different as Walt Whitman, Ernest Hemingway, Joyce Maynard, Hunter S. Thompson, and Tony Earley. Just as importantly, reporting jobs taught nonfiction writers from Rachel Carson to Michael Pollan how to ferret out hidden information and present it to readers in a compelling narrative.
Now, though, the infrastructure that helped finance all those literary apprenticeships is fast slipping away. The vacuum left behind by dying print publications has been largely filled by blogs, a few of them, like the Huffington Post and the Daily Beast, connected to huge corporations, many others written by bathrobe-clad auteurs like yours truly. This is great for readers who need only fire up their laptop – or increasingly, their tablet or smartphone – and have instant access to nearly all the information produced in the known world, for free.
But the system’s very efficiency is also its Achilles’ heel. When I worked in newspapers, a good part of my paycheck came from sales of classified ads. That’s all gone now, thanks to Craigslist and eBay. We also were a delivery system for circulars from grocery stores and real estate firms advertising their best deals. Buh-bye. Display ads still exist online, but advertisers are increasingly directing their ad dollars to Google and Facebook, which do a much better job of matching ads to their users’ needs. Add to this the longer-term trend of locally owned grocery stores, restaurants, and clothing shops being replaced by national chains, which draw more business from nationwide TV ad campaigns, and the economic model that supported independent reporting for more than a hundred years has vanished.
Without a way to make a living from their work, most bloggers are hobbyists, and most hobbyists come at their hobby with an angle. So, you have realtor blogs that tout local real estate and inveigh against property taxes. Or you have historical preservation blogs that rail against any new construction. Or you have plain old cranks of the kind who used to hog the open discussion time at the beginning of local city council meetings, but now direct their rants, along with pictures, smart-phone videos, and links to other cranks in other cities, onto the Internet. What you don’t have is a lot of guys like I used to be, who couldn’t have cared less about the outcome of the events they covered, but were paid a living wage to present them accurately to readers.
The debate over the downsides of the Internet tends to focus on the consumer end, arguing, as Nicholas Carr does in his bestseller, The Shallows, that the Internet is making us dumber. That may or may not be true – I have my doubts – but as we near the close of the second decade of the Internet Era, we may be facing a far greater problem on the producer end: the atrophying of a central skill set necessary to great literature, that of taking off the bathrobe and going out to meet the people you are writing about. I mean to cast no generational aspersions toward the web-savvy writers coming up behind me, but having done both, I can tell you that blogging is nothing like reporting. Just about any fact you can find, or argument you can make, is available online, and with a few clicks of the mouse, anyone can sound like an expert on virtually any subject. And, because so far the blogosphere is, for the great majority of bloggers, quite nearly a pay-free zone, most bloggers are so busy earning a living at their real jobs that they would have no time for old-fashioned shoe-leather reporting even if they had the skill set.
But in the main, today’s younger bloggers don’t have those skills, because shoe-leather reporting isn’t all that useful in the Internet age. Reporting is slow. It’s analog. You call people up and talk to them for half an hour. Or you arrange a time to meet and talk for an hour and a half. It can take all day to report a simple human-interest story. To win eyeballs online, you have to be quick and you have to be linked. Read Gawker some time. Or Jezebel. Or even a site like Talking Points Memo. There’s some original reporting there, but more common are riffs on news stories or memes created by somebody else, as often as not from television or the so-called “dead-tree media.” When there is an original piece online, often it comes from an author flacking for another, paying gig – a book, a business venture, a weight-loss program, a political career.
Clay Shirky, the NYU media studies professor and author of Here Comes Everybody, has suggested the crumbling of economic support for traditional print media and the original reporting it engendered is a temporary stage in the healthy process of creative destruction that goes along with the advent of any new game-changing technology. “The old stuff gets broken faster than the new stuff is put in its place,” Shirky is quoted as saying in The Pew Center’s “State of the News Media 2010” report.
Maybe Shirky is right and online news sites will discover an economic model to replace the classified pages and grocery-store ads, but as virtual reality pioneer Jaron Lanier points out in You Are Not A Gadget, we’ve been waiting a long time for the destruction to start getting creative. Lanier, who is more interested in music than writing, argues that for all the digi-vangelism about the waves of creativity that would follow the advent of musical file-sharing, what has happened so far is that music has gotten stuck in a self-reinforcing loop of sampling and imitation that has led to cultural stasis. “Where is the new music?” he asks. “Everything is retro, retro, retro.”
I have frequently gone through a conversational sequence along the following lines: Someone in his early twenties will tell me I don’t know what I’m talking about, and then I’ll challenge that person to play me some music that is characteristic of the late 2000s as opposed to the late 1990s. I’ll ask him to play the track for his friends. So far, my theory has held: even true fans don’t seem to be able to tell if an indie rock track or a dance mix is from 1998 or 2008, for instance.
I am certainly not the go-to guy on contemporary music, but, like Lanier, I fear we are creating a generation of riff artists, who see their job not as creating wholly new original projects but as commenting upon cultural artifacts that already exist. Whether you’re talking about rappers endlessly “sampling” the musical hooks of their forebears, or bloggers snarking about the YouTube video of Miami Heat star Shaquille O’Neal holding his nose on the bench after one of his teammates farted during the first quarter of a game against the Chicago Bulls, you are seeing a culture, as Lanier puts it, “effectively eating its own seed stock.”
Thus far this cultural Möbius strip hasn’t affected books to the same degree that it has the news media and music because, well, authors of printed books still get paid for having original ideas. (If you wonder why cyber evangelists like Clay Shirky keep writing books and magazine articles printed on dead trees, there’s your answer. Writing a book is a paid gig. Blogging is effectively a charitable donation to the cultural conversation, made in the hope that one’s donation will pay off in some other sphere, like, say, getting a book contract.) The recent U.S. government suit against Apple and book publishers over alleged price-fixing in the e-book market, which, if successful, would allow Amazon to keep deeply discounting books to drive Kindle sales, suggests that authors can’t necessarily count on making a living from writing books forever. But even if by some miracle, books continue to hold their economic value as they move into the digital realm, the people who write them will still need a way to make a living – and just as importantly, learn how to observe and describe the world beyond their laptop screen – in the decade or so it takes a writer to arrive at a mature and original vision.
Try to imagine what would have become of Hemingway, that shell-shocked World War I vet, if he hadn’t found work on the Kansas City Star, and later, the job as a foreign correspondent for the Toronto Star that allowed him to move to Paris and raise a family. The same goes for a writer as radically different as Hunter S. Thompson, who was saved from a life of dissipation by an early job as a sportswriter for a local paper, which led to newspaper gigs in New York and Puerto Rico. All of his best books began as paid reporting assignments, and his genius, short-lived as it was, was to be able to report objectively on the madness going on inside his drug-addled head.
In 2012, we live in a bit of a false economy in that novelists and nonfiction writers in their thirties and forties are still just old enough to have begun their careers before content began to migrate online. Thus, we can thank magazines for training and paying John Jeremiah Sullivan, whose book of essays, Pulphead, consists largely of pieces written on assignment for GQ and Harper’s. We should also be thankful for Gourmet magazine, which, until it went under in 2009, sent novelist Ann Patchett on lavish, all-expenses-paid trips around the world, including one to Italy, where she did the research on opera singers that fueled her bestselling novel, Bel Canto. In a quirkier, but no less important way, we can thank glossy magazines for The Corrections by Jonathan Franzen, who supported himself by writing for Harper’s, The New Yorker, and Details during his long, dark night of the literary soul in the late 1990s before his breakout novel was published.
Those venues – most of them, anyway – still exist, but they are the top of the publishing heap, and the smaller, entry-level publications of the kind I worked for twenty years ago are either dying or going online. Increasingly, my decision to leave journalism to enter an MFA program twenty years ago seems less a personal life choice than an act guided by very subtle, yet very powerful economic incentives. As paying gigs for apprentice writers continue to dwindle, apprentice writers are making the obvious economic choice and entering grad school, which, whatever its merits as a writing training program, at least has the benefit of possibly leading to a real, paying job – as a teacher of creative writing, which, as you may have noticed, is what most working literary writers do for a living these days.
Perhaps that is what people are really saying when they talk about the “MFA aesthetic,” that insular, navel-gazing style that has more to do with a response to previous works of fiction than to the world most non-writers live in. Perhaps the problem isn’t with MFA programs at all, but with the fact that, for most graduates of MFA programs, it’s the only training in writing they have. They haven’t done what any rookie reporter at any local newspaper has done, which is to observe a scene – a city council meeting, a high school football game, a small-plane crash – and then write about it on the front page of a paper that everybody involved in that scene will read the next day. They haven’t had to sift through a complex, shifting set of facts – was that plane crash a result of equipment malfunction or pilot error? – and not only get the story right, but make it compelling to readers, all under deadline as the editor and a row of surly press guys are standing around waiting to fill that last hole on page one. They haven’t, in short, had to write, quickly, under pressure, for an audience, with their livelihood on the line.
It is, of course, pointless to rage against the Internet Era. For one thing, it is already here, and for another, the Web is, on balance, a pretty darn good thing. I love email and online news. I use Wikipedia every day. But we need to listen to what the Jaron Laniers of the world are saying, which is that we can choose what the Web of the future will look like. The Internet is not like the weather. It isn’t something that just happens to us. The Internet is merely a very powerful tool, one we can shape to our collective will, and the first step along that path is deciding what we value and being willing to pay for it.
With the increasingly important role of intelligent machines in all phases of our lives–military, medical, economic and financial, political–it is odd to keep reading articles with titles such as Whatever Happened to Artificial Intelligence? This is a phenomenon that Turing had predicted: that machine intelligence would become so pervasive, so comfortable, and so well integrated into our information-based economy that people would fail even to notice it.
—Ray Kurzweil, The Age of Spiritual Machines
1. Things are in the Saddle
Sometime in the sixth century, Saint Benedict of Nursia, founder of the monastery at Monte Cassino in Italy, required his monks to pray at seven scheduled times throughout the day. Given this rather assiduous prayer regimen, it is perhaps unsurprising that Christian monks — requiring a more exact form of timekeeping — were the ones who propelled innovations in clock technologies. Ultimately, the mechanism they produced would have unexpected consequences on the world outside the hallowed confines of the monastery. By the fourteenth century, the mechanical clock was a staple of the European urban landscape, a clanging device that monitored the hours and announced the day’s exigencies. Applying new pressures of punctuality and scheduling to life, the clock dramatically revamped how people worked and traveled and leisured. It operated as a haunting reminder of time’s inexorable progress. As T.S. Eliot said, “Because I know that time is always time / And place is always and only place / And what is actual is actual only for one time / And only for one place.” A simple ticking device changed reality.
Throughout history the best minds have debated about the ways in which technology has impressed itself on humanity. In The Shallows: What the Internet is Doing to Our Brains, Nicholas Carr outlines the argument rather nicely. On one side are “technological determinists” who believe that the technologies we use autonomously determine the type of people we become, without our consent. By way of example, one could point to the invention of the clock, or other seminal technologies. Marx wrote, “The hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist.” Emerson posited, “Things are in the saddle / And ride mankind.” On the other end of the debate are “technological instrumentalists” who contend that our technologies are “the means to achieve our ends; they have no ends of their own.” In the post-millennial world, the debate is one of paramount importance, but is rarely given serious attention and compelling dramatization in the contemporary literary novel. This is not to say that contemporary novelists aren’t interested in the subject (see Jonathan Franzen’s “Liking is for Cowards: Go for What Hurts” or Zadie Smith’s “Generation Why”). Or that historians and sociologists haven’t turned the subject into something of an intellectual G-spot (see Jaron Lanier’s You Are Not a Gadget, Sherry Turkle’s Alone Together, or Carr’s The Shallows).
2. A Master of a Language that Never Rears its Head
Such questions about technological determinism occupy a vital space in the artistic medulla of the Portuguese novelist Gonçalo M. Tavares. Since 2001, Tavares has been publishing plays, story collections, essays, and novels while concomitantly snagging a whole bevy of literary prizes. Born in 1970, Tavares won the 2005 José Saramago Prize for his novel Jerusalem, which inspired the Nobel Prize-winning Saramago himself to rather hyperbolically state that “in thirty years’ time, if not before, [Tavares] will win the Nobel Prize, and I’m sure my prediction will come true…Tavares has no right to be writing so well at the age of 35. One feels like punching him.” It is perhaps unsurprising then that I approached Tavares’s books with equal amounts of skepticism and expectation, since there was simply no way a writer who’s only in his early 40s could be generating this kind of revelatory and cornea-brightening work without my having heard whisper and murmur of it. Turns out I was wrong. His books made me want to throw an envy-inspired uppercut, too.
His novels Jerusalem, Learning to Pray in the Age of Technique, Klaus Klump: A Man, and Joseph Walser’s Machine (out this month in paperback from Dalkey Archive Press) constitute the twisted and ruminative series called The Kingdom, which explores the pathology of political ambition, the logic of human cruelty, the collateral damage of madness, and the senescence of morality in the technological age. Though soul-withered and psychotic, his characters are also breathtakingly smart and often interrupt the narrative action to pontificate on moral relativism, power, politics, and the telos of the individual. Combine the villainous unction of Hannibal Lecter with the grisly impulses of Patrick Bateman and you’ll have a composite psychological sketch of Tavares’s more disturbing creations. His less menacing characters range from the schizophrenic and mentally unstable to the pathologically detached. But his characters aren’t simply fictional renderings of Oliver Sacks’s patients. Instead, Tavares creates an assemblage of people who take the prevailing ethics and logics of society — capitalism, corporatism, and intellectual technology — and radicalize them in an effort to muse about what would happen to humanity if our political and economic efficacy trumped the importance of our souls.
3. As if Humans Were Substances that Thought, Substances with Souls
Joseph Walser’s Machine is a slender though pithy meditation on the ways in which systemic routines — economic, medical, industrial, and political — scour the human psyche to a bloodless husk. Like all of Tavares’s novels, Machine is set in an unidentified European city and is populated by characters with vaguely Germanic-sounding names. The year is unknown, but the psychic aura of the book suggests late-millennium panic. An odd hybrid of social critique, philosophical tract, and black comedy, Machine opens with an oblique snapshot of Joseph Walser’s life. “He was a strange man… He wore a simple pair of pants, almost like a peasant’s, and his hazel-colored shoes were absolutely out of style… Walser was a collector. Of what? It’s too early to say.” His days are existentially moored by routines: He gets up in the morning, shaves, eats a sensible breakfast, goes to his job in a factory owned by the most important businessman in the city, breaks for lunch around two, and walks home around six. Walser maintains a somnambulant existence in which the events of the outside world constitute a predictable landscape, a scheduled set change on a choreographed stage. Everything is ordinary, nothing spontaneous. Even the threat of war that looms over his city gets regarded as just another variable in the cyclical rhythms of history: “The technique of influencing men by frightening them about things that don’t exist yet is ancient. It’s happening again. There’s talk of a military unit approaching with great appetite.” Numb to these expected political events, Walser similarly views other people — his wife Margha, his boss Klober Muller, and his friends — with pathological blankness. Consider Walser’s impassive discovery of his wife’s affair with Muller. He fails to confront either of them and instead resumes the quiet fulfillment of the rote.
Another staple of his routine is the weekly Dice Game he plays with his coworkers, some of whom eventually plan and execute an act of terrorism against the occupying army, which, even though it’s intended to be a catalyst for disorder, only further confirms that this type of violent resistance is to be expected during wartime. Because the Dice Game allows the men to gamble on chance, it seemingly offers a necessary break from the deadening routines of daily life. But the omniscient narrator explains the paradox of the game: “There was nothing lacking, everything was there already, in the game, nothing new could crop up to disrupt the proceedings. There were six numbers struck to the die and they weren’t going anywhere. There was no seventh cipher, no seventh hypothesis. Six was the limit.” It seems then the habitual decision to submit to chance and unpredictability every week is, for Walser, just another kind of routine. All this perhaps explains why his emotional potential amounts to that of a storefront mannequin. Other people misconstrue his silence and carelessness toward the outside world as a faculty for proletarian obedience.
Walser’s relationship with his machine is the only bond in his life that demands his precise attention:
Walser had long since operated the machine with unceasing concentration, since, from the beginning, he’d realized the following: if the machine could, in the worst case, as a result of a mistake, kill him, him the honorable citizen Joseph Walser, in peacetime, the most tranquil of times, while lazy children played in the parks on Sundays, then he, Joseph Walser, was, after all, at war, for he was dealing with a dangerous friend, a friend that was potentially an enemy, a mortal enemy, because it could — not in a few months or a couple of days, but in a second — turn into that which seeks to inflict bodily harm. The foundation of his very existence — this machine — which supported his family and was, therefore, what saved him, day after day, from being some other person, eventually his own negative, the opposite of the Man that he thought himself to be… but in saving him day after day, the machine also constantly threatened him, without abeyance.
This almost symbiotic relationship with his machine imbues Walser’s life with meaning. In fact, whenever he turns it off, he is overcome by an acute apprehension, wondering whether his own heart has stopped. Eventually, the machine makes good on its threat and severs Walser’s index finger, a gruesome incident that occurs while his friends from the factory detonate a bomb on a nearby street, which momentarily sends the city into hysterics. In the face of such chaos, the citizens begin the expected triage and abide a municipally enforced emotional prudence — carrying out routines to create some semblance of normal life. Walser’s own routine involves collecting shards of metals broken off from various mechanisms and storing them in his study as a kind of shrine to technological rationality. It is his way of constructing meaning — a personal logic — out of entropy. Despite the backdrop of war and the privations of his personal life, Walser all the while behaves with the chilly equanimity of a heart surgeon. At one point, he considers himself to be a great man because he is “prepared to not love anyone…” Such a formulation of human greatness allows him to commit twisted actions — having sex with his dead friend’s wife, stealing the belt off the corpse of a recently murdered man, which he adds to his metal collection — the consequences of which converge in one of the strangest endings to a novel I’ve read, up there with Don DeLillo’s Americana, Jonathan Littell’s The Kindly Ones, or Tom McCarthy’s Remainder.
What should by now be rather obvious is that Joseph Walser isn’t intended to strike us as a flesh-and-blood human being. This is not to suggest that Machine eschews the aesthetics of Realism, though. Instead, Tavares plumbs the internal life of a character who has metamorphosed, or is slowly metamorphosing, into a machine, a type of person whose morality is not a divine deliberation of how best to live but a mechanical operation. If the requested action is executed successfully — regardless of whether we find that action laudable or opprobrious — then the action is good. Such is the compassionless morality of Tavares’s Kingdom, where people value machines more than they do human beings. These characters are atomized individuals who are slaves to their own base desires, who fail to see themselves as part of a collective.
I teach literature and writing at a small college in Wisconsin, and oftentimes center my composition courses on discussions about how digital technologies are changing our definitions of communication, friendship, love, and morality. We read Jaron Lanier and Nicholas Carr, Martin Buber and Ray Kurzweil. APA studies about how Facebook increases narcissistic tendencies among its users, a documentary about Internet Addiction Recovery Camps in Asia, and Thomas de Zengotita’s essay from Harper’s about “the numbing of the American mind” are a few examples of the vast catalog of arguments we encounter throughout the semester. Thoughtful and at times intimidatingly smart, my students have curious reactions to these texts. They confirm the desensitization to reality and the ablation of serious thought that digital technologies eventuate, while at the same time praising the convenience, efficiency, and sleekness of their smartphones, which they often utilize in class to fact-check my lectures (a practice that fills me with a kind of searing dread). Recently, my students and I were discussing the ways in which texting and Facebook posts and instant messaging reduce our expectations of human interaction — one should note that the majority of my students absolutely abhor making actual, voice-to-voice phone calls. They related anecdotes about text message conversations gone awry (“I couldn’t tell if he was being sarcastic!” or “I thought she was pissed at me!”). One student recalled a face-to-face conversation she recently had with her friend. She said this friend is the kind of funny where whenever you’re around her you find yourself clutching your stomach and begging her to stop. Anyway, the friend told a joke, and instead of laughing, instead of giving the friend some expressive indication that she thought the joke was that delicious blend of irreverence and wisdom, my student sat there stone-faced and saw a familiar acronym fill up the IMAX of her head.
It read: “LOL.” As she bravely recounted the anecdote, speaking with that special intensity that attends a kind of personal revelation, she seemed preoccupied in a haunted way.
Now, I’m not saying that my student bears even a remote likeness to a single one of Tavares’s characters. In fact, she’s a terribly nice person who I have faith will do important things with her life. But we shouldn’t disregard the seriousness of the incident, for it speaks to the subtle and disturbing ways in which technologies sometimes limit or reduce our appreciation of reality, our ability to sensitively apprehend and partake in reality. While it’s easy to sneer or bristle at Tavares’s characters, to close his books and say, “Boy, that was weird,” I don’t think these are meant to be portraits of the cold and the sociopathic, the damaged and the damned. What makes these books necessary, urgent, and arguably genius is Tavares’s unapologetic presentation of characters who follow contemporary society’s prevailing ethics—the ethics of efficient machines—to their logical and devastating ends: a world without what Tavares calls “the efficient distribution of divine breath.” Told in pellucid prose, Joseph Walser’s Machine is a terrifying and mesmeric novel, offering a dark premonition of where we might be headed and what we might become.
Problems are matters of faith. They need to be believed in before they can be solved, and their solutions are always shaped by the ways the problems are defined. In this way, Jane McGonigal’s Reality is Broken: Why Games Make Us Better and How They Can Change the World founders as soon as it begins. McGonigal is a researcher at the Institute for the Future and one of the most accomplished alternate reality game designers in the world. Her book offers a solution to a problem that she never really defines and that probably doesn’t exist.
McGonigal’s book is an evangelical pamphlet. It doesn’t discover a new problem so much as it insists that optimism is our destiny and that amusement, through the power of interactive systems, will accelerate it. “Today’s best games help us realistically believe in our chances for success,” McGonigal declares, and so by using their essential structure for more socially beneficial purposes than killing orcs and grenade bombing aliens we might one day “change the world.”
McGonigal enumerates reality’s broken parts in a laundry list of suffering: obesity, global warming, starvation, poverty, the loneliness of the elderly, attention deficit disorder, and the modern rise of clinical depression. In short, games will fix everything.
It’s often argued that video games are a new medium with exciting possibilities, an inheritor of the advancements of the written word and moving image that preceded them. Arguing that video games themselves constitute a medium is easy, but it’s an inaccurate belief, one that warps many of McGonigal’s arguments.
To McGonigal, games offer us “intense, optimistic engagement with the world around us.” They are defined by four essential traits: a goal, rules, a feedback system, and voluntary participation. These are a decent reduction of the new medium, but they better describe interactive systems than they do video games.
There is indeed a new medium afoot, but it’s one that includes Google, World of Warcraft, Mario, and Microsoft Office. The medium is interactive system design, and video games are the non-productive, emotional face. In the way that movies occupy the same medium as training videos while serving a dramatically different purpose, video games are formed with the same essential elements as Excel. They both have a goal, respond to user input, have limits on recognizable input, and both can be taken or left.
This might seem like a small distinction but it’s an essential one that confounds almost every argument in McGonigal’s book. “Games aren’t leading to the downfall of human civilization,” McGonigal writes. “They’re leading us to its reinvention.” Replacing “games” with “interactive systems” makes this claim more tenable, allowing the possibility to include Wikipedia, Groupon, Facebook, LexisNexis, Google’s cloud of services, and the emotional vivacity of Tetris, Ico, and Wii Sports.
“Games don’t distract us from our real lives,” McGonigal claims. “They fill our real lives: with positive emotions, positive activity, positive experiences, and positive strengths.” This might also be true if we accept games as the artistic subset of interactive systems, but it should then be expanded beyond the scope of the “positive.” All art and emotional expression is catastrophically hamstrung when limited only to the positive and optimistic.
To McGonigal’s credit, Reality is Broken is filled with specific examples of games modeled around the desire to change the world for the better. She describes SuperBetter, an alternate reality game she invented after suffering a concussion in 2009. McGonigal’s recovery period was long and drawn out, lasting several months during which she was instructed to avoid exertion, refrain from work, and spend as much time as possible resting.
For a woman with a proactive disposition, this state of living quickly became torturous. Rather than wallow in self-pity, McGonigal decided to make a game of her recovery. She constructed a secret identity, and then identified friends, family members, and people in her neighborhood who could play a role in her recovery.
She listed all of the negative behaviors that would slow her recovery (e.g., caffeine, vigorous exercise, email, work anxiety). Then she defined specific actions that could contribute to her recovery, which she then assigned points to so that she could feel like she was making quantifiable progress (cuddling with her dog, listening to podcasts, excursions to the department store to smell different perfumes). Without this game-like structure, McGonigal’s recovery was slow and painful, but within the motivational bounds of points, objectives, and identifiable “enemies” she was able to accelerate her recovery dramatically.
There are many similar examples. McGonigal describes a game that encourages social exchange between older people in retirement homes and younger people, one that assigns and tracks points for completing household chores, and one that encourages players to conduct their lives as if there were an oil shortage.
Examples like these–small in scale and benefiting from a sense of goodwill among their participants–do indeed seem to have a positive impact on their subjects. But if a game is built to solve a real world problem, is it still really a game? The trouble with McGonigal’s work is that it creates a framework for simplifying human experience into quantifiable objectives and positive rewards that make sense in a system but less so in “reality.”
This is not a new criticism. Jaron Lanier, the author and one-time Web 2.0 developer, has long warned against designing systems that exploit human richness for simplified systemic objectives. Lanier describes a process of “lock-in,” where an idea must be taken as a fact or hard rule when placed in a software system.
“Lock-in removes the ideas that do not fit into the winning digital representation scheme, but it also reduces or narrows the ideas it immortalizes, by cutting away the unfathomable penumbra of meaning that distinguishes a word in a natural language from a command in a computer program,” Lanier wrote in You Are Not a Gadget: A Manifesto.
McGonigal veers into this territory when she talks about using games to make people happier by giving them discrete game-like objectives in their daily lives. This can create the impression of positive behavior in the short-term, but it also leads to long-term indifference and a diminution of the innumerable elements that naturally motivate us.
In Bright-Sided: How Positive Thinking Is Undermining America, Barbara Ehrenreich argues this kind of positive thinking “requires deliberate self-deception, including a constant effort to repress or block out unpleasant possibility and ‘negative’ thoughts.” Reading McGonigal’s description of how the most successful games define their fail-states with gratuitous exaggeration to protect players from the idea of failure (e.g., there’s still a humorous payoff when you lose), it’s hard not to see her optimism in these terms: a willful construct to force out naturally occurring forces to the contrary. This is less a reconciliation with reality and more a filtering of it into one that’s easiest for humans to process. How this makes us “better” is unclear.
McGonigal presses further in the closing third of her book with an argument that games can change the developing world for the better. Her most recent game is EVOKE, another alternate reality game commissioned by the World Bank as a way of positively affecting development in Africa and “other parts of the world.”
The game is a loose network for brainstorming with a game-like allocation of points for contribution and a fictional story to add urgency. EVOKE has led to some interesting work, including a pilot program for sustainable farming in a South African community, a project to convert glass boats into solar-powered boats in Jordan, and a communal library that requires users to contribute a piece of information for every book they check out.
While these ideas all sound promising, the history of international development is littered with optimistic ideas that all sound fine in the abstract. In practice, however, it’s entirely unclear why EVOKE is a game and not just a philanthropic network for people with Global Giving projects.
I lived in Madagascar for two years working as a Health Educator with the Peace Corps from 2003 to 2005. The United Nations Development Project had a program to build wells in the most remote villages in the arid southern part of the country where I lived. The project was simple, had an easily identifiable goal, and was well-funded by a willing community of international donors. The wells were installed and the locals enjoyed clean and easy access to drinking water for several years. Then the pumps began to break. The UNDP had no local presence; the closest office was 1,000 miles to the north. The locals lacked the parts to fix the pumps on their own because the pumps had been manufactured in Europe. And so they went back to their old way of life, fetching water from shallow rivers and the pools of rainwater that collected after the rainy season.
It’s easy to imagine how projects like this might be accelerated by game-like structure, but it’s hard to imagine the feeling of purposeful happiness making them any longer-lasting. After my two years in Madagascar, I found that the things I wanted to work on in my community—HIV education, birth control access, convincing my neighbors to start farming tomatoes instead of only cassava—were greeted with disinterest.
I struggled to explain these issues in dire terms. But how do you convince someone to care about HIV when their language doesn’t have a word for blood cell? How do you convince someone to grow tomatoes when it would cost them four months income to build a wooden fence big enough to keep the pigs, goats, and chickens out of their garden?
It’s true we have an imperfect experience of the world we live in. We struggle and fail. We tend towards dissolution over a lifetime—to understand less at the end than at the beginning. In this way, Reality is Broken is a product of the lingering adolescence of video games, a forceful assertion of general good will and ambition that will never seem more possible than in the salad years, when the medium is still unburdened by the scar tissue of failure.
Reality is Broken came from a short rant McGonigal was asked to deliver during the 2008 Game Developers Conference. The rant was inspired by a piece of graffiti McGonigal had seen in Berkeley, a sad phrase scrawled onto a sticker plastered on a wall: “I’m not good at life.” This is the emotional core of every point argued in McGonigal’s book. It’s a work of solidarity and overwhelming empathy with everyone struggling in the world and yet no one person in particular.
One of the rants that followed McGonigal’s in 2008 was by Jon Mak, a Canadian game designer and musician. Mak chose not to talk at all. Instead he started playing ambient dance music over a boombox and hopped off the dais. He ran around the conference room throwing balloons into the audience. Each balloon had an irrational message written on it, a non-sequitur snippet that teased the human tendency to search for meaning even when there is none. Without instruction the audience began hitting the balloons into the air, passing them back and forth like beach balls.
I suppose it’s possible that, in the collected time and energy we spent passing those meaningless balloons back and forth, we might instead have pooled our energies to build something, maybe creating a system to feed the homeless people wandering around the sidewalks below us. But we wouldn’t have been playing at that point, and I suspect most of the people in the room would have lost interest. Which is a good reminder that there remains a vague but real difference between play and work. I finished McGonigal’s book convinced it’s a distinction worth keeping.