When I was in graduate school, a professor introduced me to a documentary called The Century of the Self. Directed by BBC journalist Adam Curtis, it follows the rise of modern public relations, whose Austrian inventor, Edward Bernays, exploited Americans’ innate self-centeredness to sell us on everything from psychoanalysis to cigarettes. It’s an eye-opening piece of work, and one I used to rewatch once or twice a year. The last time I did, though, it occurred to me that it might not be all that relevant. Because we aren’t living in the century of the self at all anymore, but the century of the crowd. It would be easy, I guess, to argue that the self is still ascendant since social media gives people more ways to think about themselves than ever. But a hashtag can’t go viral with just one user, nobody cares about an Instagram photo no one likes, and does a YouTube video that doesn’t get watched even exist? Even as users do the self-focused work of updating LinkedIn profiles and posting on Twitter and Facebook, they do it in the service of belonging, with an ever-present audience at the back of their minds whose attention they need if their efforts aren’t to be wasted. In his new book World Without Mind: The Existential Threat of Big Tech, Franklin Foer argues that this shift from individual to collective thinking is nowhere more evident than in the way we create and consume media on the Internet. Because tech companies like Facebook and Google make money off the sale of our personal data to advertisers, they depend on the attention of the masses to survive. And because their algorithms shape much of what we see online, it’s to their benefit to coerce us into thinking of ourselves not as individuals but as members of groups.
“The big tech companies,” Foer writes, “propel us to join the crowd—they provide us with the trending topics and their algorithms suggest that we read the same articles, tweets, and posts as the rest of the world.” Foer started his journalism career in the late '90s as a writer for Slate when it was still owned by Microsoft. He edited The New Republic twice: from 2006 to 2010, and again beginning in 2012, after it was purchased by millennial billionaire and Facebook co-founder Chris Hughes. The year Foer first joined TNR, only college students could have Facebook accounts, the iPhone hadn’t yet been released, and the Internet still represented an opportunity for democratization, where a small website could attract a self-selecting group of readers simply by producing well-written articles about interesting things. Today, there are two billion people on Facebook, which is also where most people get their news. Media organizations have adjusted accordingly, prioritizing stories that will travel widely online. Foer resigned from TNR shortly after Hughes announced he wanted to run the magazine like a startup. He uses the contentious end of his tenure there to make the case that news organizations desperate for traffic have given in too easily to the demands of big tech, selling out their readership in the pursuit of clicks and advertising dollars. The end result of this kind of corruption is sitting in the White House right now. Trump, the subject of thousands of sensationalist headlines notable primarily for their clickability, “began as Cecil the Lion, and then ended up president of the United States.” Foer, of course, is writing about this topic from a position of relative privilege. He rose in his field before journalists relied on Twitter to promote their work. His career-making job was at a publication that has, more than once, made headlines for fostering an environment of racism and misogyny and a system of exclusion that may have eased his own way to the top.
In late 2017, news about the sexual misconduct of his influential friend, TNR culture editor Leon Wieseltier, spread widely and quickly on Twitter and Facebook. Perhaps aware even as he was writing that he was not in a position to launch an unbiased critique, Foer chooses to aim his polemic at the people who run large online platforms and not at the platforms themselves. Foer doesn’t want Facebook to stop existing, but he does want greater government regulation and better antitrust legislation. He wants a Data Protection Authority, like the Consumer Financial Protection Bureau, to manage big tech’s sale of our personal data. He wants increased awareness on the part of the public of the monopolies Facebook, Apple, Amazon, and Google represent. He wants everyone to start reading novels again. And he wants news organizations to implement paywalls to protect their integrity, rather than depending on traffic for revenue. While I agree that reading fiction is one of the only ways any of us are going to survive this era with our minds intact, implementing subscription fees to save journalism sounds like suggesting everyone go back to horse-drawn carriages to end climate change. Foer dismisses the dictum “Information Wants to be Free” as “a bit of nineties pabulum,” but he’s wrong: if we put up blocks against information online in the form of paywalls, it’s going to find a way around them like a river around a poorly built dam. We’re not going to go back to the way things were before, and if anything, the Internet’s information economy is going to carve out a wider and wider swath in our brains. Subscriptions work for The New Yorker and The New York Times in part because they come with built-in audiences old enough to remember when paying for information was the best way to get it.
People might pay monthly fees for subscriptions to Stitch Fix and Netflix, but this model won't sustain itself in a world full of readers who expect good writing not to cost anything. Foer also has a higher opinion of human willpower in the face of massively well-funded efforts to dismantle, divert, and repurpose it than I do. I don’t know if the founders of Google and the big social media platforms always knew it would be possible to turn their user bases into billions of individual nodes ready to transmit information—via tweets, texts, posts, and status updates—at the expense of all their free time, but they do now. Our phones and our brains exist in a symbiotic relationship that is only going to intensify as time passes. As Foer himself notes, “We’ve all become a bit cyborg.” The more addicted we are, the longer we spend online, the more data we give big technology companies to sell, the less incentive they have to change. We aren’t in a position to go offline, because online is where our families, our friends, and our jobs are. Tech companies have the lobbying power, financial means, and captive audiences necessary to ensure that the stimulus-reward loops they offer never have to stop. Media organizations that take advantage of these weaknesses will grow, while those that put up paywalls, adding friction to the user experience, will wither and die. As a writer at Slate and an editor at The New Republic, Foer was a member of the generation that helped put in place the framework of a media industry whose flaws he’s railing against. He’s unlikely to be the person who fixes it. And just as Foer can’t solve the problems inherent in an industry he helped build, big tech companies aren’t going to remedy the problems they’ve brought about. Not because they don’t want to (though why would they?) but because for the most part, the people running these companies can’t see the full picture of what they’ve done.
In an interview with Axios’s Mike Allen, Facebook COO Sheryl Sandberg demonstrated little remorse over the role Facebook played in facilitating Russia’s interference in the 2016 presidential election via fake campaign ads. “A lot of what we allow on Facebook is people expressing themselves,” Sandberg said. “When you allow free expression, you allow free expression, and that means you allow people to say things that you don't like and that go against your core beliefs. And it's not just content—it's ads. Because when you're thinking about political speech, ads are really important.” In the universe where Sandberg lives, our problems—which include a president poised to start a war for the sake of his ego—are her problems only insofar as they damage her company’s ability to accept money from whomever it wants. In late 2017, Twitter, Facebook, and Google were all called upon to testify before the Senate Intelligence Committee. Some members of Congress want a bill requiring big tech companies to disclose the funding source for political ads. Facebook and Twitter announced new internal policies regulating transparency. But how well these regulations will be enforced is unknown, and, frankly, it’s difficult to imagine a world in which insanely well-capitalized companies steeped in the libertarian ethos of Silicon Valley let rules get in the way of “innovation.” One of the best chapters in World Without Mind involves the coming of what Foer calls the Big One, “the inevitable mega-hack that will rumble society to its core.” Foer writes that the Big One will have the potential to bring down our financial infrastructure, deleting fortunes and 401Ks in the blink of an eye and causing the kind of damage to our material infrastructure that could lead to death. Big tech can see the Big One coming, and is bracing for it, taking lessons from the example set by the banks during the economic collapse of 2008.
They’re lawyering up and harnessing resources to make sure they’ll make it through. We, the users whose fortunes will have been lost, whose data will have been mishandled and who will have potentially suffered grave bodily harm as the result of this mega-hack, won’t fare so well. This prediction brings to mind another recent book about the current state of technology, Ellen Ullman’s Life in Code: A Personal History of Technology. Ullman also denounces the dismantling of journalism as we know it by social media. “Now,” she writes, “without leaving home, from the comfort of your easy chair, you can divorce yourself from the consensus on what constitutes ‘truth.’” Ullman, like Foer, blames these platforms for the election of President Trump, calling Twitter the perfect agent of disintermediation, “designed so that every utterance could be sent to everyone, going over the heads of anyone in between.” But she diverges from Foer’s pronouncement that unregulated tech companies are going to be the death of intellectual culture as we know it. Describing San Francisco, where she lives, she notes the failure of more and more startups, the financial struggles of LinkedIn before its sale to Microsoft, the mass exodus of investors from Twitter, and Uber’s chronic struggles to reach profitability. Life in Code was written before Snapchat went public, but Ullman predicts correctly that it won’t go very well. “The privileged millennial generation has bet its future on the Internet,” Ullman writes. “I wonder if they know the peril and folly of that bet.” Ullman, a programmer, lived through the first tech collapse. Now, she writes, conditions are ripe for a second fall. “The general public has been left standing on the sidelines, watching the valuations soar into the multibillions of dollars, their appetites whetted: they, too, want to get into the game.
I fear that on the IPOs, the public will rush in to buy, as was true in 2000.” These two dark visions of America’s future—one in which big tech brings about the end of society as we know it, and one in which it crashes under its own weight—both lead to similar outcomes: CEOs take their payouts and head to their secret underground bunkers in the desert while those on the outside get left holding the bag. Both potential endings also point to a precipice that we, as a society, are fast approaching, a sense that the ground is ready to fall out from beneath us at any time. “There has never been an epoch that did not feel itself to be ‘modern,’” Walter Benjamin writes in The Arcades Project, “and did not believe itself to be standing directly before an abyss.” Thanks to climate change, Donald Trump’s perpetual nonsense, the rise of white supremacist hate groups, and the mass shootings and terror attacks that pepper the headlines each day, it’s hard not to feel as if we’re all living at the dawn of an apocalypse. And maybe that’s because we are. But the end that’s coming will not be all-encompassing. “The ‘modern,’” Benjamin also wrote, “is as varied in its meaning as the different aspects of one and the same kaleidoscope.” In his book Homo Deus: A Brief History of Tomorrow, historian Yuval Noah Harari outlines the Dataist hypothesis that human beings are algorithms, elements of a massive global data processing system, the output of which was always destined to be a better, more efficient data processing system. “Human experiences are not sacred and Homo Sapiens isn’t the apex of creation,” Harari writes. “Humans are merely tools.” The endpoint of our current evolutionary trajectory, some researchers hypothesize, could look like a series of non-biological networks capable of communicating, rebuilding, repairing, and reproducing new versions of themselves without us.
Harari points to theories that suggest we’ve always been headed towards that point, that it has always been what was supposed to happen, that we’re only one step in a process longer and more far-reaching than we can possibly imagine. It’s those enterprising entrepreneurs willing to exploit our connectivity-inclined, data-processing inner natures who stand to profit most from humanity’s current evolutionary moment. In a recent feature in New York about Facebook, Mark Zuckerberg’s former ghostwriter Kate Losse, trying to remember Facebook’s “first statement of purpose,” recalls her boss saying often, “I just want to create information flow.” Where Curtis’s PR executives in The Century of the Self exploited our innate selfishness for their own gain, the Zuckerbergs of the world cash in on our uncontrollable impulse to share information. An impulse that, according to Harari, might be leading, even now, to the development of an entity that, in its pursuit of greater networking capability, will absorb human biology and then leave it behind. It sounds, I know, like science fiction. But, 15 years ago, so did Snapchat, Facebook, and the iPhone. Right now, the tide seems to be turning against tech. Last year, New York Times writer Farhad Manjoo wrote a series of articles on the monopoly power of Facebook, Apple, Google, Amazon, and Microsoft. The Guardian published a story about Facebook and Google employees safeguarding themselves against the addictive properties of the platforms they helped create. Cathy O’Neil’s look at the algorithms that shape the Internet, Weapons of Math Destruction, was shortlisted in 2016 for a National Book Award. Following closely on these reports, of course, have come the inevitable accusations of alarmism from technologists and the people who love them. Where that discourse will lead is hard to say. One of the central questions authors like Foer, O’Neil, Ullman, and Manjoo seem to want to raise is this: What will our legacy be?
Will we be known for having set in place the foundations for a tech industry beholden to its users’ well-being? Or will we be a blip, the last people who believed in an Internet capable of bringing about a brave new world, before everything changed? Benjamin is correct that all generations harbor an anxiety that theirs will be the last to grace the Earth before the world ends. But no generation has been so far, and ours probably won’t be either. And so to this question, I would add another: What will rise from whatever we build and then leave behind? Because for better or for worse, something will.
Like a lot of the people I know these days who grew up wanting to be writers, I learned how to write by reading the Internet. I graduated from college with a degree in capital-L Literature, but by the time I was 23, my major sources of inspiration were blogs and online fiction magazines. I found my voice (or at least a voice) by imitating bloggers no one had ever heard of, and publishing on websites few people would ever read. This suited me -- I liked the isolated weirdness of writing into the infinite void of the web. It didn’t matter to me that no one answered back. Sometime between then and now, businesses with real money caught on to writing on the Internet, discovered it could be made scalable, and everything changed. For many of my writer friends, this change was a net positive. Suddenly there was money in writing; in some cases, lots of it. The possibility of a “piece” going viral also meant the possibility of a job for pay at a real publication, or a book deal, or a mention in The New York Times, or any number of other writerly blessings. At the same time, wanting to make a career in letters and not being on Twitter and Facebook -- that is, not wanting to share your work constantly with the strangers you met on airplanes and in restaurants and people you hadn’t seen since seventh grade -- became the equivalent of not actually wanting to be a writer at all. For extroverts and writers with surplus self-assurance, this didn't pose a problem. For those of us drawn to writing because it was the one job that wouldn’t require us to talk to people regularly, it was a nightmare. “Aside from issues of life and death, there is no more urgent task for American intellectuals and writers than to think critically about the salience, even the tyranny, of technology in individual and collective life,” wrote Leon Wieseltier last month in his call to arms against the threat technology poses to humanism.
“Among the Disrupted” sparked a heated debate among its readers, or at least, as heated as can reasonably be expected in the letters section of the New York Times Book Review. As one critic pointed out, there are plenty of more urgent issues for all of us to ponder, child poverty and the pay gap among them. What struck me, though, wasn’t the essay’s hyperbole, but the inaccuracy of its target. The problem facing writers now isn’t the growing prominence of tech, but the question of how deeply the two fields are intertwined, and what that relationship means. Most of the notable works of fiction published in recent years (Jennifer Egan’s A Visit From the Goon Squad and Emily St. John Mandel’s Station Eleven spring immediately to mind) present thoughtful considerations of the evolving relationship between the self and technology, and a lot of the people who read these books probably read them on the Kindle app. The same people who subscribe to Harper’s read work published by the Atavist and curated by Longform. I’m writing right now in the notepad app on my iPhone, and you’re reading on your computer or your iPad or your own phone. How many writers do you know at this point who don’t have their own websites, Tumblrs, or blogs? For that matter, how many do you know who aren’t on Twitter? Still, it’s true that while the Internet augmented journalism, creating jobs for newcomers and inventing new forms for them to occupy, it didn’t do the same for fiction. Atavist Books confirmed last fall that it would no longer be taking new submissions, and Byliner was folded into Vook last September. Amazon’s e-books are popular, but more innovative forms of digital literature don’t seem to be. Meanwhile, more and more often, walking into a library near the office building where I work in Los Angeles feels like walking into a museum. Untouched contemporary novels line the shelves, positions unchanged. The few people in the reading room are, to a person, on laptops or on their phones.
“Only connect,” E.M. Forster wrote in Howards End. We’ve taken his directive and run with it, to the point where we talk about disconnecting -- going off Twitter, Facebook, Instagram, Venmo, whatever -- in the same wistful tones we once used to talk about going on vacation. Social media, like literary fiction, allows us, briefly, to inhabit a consciousness outside our own: on Facebook, we experience the highs and lows of strangers in real time through the same lens of authenticity that marks work by writers like Ben Lerner and Sheila Heti. If these platforms give every one of us both an audience and a show on demand, at any given moment, and for free, then what is there left for fiction to do? Curious about how all of this was affecting those most likely to suffer from its consequences, I reached out to three novelists in my Twitter timeline to find out what they thought about the role of social media -- specifically Twitter -- in their own writing lives. I expected all three of them to share my own anxieties -- my view of the world has more in common with Wieseltier’s than I am, perhaps, ready to accept -- and I was surprised when not one of them did. Roxane Gay contributes reviews and cultural criticism to high-profile outlets regularly, edits multiple literary journals, and works as a professor at Purdue. She tweets prolifically to her 54,000-and-counting followers, and engages with many of them directly. Last year, her essay collection, Bad Feminist, and her novel, An Untamed State, both dropped at the same time, to widespread acclaim. “Many of my essays begin as conversations on Twitter, where I am thinking through something that’s happening in our culture,” Gay wrote me, adding that she uses Twitter “to be alone while feeling less lonely.” “Ninety percent of the time,” Gay wrote, “I'm having delightful conversations and interactions on Twitter.
There's little to dislike.” Amelia Gray, author of Threats and the forthcoming Gutshot, responded that Twitter “has been more beneficial to my creativity than most theory,” adding, “It’s a little corner of the world that is a pure ball-pit.” She went on, “My tweets and my fiction ideally share the same precision. They share a sense of character too, though rarely the same character. I use different social media towards different ends, and in that way I have different Internet personas and also a real persona which is different here and there.” Porochista Khakpour is an essayist and fiction writer and the author of, most recently, The Last Illusion. Last year one of her tweets caught the attention of Slate, landing her a mention in The New York Times Book Review. She often writes, more or less jokingly (I think?), about being enslaved to Twitter, but when I wrote to ask her about the relationship between the site and her work, she wrote back, “I think I am beneficial and destructive to my own creativity and ability to be productive. I am the problem!” She described her relationship to Twitter as “Love/hate, but tbh mostly love. And we all know love hurts, love bleeds, love kills, love is a battlefield, etc.” A blogger friend of mine once called social media a loneliness eliminator, and I have to admit that definition makes me uneasy. What about those of us who like being lonely? What if loneliness is, in some ways, necessary? It’s true, though, that soon enough, this conversation will likely be irrelevant. Twitter and Facebook will dissolve into the digital slurry, only to be replaced by some other technology as incomprehensible to us as texting once was to our grandparents. Consider Vine, currently churning out new waves of celebrities that no one over 27 has ever heard of. Maybe six-second videos will become the new Twitter, which is the new Facebook, which was the new books.
Or maybe, by this time next year, we’ll be having this same conversation about yet another new technology we can never remember how we lived without. Meanwhile, as these writers make clear, fiction writers will continue to adapt, and books, while they may still be hanging on by a thread, will hang on nonetheless. Something I’m starting to suspect is that it’s everything I worry will make books obsolete -- their slowness, the investment of time they require, and their inability to serve any purpose other than the one for which they’ve been created -- that has allowed them to survive for this long. Social media lends itself best to chronicling discrete instances: bursts of anger, flashes of surprise. Built on the notion, even on a micro scale, of constant disruption, our feeds and streams can’t cohere for long enough to bleed seamlessly into one another the way actual sentences do. Twitter and Facebook are great for quick blasts of dopamine or adrenaline, but not for creating sustained waves of happiness or fear or maintaining the kind of cumulative tension upon which good stories rely. “Only connect!” Forster wrote, but he also wrote, “Live in fragments no longer.” Human experience is like a Georges Seurat in that it only comes into focus the further away you get from it. Facebook, Twitter, and Instagram regurgitate the present back to us in easily manageable pieces that delight, spark envy, disdain, boredom, revulsion, or inspiration. What they can’t do, however, is take a wider scope. It’s novels we depend on to reorganize those scattered fragments into something whole.
When she was seventeen, Deb Olin Unferth trailed her boyfriend into Central America to join a revolution. Any revolution. They ran out of money, and so they came home. “I was eighteen. That’s the whole story,” Unferth tells us. This is not the whole story, and we know this because we are only two pages into a 224-page book. The revolution winds up serving as the backdrop for a different kind of war, the one between the author and George, the philosophy major she fell for as a freshman at a “large state school in a large state.” George is mysterious, magnetic, incapable of genuine human interaction. Unferth convinces herself that he is a genius and affixes herself to him with a love that borders on veneration. It’s this infatuation, and not concern for the fate of Latin America’s children, that compels her to leave school to take a volunteer job at an orphanage caught in the middle of warring Salvadoran military forces. After Unferth gets them banished from the orphanage because she won’t wear a bra, they hitchhike through a series of ravaged countries, running out of money and falling out of love and not wanting to admit to either one. With Revolution: The Year I Fell in Love and Went to Join the War, Unferth accomplishes what a lot of writers wish they could, completing a book that establishes her as a literary light and also serves as a settling of scores after a relationship that veered madly off course. The whole score-settling aspect of things is more complicated than I’m giving it credit for here, but the fact remains that there are a lot of reasons to write a memoir, and one of them is to give yourself the chance to live history again, this time with all the events that matter solidly under your control.
Revisiting her early twenties gives Unferth the agency she didn’t have back then, and it’s this tension between the Unferth of now—two well-received works of fiction, a professorship at Wesleyan University, a star on the Believer walk of fame—and the Unferth of 1987—knobbly, insecure, the type of person who will follow her boyfriend into war-torn territory because it doesn’t occur to her not to—that becomes the axis around which Revolution spins. This conceit is risky, and it wouldn’t work at all if Unferth weren’t so likeable. But she is, so it does, so much so that by the end of the book, the reader replaces her as the stricken lover, willing to follow her anywhere she wants us to go. Unferth and George lead us through Guatemala, Panama, El Salvador, and Nicaragua, interviewing priests, politicians, and civilians and recording the interviews on tape. They never listen to the tapes, and later, when she begins to understand the level of carnage at work at the time, Unferth gets a “sick feeling of knowledge.” A parade of massacres was happening before their eyes, and, she confesses to us, she had no clue. As she recalls her earnest, bewildered march across Central America, she takes jabs at herself and the other expatriates on extended vacations in countries they have no business in. In Managua, she meets a troop of traveling Canadian jugglers traipsing their way through the country. “Imagine. We were walking across their war, juggling. We were bringing guitars, plays adapted from Gogol, elephants wearing tasseled hats….The Nicaraguans wanted land, literacy, a decent doctor. We wanted a nice sing-along and a ballet.” Moments like this one save Revolution from the narcissism that could so easily disable this particular species of memoir. The current version of Unferth, the one narrating the story, reads about her younger self along with us, judging her pratfalls and mistakes before we can.
We’re free to watch as Unferth wrestles her early twenties into novelistic elements: exposition, climax, epiphany, logical conclusion. Of course, reality isn’t shaped like that, and this is something the author knows, and seems bent on making sure we know, too. She circles her narrative back in on itself, calling into question her own account of the truth. What she’s going for here isn’t accuracy so much as faith. Alongside her remembered version of the story, she offers her mother’s comments and notes from her journal, always allowing for the idea that things didn’t happen the way she’s telling us now, that maybe some of it never happened at all. This reordering of history results in a formidable deconstruction of the genre, a subtle but effective response to state-of-the-form manifestos like Ander Monson’s Vanishing Point: Not a Memoir and David Shields’ Reality Hunger. The writing here is visceral. We feel it when a horde of tiny flesh-eating bugs takes up residence under the author’s skin, or when the sun threatens to smother her while she sleeps in a metal-roofed hostel. Her description of the unforgiving landscape mirrors her evolution as a writer, as she finds herself playing out a fate she didn’t plan. She doesn’t, she discovers too late, particularly want to be part of the revolution, but she trucks gamely along, long after she’s admitted to herself that she can’t explain why she’s there. Later, she does the same thing as a fiction writer, teaching composition at too many universities while the rejection slips pile up and her parents wonder what they did wrong. Of course, it all works out in the end, but here, Unferth shares with us that point in her life when nothing is working out, when she has no inkling that it ever will. Her candor is what keeps us with her as she returns, many years later, to Central America by herself, trying and failing to recapture something she can’t name.
“Why would this trip mean so much that I’d have to keep going back to find it?” she asks. What compels us to rewrite the past? To turn life into something meaningful instead of absurd? Near the end of the book, Unferth attempts to find George, whom she hasn’t spoken to in decades. For the sake of the story, of course, she hires a private detective to search him out, and what she discovers first disappoints and then delights her. While her trips back to El Salvador and Nicaragua fail to bring back the past the way she wants them to, in George she discovers that the thready, magnificent days of her youth are still alive. George’s fate offers her the same revelation we experience as we make our way through her careful memories: it is possible to trammel our histories. The past is right there, waiting for someone to come along and put it into words.