That yowl of pain you heard coming from the nation’s capital on Monday afternoon was the death cry of the print newspaper business as it was cut to the heart by the buyout of the Washington Post by Amazon CEO Jeff Bezos. Just forty years ago, during the Watergate scandal, the Post was an economic and cultural force potent enough to help take down a sitting president, and now it has itself been taken down by a guy who less than twenty years ago was working out of his garage selling books on the Internet.
The surprise $250 million deal has coup de grâce written all over it, and in newsrooms across the country, reporters and editors – those who still have jobs – will be grousing over the rich symbolism of one of the crown jewels of American journalism being snapped up by a mogul made rich by the very technology that killed the print business model. But once the grumbling dies down, those of us who care about journalism and culture may have to concede two obvious points. First, we damn well better hope Bezos succeeds where others have failed in figuring out how to produce professional-quality content in the digital age, not just for the sake of the Washington Post or even of the news business, but for the sake of cultural and artistic production in general. Second, of all the billionaires with the means to buy a major cultural institution like the Washington Post, Jeff Bezos might just be the one who can reinvent it for the 21st century.
The details of the Post deal are curious – and telling. For one thing, the newspaper’s parent company, which owns a diverse group of media and educational businesses including the Kaplan test prep company, is selling only its newspapers and holding onto its other businesses. For another, Bezos is buying the Post on his own, and won’t merge it into Amazon, the online retailing behemoth he still runs. Finally, Bezos has said he doesn’t intend to lay off employees and will keep Katharine Weymouth, granddaughter of legendary Post publisher and Washington powerbroker Katharine Graham, on as the newspaper’s publisher.
In other words, the Washington Post Company sees its signature property as a money-loser, and Bezos appears to be stepping in as a white knight to save a cultural institution from falling into disrepair. The history of these sorts of deals is mixed at best. After all, just three years ago the Post Company sold once-mighty Newsweek for a dollar to yet another billionaire, 92-year-old Sidney Harman, who brought in former New Yorker editor Tina Brown with a plan to merge the aging print weekly with the website The Daily Beast. Harman, however, promptly died, and with the magazine hemorrhaging readers, Brown just last week severed ties between the website and the print magazine, selling Newsweek to IBT Media, the start-up behind the International Business Times.
As its purchase price suggests, the Washington Post is in far better shape than Newsweek was, but the paper’s core business of covering inside-the-Beltway news is under threat from websites like Politico. More importantly, the paper faces the same problem all legacy news organizations face: how to scale back its news operation to a level that is economically sustainable in a post-print era without doing fatal damage to the news gathering itself. Doing that, though, requires a nuanced understanding of why the old business model failed in the first place.
There are two versions of the story of why print newspapers bit the dust, one a tech-geek fantasy, the other a more prosaic business tale. In the tech-geek fantasy version, spread by the likes of digivangelist Clay Shirky in his 2008 book Here Comes Everybody, newspapers were beaten at their own game by bloggers and regular citizens armed with iPhones and laptops who were able to deliver news faster and more cheaply than the old print warhorses. Ironically, Shirky and others who advance this theory are laboring under the same misconception that has plagued news executives for the last twenty years – namely, the assumption that the principal business of a newspaper is gathering news. Newspapers don’t sell news. Rather, they give news away for free in order to maintain a distribution system for business information, most of which takes the form of paid ads. Newspapers remained as lucrative as they were for as long as they did because until the introduction of the web browser in 1994, nothing else offered cheap access to the millions of ordinary consumers who picked up the paper that landed on their front curb every morning.
Understanding this distinction helps explain why television hurt but did not kill newspapers. Television long ago entered more homes than newspapers ever did, and in many ways TV, which includes moving pictures and sound, is a better delivery device for news. But again, news isn’t the product for sale; advertising is, and by its nature, television can only effectively sell broad conceptual ideas that can be communicated visually in thirty seconds. You can use television to convince millions of Americans to shop at Safeway, but you can’t very well use TV to tell Americans about everything that’s on sale that week at their neighborhood Safeway. And if you are trying to find a roommate or selling some old furniture, you can’t afford the thousands of dollars it would cost to run even a fifteen-second spot on a local station. For those kinds of tasks, you called up your local paper and bought a classified ad – until, that is, Craigslist and eBay came along and let people post those ads essentially for free.
To repeat: newspapers aren’t dying because they’re getting beat on news reporting. Newspapers are dying because the Internet separated the news content from the advertising revenue stream. For generations news executives thought they were selling news, while in fact they were selling a pipeline to consumers that companies and individuals paid to use. Now, the Internet itself is that pipeline, and we’re watching a wild scramble to see who will control it and the rafts of dollars flowing down its many tributaries. So far, tech giants like Apple, Facebook, Google, and, yes, Amazon, are winning that battle hands down.
This same battle, meanwhile, has been playing out across all forms of cultural and artistic expression. Twenty years ago, if a rock band wanted to find a wide audience for its music, it signed with a record label, which then recorded the band’s music and distributed it to stores across the country. Now, thanks to the ease of digital distribution, young musicians can bypass record labels and post their songs online for free. But if they want to make any real money from their work, they will almost certainly have to turn to Apple’s iTunes site and its direct pipeline to America’s ears.
A similar story has played out in the movie business, which only a few years ago could depend on DVD rentals to make up revenue lost at the box office. Now, thanks to streaming services like Netflix, DVD rental fees have dried up and movie studios are madly turning out special-effects-laden comic-book serials in the hopes of winning over American teenagers and Chinese moviegoers, the last groups still consistently willing to pay to watch a movie in a theater (as long as it’s loud, violent, and not overly dependent on the subtleties of spoken English).
Books have been somewhat insulated from these disruptions because, so far, most readers still prefer physical books over e-books, but the terms of the battle are the same. Publishing firms, which have for generations paid writers to produce and editors to curate books, are fighting tech giants Apple and Amazon, which view books primarily as loss leaders they can use to attract customers to their e-readers. So long as most readers continue to prefer printed books, publishing will limp along in its wounded state, rather like the news business after television but before the Internet. But there is a tipping point at which e-readers, and the recommendation engines controlled by the tech giants, could take over the curating role now played by publishing houses, thereby killing the publishing industry as we know it.
All of which brings us back to Jeff Bezos and the Washington Post. Over the last twenty years, much of the money and power once held by content producers – newspapers, record labels, movie studios, publishing houses, etc. – has transferred to the tech giants that now control the digital pipelines to consumers. This means that it’s much easier for any individual artist or journalist to reach an audience, which is a great and good thing, but it also means that the tech giants controlling the pipelines are taking ever-increasing shares of the profits.
For the past decade or so, we have been enjoying a strange hangover period of the pre-digital age. A generation of journalists and artists trained in the dead-tree era, who have few other marketable skills, have continued producing art and journalism even though they are getting paid far less for their work than they used to. But every year more of these content producers are retiring or moving on, and we are entering a new period dominated by the first truly digital generation of bloggers and artists who are faced with the task of rebuilding the culture industries out of the ashes of the tech explosion.
I and many others have argued that, so far at least, this generation has relied too heavily on memes and information derived from the legacy content producers. In journalism, this has meant hordes of bloggers feasting on an ever-shrinking supply of reported news from print-based news organizations. In film, this has meant kajillions of kids with camera phones riffing on existing story worlds, like Star Wars and Harry Potter, and uploading the results onto YouTube. As Jaron Lanier, a digital pioneer recently turned Internet skeptic, puts it in his 2010 book You Are Not a Gadget, we are a culture in danger of “effectively eating its own seed stock.”
Obviously, this cannot go on forever, but thus far the most powerful technological disrupters have shown little interest in investing in the content carried along their digital pipelines. Apple, with its market-making iPhone and iPad devices, sparked a creative revolution in the world of apps, but when it comes to cultural content like books, movies, and news, all the tech companies have done is make it cheaper and easier to get what you want, cutting deeply into the profit margins of the content producers in the process.
Amazon, which now controls a quarter of the book business, has of course played a huge role in this devaluing of cultural content, but in recent years Amazon has also quietly begun investing in content of its own. Since 2009, Amazon has launched imprints focusing on romance (Montlake Romance), thrillers (Thomas & Mercer), and sci-fi (47North), and now even general adult titles (New Harvest) and literary fiction (Little A). Compared with Amazon itself, these ventures are tiny, and they have run into trouble with rival booksellers like Barnes & Noble, which have refused to stock their titles. But whether these imprints succeed or fail, they demonstrate that Bezos has begun to wrap his mind around what it would mean if his company squeezed so much value out of the book business that publishing became in effect one long amateur hour.
So, is the Washington Post purchase a step in the same direction, an effort on Bezos’s part to invest directly in the content that fuels his billion-dollar pipelines? The short answer is nobody knows. By all accounts the deal came together quickly, and it may well be that Bezos himself is unsure just what he wants to do with the Post. For a man worth $25.2 billion, as Bezos is, a $250 million newspaper truly can qualify as an impulse buy. Perhaps this is simply the billionaire’s answer to collecting old-fashioned typewriters. Let’s hope that’s not the case because whatever you may think of Bezos and others who broke the pre-Internet business model, the fact is it’s broken – and who better to fix it than the man who helped break it in the first place?
Bezos, who has never worked in the news business, may be less attached to the dying print model than most print-news lifers and thus more willing to embrace digital-only innovations. As a man who has made his living tapping the powers of the interwebs, he may be better able to see that strict paywalls, which limit linking and bring in few dollars, are a dead end for most news organizations. As a CEO whose company recently bought the reader hub Goodreads, he may be more open to recasting the newspaper as a community gathering spot, a sort of localized wiki combining conversation, community news, and event listings with ad revenues supporting a small, professional news staff. Most important, as a manager who has excelled at the long game, spending years investing in infrastructure for Amazon rather than diverting profits to shareholders, Bezos might be more willing to lose some money while figuring out how to marry news quality with profitability.
Or maybe not. Maybe the guy just wants a $250 million toy. But let’s hope not, because if that’s the case we stand to lose a lot more than a grand old newspaper that once helped take down a president.
Today, April 30, marks the twentieth anniversary of my last day in the newsroom of a daily newspaper. In truth, my newspaper career was neither long nor particularly illustrious. For about four years in my early twenties I worked at two small newspapers: the Mill Valley Record, the decades-old weekly newspaper in my hometown that died a few years after I left; and the Aspen Daily News, which, miraculously, remains in business today. Still, I loved the newspaper business. I have never worked with better people than I did in that crazy little newsroom in Aspen, and I probably never will. I quit because it dawned on me that, while I was a good reporter, I had neither the skills nor the intestinal fortitude to follow in the footsteps of my heroes, investigative reporters like Bob Woodward and David Halberstam. What I couldn’t know the day I left the Daily News and began the long trek that led first to graduate school and then to college teaching was the sheer destructive power of the bullet I was dodging.
The Pew Research Center’s “State of the News Media 2012” report offers a sobering portrait of what has happened to print journalism in the twenty years since I left. After a small bump during the Clinton Boom of the 1990s, advertising revenue for America’s newspapers has fallen off a cliff in the past decade, dropping by more than half from a peak of $48.7 billion in 2000 to $23.9 billion in 2011. Thus far at least, online advertising isn’t saving the business as some hoped it might. Online advertising for newspapers was up $207 million between 2010 and 2011, but in that same period, print advertising was down $2.1 billion, meaning print losses outpaced online gains by roughly ten to one.
But as troubling as the death of print journalism may be for our collective civic and political lives, it may have an even more lasting impact on our literary culture. For more than a century, newspaper jobs provided vital early paychecks, and even more vital training grounds, for generations of American writers as different as Walt Whitman, Ernest Hemingway, Joyce Maynard, Hunter S. Thompson, and Tony Earley. Just as importantly, reporting jobs taught nonfiction writers from Rachel Carson to Michael Pollan how to ferret out hidden information and present it to readers in a compelling narrative.
Now, though, the infrastructure that helped finance all those literary apprenticeships is fast slipping away. The vacuum left behind by dying print publications has been largely filled by blogs, a few of them, like the Huffington Post and the Daily Beast, connected to huge corporations, many others written by bathrobe-clad auteurs like yours truly. This is great for readers who need only fire up their laptop – or increasingly, their tablet or smartphone – and have instant access to nearly all the information produced in the known world, for free.
But the system’s very efficiency is also its Achilles’ heel. When I worked in newspapers, a good part of my paycheck came from sales of classified ads. That’s all gone now, thanks to Craigslist and eBay. We also were a delivery system for circulars from grocery stores and real estate firms advertising their best deals. Buh-bye. Display ads still exist online, but advertisers are increasingly directing their ad dollars to Google and Facebook, which do a much better job of matching ads to their users’ needs. Add to this the longer-term trend of locally owned grocery stores, restaurants, and clothing shops being replaced by national chains, which draw more business from nationwide TV ad campaigns, and the economic model that supported independent reporting for more than a hundred years has vanished.
Without a way to make a living from their work, most bloggers are hobbyists, and most hobbyists come at their hobby with an angle. So, you have realtor blogs that tout local real estate and inveigh against property taxes. Or you have historical preservation blogs that rail against any new construction. Or you have plain old cranks of the kind who used to hog the open discussion time at the beginning of local city council meetings, but now direct their rants, along with pictures, smartphone videos, and links to other cranks in other cities, onto the Internet. What you don’t have is a lot of guys like I used to be, who couldn’t care less about the outcome of the events they covered but were paid a living wage to present them accurately to readers.
The debate over the downsides of the Internet tends to focus on the consumer end, arguing, as Nicholas Carr does in his bestseller, The Shallows, that the Internet is making us dumber. That may or may not be true – I have my doubts – but as we near the close of the second decade of the Internet Era, we may be facing a far greater problem on the producer end: the atrophying of a central skill set necessary to great literature, that of taking off the bathrobe and going out to meet the people you are writing about. I mean to cast no generational aspersions toward the web-savvy writers coming up behind me, but having done both, I can tell you that blogging is nothing like reporting. Just about any fact you can find, or argument you can make, is available online, and with a few clicks of the mouse, anyone can sound like an expert on virtually any subject. And, because so far the blogosphere is, for the great majority of bloggers, quite nearly a pay-free zone, most bloggers are so busy earning a living at their real jobs that they would have no time for old-fashioned shoe-leather reporting even if they had the skill set.
But in the main, today’s younger bloggers don’t have those skills, because shoe-leather reporting isn’t all that useful in the Internet age. Reporting is slow. It’s analog. You call people up and talk to them for half an hour. Or you arrange a time to meet and talk for an hour and a half. It can take all day to report a simple human-interest story. To win eyeballs online, you have to be quick and you have to be linked. Read Gawker some time. Or Jezebel. Or even a site like Talking Points Memo. There’s some original reporting there, but more common are riffs on news stories or memes created by somebody else, often as not from television or the so-called “dead-tree media.” When there is an original piece online, often it comes from an author flacking for another, paying gig – a book, a business venture, a weight-loss program, a political career.
Clay Shirky, the NYU media studies professor and author of Here Comes Everybody, has suggested that the crumbling of economic support for traditional print media and the original reporting it engendered is a temporary stage in the healthy process of creative destruction that goes along with the advent of any game-changing technology. “The old stuff gets broken faster than the new stuff is put in its place,” Shirky is quoted as saying in the Pew Research Center’s “State of the News Media 2010” report.
Maybe Shirky is right and online news sites will discover an economic model to replace the classified pages and grocery-store ads, but as virtual reality pioneer Jaron Lanier points out in You Are Not a Gadget, we’ve been waiting a long time for the destruction to start getting creative. Lanier, who is more interested in music than writing, argues that for all the digi-vangelism about the waves of creativity that would follow the advent of musical file-sharing, what has happened so far is that music has gotten stuck in a self-reinforcing loop of sampling and imitation that has led to cultural stasis. “Where is the new music?” he asks. “Everything is retro, retro, retro.”
I have frequently gone through a conversational sequence along the following lines: someone in his early twenties tells me I don’t know what I’m talking about, and I challenge him to play me some music that is characteristic of the late 2000s as opposed to the late 1990s – and then to play the track for his friends. So far, my theory has held: even true fans don’t seem to be able to tell whether an indie rock track or a dance mix is from 1998 or 2008.
I am certainly not the go-to guy on contemporary music, but, like Lanier, I fear we are creating a generation of riff artists, who see their job not as creating wholly new original projects but as commenting upon cultural artifacts that already exist. Whether you’re talking about rappers endlessly “sampling” the musical hooks of their forebears, or bloggers snarking about the YouTube video of Miami Heat star Shaquille O’Neal holding his nose on the bench after one of his teammates farted during the first quarter of a game against the Chicago Bulls, you are seeing a culture, as Lanier puts it, “effectively eating its own seed stock.”
Thus far this cultural Möbius strip hasn’t affected books to the same degree that it has the news media and music because, well, authors of printed books still get paid for having original ideas. (If you wonder why cyber evangelists like Clay Shirky keep writing books and magazine articles printed on dead trees, there’s your answer. Writing a book is a paid gig. Blogging is effectively a charitable donation to the cultural conversation, made in the hope that one’s donation will pay off in some other sphere, like, say, getting a book contract.) The recent U.S. government suit against Apple and book publishers over alleged price-fixing in the e-book market – a suit that, if successful, would allow Amazon to keep deeply discounting books to drive Kindle sales – suggests that authors can’t necessarily count on making a living from writing books forever. But even if, by some miracle, books continue to hold their economic value as they move into the digital realm, the people who write them will still need a way to make a living – and just as importantly, to learn how to observe and describe the world beyond their laptop screens – in the decade or so it takes a writer to arrive at a mature and original vision.
Try to imagine what would have become of Hemingway, that shell-shocked World War I vet, if he hadn’t had his early training at the Kansas City Star and, after the war, the job as a foreign correspondent for the Toronto Star that allowed him to move to Paris and raise a family. The same goes for a writer as radically different as Hunter S. Thompson, who was saved from a life of dissipation by an early job as a sportswriter for a local paper, which led to newspaper gigs in New York and Puerto Rico. All of his best books began as paid reporting assignments, and his genius, short-lived as it was, was to be able to report objectively on the madness going on inside his drug-addled head.
In 2012, we live in a bit of a false economy in that novelists and nonfiction writers in their thirties and forties are still just old enough to have begun their careers before content began to migrate online. Thus, we can thank magazines for training and paying John Jeremiah Sullivan, whose book of essays, Pulphead, consists largely of pieces written on assignment for GQ and Harper’s. We should also be thankful for Gourmet magazine, which, until it went under in 2009, sent novelist Ann Patchett on lavish, all-expenses-paid trips around the world, including one to Italy, where she did the research on opera singers that fueled her bestselling novel, Bel Canto. In a quirkier, but no less important way, we can thank glossy magazines for The Corrections by Jonathan Franzen, who supported himself by writing for Harper’s, The New Yorker, and Details during his long, dark night of the literary soul in the late 1990s before his breakout novel was published.
Those venues – most of them, anyway – still exist, but they are the top of the publishing heap, and the smaller, entry-level publications of the kind I worked for twenty years ago are either dying or going online. Increasingly, my decision to leave journalism to enter an MFA program twenty years ago seems less a personal life choice than an act guided by very subtle, yet very powerful economic incentives. As paying gigs for apprentice writers continue to dwindle, apprentice writers are making the obvious economic choice and entering grad school, which, whatever its merits as a writing training program, at least has the benefit of possibly leading to a real, paying job – as a teacher of creative writing, which, as you may have noticed, is what most working literary writers do for a living these days.
Perhaps that is what people are really saying when they talk about the “MFA aesthetic,” that insular, navel-gazing style that has more to do with a response to previous works of fiction than to the world most non-writers live in. Perhaps the problem isn’t with MFA programs at all, but with the fact that, for most graduates of MFA programs, it’s the only training in writing they have. They haven’t done what any rookie reporter at any local newspaper has done, which is to observe a scene – a city council meeting, a high school football game, a small-plane crash – and then write about it on the front page of a paper that everybody involved in that scene will read the next day. They haven’t had to sift through a complex, shifting set of facts – was that plane crash a result of equipment malfunction or pilot error? – and not only get the story right, but make it compelling to readers, all under deadline as the editor and a row of surly press guys stand around waiting to fill that last hole on page one. They haven’t, in short, had to write, quickly, under pressure, for an audience, with their livelihood on the line.
It is, of course, pointless to rage against the Internet Era. For one thing, it is already here, and for another, the Web is, on balance, a pretty darn good thing. I love email and online news. I use Wikipedia every day. But we need to listen to what the Jaron Laniers of the world are saying, which is that we can choose what the Web of the future will look like. The Internet is not like the weather. It isn’t something that just happens to us. The Internet is merely a very powerful tool, one we can shape to our collective will, and the first step along that path is deciding what we value and being willing to pay for it.
In his book Here Comes Everybody, Clay Shirky explains why personal blogs and social networking sites can sometimes confound us. He argues that before the internet, it was easy to tell what was a broadcast and what was a private message. A television show was a broadcast — a message meant for a large audience of people, a public message. A telephone call, on the other hand, was a private message, meant for one other person. On the internet, though, the difference between the two kinds of media is much smaller. Is a personal blog a public or a private communication? Is it meant for mass consumption by thousands or millions of people? Not typically, and yet it can be read, theoretically, by billions.
This blurring of the two types of media is so difficult to grasp that it’s produced its own near-ubiquitous straw man argument, which blogger Jason Kottke calls “the breakfast question.” It comes up whenever anyone writes about social media: “Why would I care what you ate for breakfast that morning?” Shirky’s rebuttal to this is succinct:
“It’s simple. They’re not talking to you. We misread these seemingly inane posts because we’re so unused to seeing written material in public that isn’t intended for us. The people posting messages to one another in small groups are doing a different kind of communicating than people posting messages for hundreds or thousands of people to read.”
I’ve been thinking about this particular idea a lot lately as it applies to Tumblr. For those who are unfamiliar with Tumblr, it’s a blogging platform that categorizes posts into one form or another — text, photo, chat, audio, video. It allows you to put out small bursts of content, which then goes into a feed. People can follow you, just as they can on Twitter, and they can “like” your posts and re-blog them. Tumblr offers a combination of Twitter’s viral capabilities with a more customizable experience that allows for a tremendous level of personal expression.
I’m something of a Tumblr addict. It is the first thing I check in the morning – before my email, before my Facebook page, but after I have some coffee (some addictions are more powerful than others). What I love about it is the social interaction. I follow a large number of personal blogs that post funnier, more creative versions of “Here’s what I had for breakfast.” (I was following a blog that was, literally, about what people ate for breakfast, but I dropped it. I guess they weren’t talking to me.) I also follow a bunch of themed blogs – The New Yorker Tumblr, for instance. They don’t interact much with me, and that’s fine. They’re kind of like highly focused magazines, and I enjoy them accordingly.
But if that’s all Tumblr was, I don’t think it would be quite so important to me. It’s the community that makes it special. Checking my Tumblr feed is like checking in with my friends, even if these “friends” are people I know very little about and will possibly never meet in real life. I met most of these people through friends of friends or via the social discovery that re-blogging affords. I somehow stumbled into their worlds, and they were interesting enough to make me want to come back. I interact with enough of them that I can pretty clearly say that when they post something, it is intended for me. I’m part of their small group, and I have no qualms about that.
Lisa, on the other hand, is a different matter. Lisa is a college student at a large university in the Midwest (and Lisa is not her name; I don’t know whether she would want a bunch of book nerds suddenly reading her posts or not, so I’m not going to link to her blog here, either). She seems pretty smart, and she blogs about her love life, her schoolwork, her friends, and all of the other things that matter to her. I find Lisa’s life very interesting, and her blog is great. But I haven’t completely settled the “is she talking to me” question. While Lisa follows me back, we don’t interact with each other. Though she uses Tumblr in a very social way, she isn’t really part of the crowd of people whom I otherwise follow. And I find this somewhat troubling.
At this point, I need to lay a few things on the table. First, I don’t have a lot of close friends. My wife has several friends with whom she speaks on a regular basis. They talk about the things that are happening in their lives and how they feel about them. I don’t have that. I’m a social person, and there are certainly people I love to have dinner with, meet at a party, etc., but ever since college that kind of close friendship has eluded me. And I think I’m okay with that, for the most part. But you could certainly argue that I use Tumblr to fill some void in my life, as pathetic as that might sound.
Also, Lisa is very attractive. And Tumblr has a way of encouraging people’s vanity. On Wednesdays, for example, there’s a tradition of posting a photo of yourself; this is known as Gratuitous Picture of Yourself Wednesday (GPOYW). This has the effect of sexualizing a lot of Tumblr blogs, to the point that my wife, Edan, hated it for months and months after I joined because she felt like every woman on it focused so much of her attention on her sexuality. I think she’s probably right, though that was largely about who I was following (I used to run with a bad crowd, man). So let me just clear this up for you: I’m not following Lisa because she’s hot or because I’m a perv. Let’s be honest, if I wanted to look at 20-year-old girls, there are other places to do it; this is the internet we’re talking about. Also, Edan, now on Tumblr, follows Lisa, too. We talk about her posts with each other. “She needs to dump that guy; he’s bad news. He won’t even hold her hand!” Edan will say. “He’s a college kid. What do you expect?” I’ll reply.
While I can’t deny that gender plays a role here, that’s not all there is to it. I like following her because, for whatever reason, her narrative is compelling. Following her blog is somewhat akin to watching a reality TV show (not one of the ones where they try to out-dance each other or diet for money, but one that just follows someone’s daily life). She’s my Jersey Shore.
But of course, Lisa isn’t a reality TV character, she’s a real person. Yes, I know Snooki is real, too, but celebrities are different. The fact that Lisa could walk the streets of every city in the world with complete anonymity makes her situation fundamentally different from, well, The Situation’s. There are different laws governing pictures of celebrities and real people. Celebrities belong to us — the public — in ways that private citizens do not. And treating real people, regular people, the same way we treat celebrities, is problematic. And let’s not forget that Snooki and her ilk are paid to be in the public eye and to put up with all that entails.
A few weeks ago, I went to a performance exhibition by my friend, the artist Charlie White. It was called Casting Call, and according to its website it was meant to further explore “White’s ongoing interest in the complexities of the American teen as cultural icon, image, and national idea.” For the exhibition, an art gallery was converted into two rooms, each separated from the other by a pane of glass. On one side of the glass was a casting call for teen girls exemplifying “the All American California girl” — blonde hair, tan skin, etc. — between the ages of 13 and 16. White and his crew interviewed the models, took a mug shot-style photograph of them, and then brought in the next girl. On the other side of the glass, an audience — mostly art students and hipsters — watched. Our friend Stephanie, White’s partner, pointed out that everyone on our side of the glass was brunette (except, it must be pointed out, Edan) while all of the models were, of course, blonde. White and his crew discussed each girl, both amongst themselves and with the girl, as well, but we could hear none of it. We were left to interpret the scene for ourselves. “Oh, look, they’re letting that girl look at the photo. They must really like her,” I said. “Yeah, either that or they could tell she was upset, and wanted to reassure her she did a good job.”
A seemingly never-ending stream of girls came through the door. What fascinated me most about the entire exhibition is how quickly we could objectify the girls. I don’t mean objectify them in the way that it’s commonly used — to turn them into sex objects — though there was certainly a tinge of the erotic about the event; by objectify, I mean to make them into something not quite human, and in turn, to talk about them as though they were things rather than people. “She’s too old.” “I like that one, in the leopard-print shorts. She’s my favorite.” “Look at how weird her hair is. Why does she look like that?” It was how we talk about people when they’re on television, but these people were merely a few feet away. The pane of glass, together with the contrast between the brightly lit casting room and the dim audience space, provided enough distance to effectively dehumanize these girls. There were other factors at work, such as the blonde California girl’s status as marketing conceit and sexual totem, but I think a big reason we all felt free to dissect and dismiss these girls is that they couldn’t really see us. We were, more or less, anonymous. It was especially unsettling to turn around after watching for a few minutes and see one of the girls who had been in the call standing just behind us. How long had she been there, the girl in the leopard print shorts? And how did she suddenly become so real?
The internet is such a tricky place now that anonymity actually needs to be explained and defined. There are actually a couple of flavors of anonymity on the web, and each of them comes with different issues. The first kind of anonymity is the one most of us are familiar with online, the anonymous user or commenter. These users are indistinguishable from one another, and they can occasionally make some useful contributions. Anonymity can allow people to be more playful than they would be normally, maybe a little bit sexier, a little bit funnier. But they can also just be thugs. This type of anonymous user crops up on nearly every blog post, and while they occasionally voice a particularly controversial opinion, they are usually there only to spew bile and throw insults at the author of the post. In the comments of this site, I once joked that “anonymous” is always such a badass (To which Max replied, “I’d like a t-shirt that says ‘Anonymous: Internet Badass.’”). There’s a reason why some sites disable anonymous commenting of this kind; having no identity carries no threat of consequences. Even if others ridicule your ideas and effectively send you back to your cave with your tail between your legs, nobody knows who “you” are, so you can return the next day to fight again.
There’s a second, more nuanced type of anonymity that is possibly more prevalent than simple anonymous commenting, and that’s the disguise of the pseudonym. Every message board has its trolls, those who enjoy causing trouble, dissenting from the norm, and generally putting others down. I’ve yet to encounter a community online that doesn’t have at least one of these people. They are rarely truly anonymous, since most message boards, social sites, and other internet communities typically require a user name. Instead, these users hide behind a moniker — sometimes employing the same user name on multiple sites. Having some sort of identity does create some consequences. Users can be banned from sites, ostracized, or otherwise punished for their behavior.
Often, though, this type of user can simply change his name. This is another form of what Jaron Lanier, in his book You Are Not a Gadget, calls “transient anonymity”:
People who can spontaneously invent a pseudonym in order to post a comment on a blog or on YouTube are often remarkably mean. Buyers and sellers on eBay are a little more civil, despite occasional disappointments, such as encounters with flakiness and fraud. Based on those data, you could conclude that it isn’t exactly anonymity, but transient anonymity, coupled with a lack of consequences, that brings out online idiocy.
On Tumblr, most people interact via their blogs, which necessarily have a name attached to them. This ensures that people will be generally civil. It is also an opt-in system, where you have to choose whom to follow, which I think adds to the welcoming feel of the platform. It takes a while to build up a following and to create a blog you can be proud of; why throw that all away by being a creep or a jerk? The value of the blogs themselves creates an added buffer against what Lanier calls “drive-by anonymity.”
But there’s another element of Tumblr that I’ve seen cause some very disturbing encounters. Each Tumblr blog can enable a feature that allows others to ask its author a question; it can also be used as a de facto messaging system. The user can then decide whether to post an answer to the question or delete it. The trouble starts when the user enables anonymous questions. Some people choose to leave anonymous questions enabled because they can lead to some very interesting content. For instance, if the user wrote a brave post about a disease they had, someone might leave an anonymous note about that, not wanting to reveal that they too have the disease. A more shallow but still amusing use is the frequent comment “I have a crush on you” or “I think you’re beautiful,” etc.
For every one such comment, there are dozens of vile, offensive comments, meant to do little other than demean the author of the blog and make them feel worse about themselves and their lives. For instance, I follow a woman who posts lots of photos of art, gorgeous film stills, great music, and, yes, sometimes pictures of herself. One day she put up the poster for the film The Girlfriend Experience, about a prostitute who spends the night with her clients, going to dinner or a movie before having sex for money. A day or two later, an anonymous person sent this message to her: “You look like you could give a pretty good ‘girlfriend experience.’ How about it? Ever given any thought to doing something like that?” My response to this post was, simply put, rage. I posted a response along the lines of “The rest of us are trying to have a civilization over here. Take that elsewhere.” I was enraged that this person had used this feature of the blog to suggest that the blogger would make a good prostitute. Keep in mind that the author of this blog didn’t have to make this public. I assume she did so (without comment) to shame the jerk who asked the question. But it’s worth noting that there was no guarantee of attention from anyone beyond this one particular blogger. He did this solely to mess with, belittle, and intimidate the author of the blog. And he did so with impunity.
He wasn’t alone. Every day, without fail, another person I follow posts a comment or question that an anonymous user asked them. These questions range from the classically juvenile (“I’m masturbating to you right now.” “Take ur shirt off!”) to more pointed personal assaults (“What’s it like coping with your obvious addiction to sleeping pills?” “You post a lot of photos of yourself because your looks are the only thing you have going for you.” “You’re an obnoxious bitch who probably has no friends.”). Not coincidentally, every one of these questions showed up on a blog written by a woman. So far, three bloggers that I follow have had to abandon their old online identities when creepy people began harassing them online. All of them were women.
Why are women treated differently than men online? I suppose the greater question is why they are still treated differently everywhere — online or otherwise — but since this post is about the web, I will focus on that. Surely there’s the garden variety sexism that permeates most of our culture, where women’s opinions are discounted or denigrated, and where the female form is used to sell everything from liquor to football. But I think there is something else at work online, and in many ways, it’s related to the strange feeling of watching all of those girls wait to have their pictures taken, as well as my conflicted feelings about enjoying college girl Lisa’s blog so much.
In her groundbreaking work “Visual Pleasure and Narrative Cinema,” film theorist Laura Mulvey posits that Hollywood cinema always casts the audience in the role of the masculine spectator. The camera, therefore, becomes the male gaze, and the women on screen the passive objects of its gaze:
“In a world ordered by sexual imbalance, pleasure in looking has been split between active/male and passive/female. The determining male gaze projects its phantasy on to the female form which is styled accordingly. In their traditional exhibitionist role women are simultaneously looked at and displayed, with their appearance coded for strong visual and erotic impact so that they can be said to connote to-be-looked-at-ness. Woman displayed as sexual object is the leit-motif of erotic spectacle: from pin-ups to striptease, from Ziegfeld to Busby Berkeley, she holds the look, plays to and signifies male desire. Mainstream film neatly combined spectacle and narrative.”
She argues that simply looking is a pleasurable experience, and the cinema affords this pleasure by providing an atmosphere in which men are free to look at women, for as long as they please and with clear intent. She says, “At the extreme, it can become fixated into a perversion, producing obsessive voyeurs and Peeping Toms, whose only sexual satisfaction can come from watching, in an active controlling sense, an objectified other.” On the internet, this seems to be compounded. We’re free to look with impunity, and in some cases, we are free to anonymously harass, as well. Of course, it is sometimes pleasurable to be looked at, as well. While the internet indulges both of these impulses — to look at and to be looked at — it seems clear to me that we have once again forced women into the latter role far more often. Despite the great leveling effect that the web has had on the media — it’s given a voice to millions of people who would otherwise largely be silent — we are still creating a system of “sexual imbalance,” in Mulvey’s terms. This is most acute where the female image actually appears — on fashion blogs, personal blogging platforms like Tumblr, and of course pornography — but it is present, more or less, throughout the net. In fact, I’ve often found that what provokes the anonymous assaults, more often than not, is not pictures of women but arguments made by them. This suggests that the harassment is a way of maintaining male dominance; that it may (and often does) come from other women is irrelevant.
The key difference between the films that Mulvey dissects in her essay and the personal blogs I’m talking about is agency. The films were made by men — men called the shots (literally) and wrote the stories that cast women in the passive roles. Obviously a personal blogger decides what to post on her blog. But while this difference is worth noting, it doesn’t seem to matter much in terms of the audience’s reaction. In fact, the blogger’s agency frequently becomes a weapon for the blogger’s critics. “Well, if she doesn’t want to be called a slut, maybe she shouldn’t post such provocative photos.” Doesn’t this sound a bit like the “She was asking for it” argument?
Which brings me back to the problem of Lisa. Feeling as I do about the internet, and the role gender is fast coming to play in it, I feel implicated by her blog (through no fault of her own). Part of this comes from the hazy status of intent. Does she want me to read her blog? Strangely, not long after I began this essay, someone asked her if she was comfortable with so many strangers following her daily life. She responded that she didn’t care; if they wanted to read about her and look at pictures of her, that was fine. This should have absolved me of my guilt, but it didn’t. I keep coming back to Mulvey’s argument: Am I deriving pleasure from looking at Lisa? I am. But I also post photos of myself, thereby enjoying the pleasure of being looked at. Still, no one has ever responded to an image of me with an anonymous note saying, “You look fat” or “Nice beard, asshole.” Only women have to put up with that. And that is shameful. (It’s worth noting that the hot film of the moment, The Social Network, would have us believe that social networking, at its base, is about checking out girls and stalking ex-girlfriends. It’s why the stuff was invented, to let men objectify women from a safe distance.)
And that’s what weighs on me as I follow Lisa’s blog. I’m aware of the voyeuristic aspect of following the blog of a much younger woman, but at the same time, I feel a sort of odd friendship with Lisa. If she weren’t following me back and I were merely reading her posts, as many no doubt do, in total anonymity, I think that would be different. Perhaps following back is all the recognition I need to feel like Lisa is talking to me. And it’s pretty clear from reading my blog who I am: I’m Patrick, I’m in my 30s, I live in LA, and I’m married. On the internet, being yourself is no small thing.
A year ago, I read one of those rare profound utterances that Twitter produces from time to time. It came from comedian Lindsay Katai: “The Internet: Where Ladies Promote Their Boyfriends’ Endeavors. Conversely, the Internet: Where Men Make Every Pretense of Appearing Single.” This rang true to me then, and I’ve thought of it frequently while reading Tumblr, where identities are formed one post at a time over weeks and months. The posts I most look forward to reading are the posts about people’s lives — the petty failures at work, the little strange thing they observed on the bus, a photo of themselves having fun.
I suspect I’m not alone in this. This is the pleasure of online life, it seems to me. It’s the reason, more than any fancy coding or user interface, that Facebook is so successful. We want to know each other, to see what’s happening in other people’s lives. We want, in short, to read each other’s stories. But that kind of world — one that values openness and honesty — can’t exist if half of its participants have to be constantly vigilant lest they be verbally assaulted, harassed, or worse. If we, as a culture, don’t do something to combat this, then we stand to lose more than just updates about meals and photos of pets. Like it or not, we are all going to have to live more and more of our lives online. I would hope that we could make that place better than the one we now call “real life” — a place where people are free to be themselves, yes, but also where they are free to decide what that means for themselves, without fear of humiliation or intimidation. That’s a place I’d like to call home.
(Images 2 & 3: courtesy Charlie White)