1. In 1798, a decade after the ratification of the U.S. Constitution, President John Adams signed the infamous Sedition Act. The controversial law, passed alongside a slate of Alien Acts aimed at cracking down on immigrants deemed dangerous to the state, made it illegal to produce any “false, scandalous and malicious writing or writings against the government of the United States…with intent to defame the said government…or to stir up sedition within the United States.” The brief history of the Sedition Act, which expired in 1801, just before Thomas Jefferson succeeded Adams as president, had its comic moments. One day, an elderly New Jerseyan, Luther Baldwin, stopped to watch President Adams and his wife parade down Newark’s Broad Street accompanied by a 16-gun salute. According to James MacGregor Burns’s judicial history, Packing the Court, someone in the crowd shouted, “There goes the President and they’re firing at his a – !” Baldwin, who had been drinking, retorted that he “did not care if they fired thro’ his a – !” and was promptly clapped into jail. But the Sedition Act was also used to silence press criticism. Scottish-born polemicist James Callender spent nine months in jail and paid a $200 fine for calling President Adams, among other things, a “repulsive pedant, a gross hypocrite and an unprincipled oppressor.” More famously, Benjamin Bache, editor of the virulently anti-Federalist paper the Aurora, was arrested under the Sedition Act after printing stories attacking Adams and accusing George Washington of secretly collaborating with the British during the Revolutionary War. 
I was reminded of the Alien and Sedition Acts in the opening days of the Administration of Donald Trump when, in rapid succession, the president halted immigration from seven Muslim-majority countries and his chief strategist, Stephen Bannon, told The New York Times that the media “should keep its mouth shut and just listen for a while.” Like a lot of people who read Bannon’s interview during that first tumultuous week, when the president was signing one wildly controversial executive order after another and millions of Americans were flooding the streets and airports in protest, I heard only the line about the nation’s media needing to sit down and shut up. When I reread the Times piece some days later, I realized that Bannon wasn’t simply trying to muzzle the American media. He was also delivering a blistering critique of a media culture so lost in its bubble of urbane liberal comfort that it missed what may one day prove to be the story of the century. “The media got it dead wrong, 100 percent dead wrong,” he said of the 2016 election, calling it “a humiliating defeat that they will never wash away, that will always be there.” This, the blown coverage of the 2016 campaign, is the context for his headline-making denunciations. “The media should be embarrassed and humiliated and should keep its mouth shut and just listen for a while,” he told Times reporter Michael Grynbaum, adding: “You’re the opposition party. Not the Democratic Party. You’re the opposition party. The media’s the opposition party.” Now, obviously, mainstream media outlets weren’t the only ones who misread the Trump election. Everybody missed that story, including some members of Trump’s own campaign staff. It is also absurd to suggest that “the media,” en masse, are out to get Trump and his administration. 
There is, after all, a well-financed network of right-leaning news sites, one of which, Breitbart.com, Bannon himself has run, offering full-throated support to Trump’s presidency and even more full-throated condemnation of his enemies. But if you look past the bombast and exaggeration, you can detect in Bannon’s comments the outlines of a chillingly accurate analysis of an American news media crippled by half a century of technological disruption. The national media did miss the white-working-class rage that propelled Trump into office last fall, and even now large swaths of the mainstream press seem perplexed by -- and in some cases, openly opposed to -- the president that populist anger helped elect. Meanwhile, the news sites that saw Trump coming, the Breitbarts of the world, seem dangerously uninterested in facts and instead relentlessly push a hard-right political agenda. This, then, is the predicament facing the American news consumer today. It’s not just that we live in a polarized media universe. It’s that we are, journalistically speaking, flying blind. One segment of the population, the one that just elected a president, is in thrall to a fact-challenged ideological fringe while the rest of the population relies on a badly weakened legacy media whose reporters are highly educated and professionally concerned with facts and evidence, but so deeply ensconced in their elite, urban echo chamber that they’re not always capable of making sense of the facts they find. Thus, as we stand in the still-smoking ruins of the 20th-century American media machine, we risk returning to a media environment not unlike the one before the rise of the mass-circulation print newspaper, when a hyper-partisan press free-for-all pushed an American president to sign a law allowing the government to lock up journalists it didn’t like.

2. I care about news because I read and watch a lot of it and because I rely on it as a voter, but in another way, this is personal for me. 
Thirty years ago, as a 22-year-old straight out of college, I lucked into a job at my hometown weekly, the Mill Valley Record. I had no journalism training, and I hadn’t written a news story since high school. I just showed up one day in the newsroom looking for work and the editor handed me a press release for an upcoming public meeting. “Why don’t you go to this?” he said. “If there’s any news in it, we’ll print it.” Three months later, I had a full-time job covering local politics. Like many young reporters in those days, what little I knew about journalism before I began practicing it myself came from two books, All the President’s Men by Bob Woodward and Carl Bernstein and The Powers That Be by David Halberstam. The Halberstam book is a massive doorstop history of 20th-century journalism while the Woodward and Bernstein book is a tick-tock thriller about a single major news investigation, but both books offer riveting accounts of American print journalism’s finest hour, the Washington Post’s reporting of the Watergate scandal, which ultimately led to the resignation of President Richard Nixon. At the Record, I may have been covering planning and zoning meetings and writing puff pieces about local businesses, but in my mind I was a junior Bob Woodward nailing down that last fact, making that extra phone call, so that one day I would be able to speak truth to power on the front pages of a major metro daily. What I didn’t know -- what no one of that era understood -- was that in a little more than a decade the Internet would strangle the small-town weeklies that had trained generations of cub reporters like me and put the major metro dailies that I aspired to join on life support. Three decades on, I understand that the media landscape that I knew as a small-town reporter in the late 1980s and early 1990s was just one iteration in the ever-shifting continuum of American journalism. 
In the early days of the Republic, the era that brought us John Adams’s Sedition Act, newspapers were a luxury item sold by subscription to a relatively narrow, educated elite. Often, these journals were owned and operated by political parties with the express purpose of advocating for their candidates and embarrassing their rivals. That changed with the advent of the steam-powered press, which so radically reduced the cost and sped up the process of printing a newspaper that editors could slash the cover price from six cents to a penny and market it to a working-class audience. Over the next century, print newspapers grew from a handful of blog-like broadsheets into a complex network of newspapers ranging from tiny, one-man-band local weeklies to national publishing chains run by tycoons like William Randolph Hearst and Joseph Pulitzer. In 1950, before TV began stealing eyeballs and ad dollars, there were 25 percent more newspapers sold each day in America than there were American households. The midcentury American newspaper, written for a local readership and dependent on local advertising dollars, naturally reflected the political outlook of its audience. A newspaper in a segregated Southern town had to toe the segregationist line or go out of business, just as a newspaper in a well-to-do liberal suburb faced disaster if its news columns contradicted the views of its readers. But in both cases, editors had a strong incentive to avoid extreme rhetoric or wildly inaccurate reporting because they depended on local advertisers, who would pull ads from a publication whose besmirched reputation might tarnish their own. The system was far from perfect, but it built the journalistic world I stumbled into the day I took that press release from my editor at the Mill Valley Record. The reporters and editors I worked with were not especially serious people, but they took their jobs seriously. 
When a reader buttonholed one of us on the street to complain about an issue of local import, we asked questions and followed up. We called both sides in any dispute, always. We knew the people we covered well, but we routinely rotated beats so we wouldn’t get too cozy with our sources. We called back to double-check facts, and when we screwed up, we wrote a correction for the next issue of the paper. More than anything, we prided ourselves on being able to cut through the bullshit and explain in clear, direct prose what had happened. Thirty years later, that world is fast vanishing into the digital ether. No footloose 22-year-old without journalism training could expect to fall backward into a full-time newspaper job today, unless, of course, he or she was of the social class that could afford to take a nonpaying internship and follow that up with two years of journalism school. That, more than any nefarious liberal cabal, explains the leftward tilt of what remains of the mainstream media. As local newspapers in smaller cities and towns die off, we’re increasingly left with national publications and TV and cable networks based in liberal urban centers. Meanwhile, digital disruption has changed how reporters are trained, which is changing who enters the profession. A generation before me, news reporting was still a union job only a small step up from the guys who ran the Linotype machines. Today, thanks to the same forces of technological disruption that have hollowed out so many middle-class professions, journalism is the province of a highly educated and urban elite -- precisely the class of person most likely to look askance at a man like Donald Trump.

3. This, I think, is what Stephen Bannon means when he calls the media the opposition party. Bannon sees himself as leading a white working-class revolt against the multicultural liberal elite, which is neatly personified by the latte-sipping chattering classes of Washington, D.C. 
Of course, by declaring war on the media and by prodding his boss to make ever more alarming moves in office, Bannon is himself pushing an already liberal-leaning press corps in an ever shriller, more leftward direction. But that is less frightening than the ease with which he can do it. Without that truth-seeking ecosystem of healthy small- and mid-size daily newspapers to explain national news in terms local readers can understand, Americans are left stewing in separate echo chambers, one urban, educated, and liberal, the other working-class, rural, and spoiling for a fight. Not only do the inhabitants of these echo chambers not talk to each other; they barely speak the same language. It’s heartening to hear that digital subscriptions to legacy media sites like The New York Times and The Atlantic are on the rise, just as it’s refreshing to see ordinary Americans using social media to organize and keep themselves informed. Maybe over time, as we grow more sophisticated about our digital tools, we’ll get better at using Snopes.com-like sites to knock down fake news stories and start crowd-funding citizen-journalists to cover small cities and towns the way I once did. There’s nothing inherently good or bad about the Internet. It’s a tool like any other. We just have to learn how to use it. For now, though, we would be crazy not to acknowledge the danger we face as a nation flying blind without a media we fully trust. No one in government has discussed reviving John Adams’s Sedition Act, but every day that Trump sends his press secretary into the White House briefing room to dress down the media or uses Twitter to gaslight Americans into disbelieving the facts they hear on the nightly news is a day we inch closer to that reality.
1. That yowl of pain you heard coming from the nation’s capital on Monday afternoon was the death cry of the print newspaper business as it was cut to the heart by the buyout of the Washington Post by Amazon CEO Jeff Bezos. Just forty years ago, during the Watergate scandal, the Post was an economic and cultural force potent enough to help take down a sitting president, and now it has itself been taken down by a guy who less than twenty years ago was working out of his garage selling books on the Internet. The surprise $250 million deal has coup de grâce written all over it, and in newsrooms across the country, reporters and editors – those who still have jobs – will be grousing over the rich symbolism of one of the crown jewels of American journalism being snapped up by a mogul made rich by the very technology that killed the print business model. But once the grumbling dies down, those of us who care about journalism and culture may have to concede two obvious points. First, we damn well better hope Bezos succeeds where others have failed in figuring out how to produce professional-quality content in the digital age, not just for the sake of the Washington Post or even of the news business, but for the sake of cultural and artistic production in general. Second, of all the billionaires with the means to buy a major cultural institution like the Washington Post, Jeff Bezos might just be the one who can reinvent it for the 21st century. The details of the Post deal are curious – and telling. For one thing, the newspaper’s parent company, which owns a diverse group of media and educational businesses including the Kaplan test prep company, is selling only its newspapers and holding onto its other businesses. For another, Bezos is buying the Post on his own, and won’t merge it into Amazon, the online retailing behemoth he still runs. 
Finally, Bezos has said he doesn’t intend to lay off employees and will keep Katharine Weymouth, granddaughter of legendary Post publisher and Washington powerbroker Katharine Graham, on as the newspaper’s publisher. In other words, the Washington Post Company sees its signature property as a money-loser, and Bezos appears to be stepping in as a white knight to save a cultural institution from falling into disrepair. The history of these sorts of deals is mixed at best. After all, just three years ago the Post Company sold once-mighty Newsweek for a dollar to yet another billionaire, 92-year-old Sidney Harman, who brought in former New Yorker editor Tina Brown, with a plan to merge the aging print weekly with the website The Daily Beast. Harman, however, promptly died, and with the magazine hemorrhaging readers, Brown just last week severed ties between the website and the print magazine, selling Newsweek to IBT Media, the start-up publisher of the International Business Times. As its purchase price suggests, the Washington Post is in far better shape than Newsweek was, but the paper’s core business of covering inside-the-Beltway news is under threat from websites like Politico. More importantly, the paper faces the same problem all legacy news organizations face, which is how to scale back its news operation to a level that is economically sustainable in a post-print era without doing fatal damage to the news gathering itself. To do that, though, requires a nuanced understanding of why the old business model failed in the first place.

2. There are two versions of the story of why print newspapers bit the dust, one a tech-geek fantasy, the other a more prosaic business tale. In the tech-geek fantasy version, spread by the likes of digivangelist Clay Shirky in his 2008 book Here Comes Everybody, newspapers were beaten at their own game by bloggers and regular citizens armed with iPhones and laptops who were able to deliver news faster and more cheaply than the old print warhorses. 
Ironically, Shirky and others who advance this theory are laboring under the same misconception that has plagued news executives for the last twenty years – namely, the assumption that the principal business of a newspaper is gathering news. Newspapers don’t sell news. Rather, they give news away for free in order to maintain a distribution system for business information, most of which takes the form of paid ads. Newspapers remained as lucrative as they were for as long as they did because until the introduction of the web browser in 1994, nothing else offered cheap access to the millions of ordinary consumers who picked up the paper that landed on their front curb every morning. Understanding this distinction helps explain why television hurt but did not kill newspapers. Television long ago entered more homes than newspapers ever did, and in many ways TV, which includes moving pictures and sound, is a better delivery device for news. But again, news isn’t the product for sale; advertising is, and by its nature, television can only effectively sell broad conceptual ideas that can be communicated visually in thirty seconds. You can use television to convince millions of Americans to shop at Safeway, but you can’t very well use TV to tell Americans about everything that's on sale that week at their neighborhood Safeway. And if you are trying to find a roommate or selling some old furniture, you can’t afford the thousands of dollars it would cost to run even a fifteen-second spot on a local station. For those kinds of tasks, you called up your local paper and bought a classified ad – until, that is, Craigslist and eBay came along and let people post those ads essentially for free. To repeat: newspapers aren't dying because they're getting beat on news reporting. Newspapers are dying because the Internet separated the news content from the advertising revenue stream. 
For generations news executives thought they were selling news, while in fact they were selling a pipeline to consumers that companies and individuals paid to use. Now, the Internet itself is that pipeline, and we’re watching a wild scramble to see who will control it and the rafts of dollars flowing down its many tributaries. So far, tech giants like Apple, Facebook, Google, and, yes, Amazon, are winning that battle hands down. This same battle, meanwhile, has been playing out across all forms of cultural and artistic expression. Twenty years ago, if a rock band wanted to find a wide audience for its music, it signed with a record label, which then recorded the band’s music and distributed it to stores across the country. Now, thanks to the ease of digital distribution, young musicians can bypass record labels and post their songs online for free. But if they want to make any real money from their work, they will almost certainly have to turn to Apple’s iTunes store and its direct pipeline to America’s ears. A similar story has played out in the movie business, which only a few years ago could depend on DVD rentals to make up revenue lost at the box office. Now, thanks to streaming services like Netflix, DVD rental fees have dried up and movie studios are madly turning out special-effects-laden comic-book serials in the hopes of winning over American teenagers and Chinese moviegoers, the last groups still consistently willing to pay to watch a movie in a theater (as long as it’s loud, violent, and not overly dependent on the subtleties of spoken English). Books have been somewhat insulated from these disruptions because, so far, most readers still prefer physical books over e-books, but the terms of the battle are the same. Publishing firms, which have for generations paid writers to produce and editors to curate books, are fighting tech giants Apple and Amazon, which view books primarily as loss leaders they can use to attract customers to their e-readers. 
So long as most readers continue to prefer printed books, publishing will limp along in its wounded state, rather like the news business after television but before the Internet. But there is a tipping point at which e-readers, and the recommendation engines controlled by the tech giants, could take over the curating role now played by publishing houses, thereby killing the publishing industry as we know it.

3. All of which brings us back to Jeff Bezos and the Washington Post. Over the last twenty years, much of the money and power once held by content producers – newspapers, record labels, movie studios, publishing houses, etc. – has transferred to the tech giants that now control the digital pipelines to consumers. This means that it’s much easier for any individual artist or journalist to reach an audience, which is a great and good thing, but it also means that the tech giants controlling the pipelines are taking ever-increasing shares of profits. For the past decade or so, we have been enjoying a strange hangover period of the pre-digital age. A generation of journalists and artists trained in the dead-tree era, who have few other marketable skills, have continued producing art and journalism even though they are getting paid far less for their work than they used to. But every year more of these content producers are retiring or moving on, and we are entering a new period dominated by the first truly digital generation of bloggers and artists who are faced with the task of rebuilding the culture industries out of the ashes of the tech explosion. I and many others have argued that, so far at least, this generation has relied too heavily on memes and information derived from the legacy content producers. In journalism, this has meant hordes of bloggers feasting on an ever-shrinking supply of reported news from print-based news organizations. 
In film, this has meant kajillions of kids with camera phones riffing on existing story worlds, like Star Wars and Harry Potter, and uploading the results onto YouTube. As Jaron Lanier, a digital pioneer recently turned Internet skeptic, puts it in his 2010 book You Are Not a Gadget, we are a culture in danger of “effectively eating its own seed stock.” Obviously, this cannot go on forever, but thus far the most powerful technological disrupters have shown little interest in investing in the content carried along their digital pipelines. Apple, with its market-making iPhone and iPad devices, sparked a creative revolution in the world of apps, but when it comes to cultural content like books, movies, and news, all the tech companies have done is make it cheaper and easier to get what you want, cutting deeply into the profit margins of the content producers in the process. Amazon, which now controls a quarter of the book business, has of course played a huge role in this devaluing of cultural content, but in recent years Amazon has also quietly begun investing in content of its own. Since 2009, Amazon has launched imprints focusing on romance (Montlake Romance), thrillers (Thomas & Mercer), and sci-fi (47North), and now even general adult titles (New Harvest) and literary fiction (Little A). Compared with Amazon itself, these ventures are tiny, and they have run into trouble with rival booksellers like Barnes & Noble, which have refused to stock their titles. But whether these imprints succeed or fail, they demonstrate that Bezos has begun to wrap his mind around what it would mean if his company squeezed so much value out of the book business that publishing became in effect one long amateur hour. So, is the Washington Post purchase a step in the same direction, an effort on Bezos's part to invest directly in the content that fuels his billion-dollar pipelines? The short answer is nobody knows. 
By all accounts the deal came together quickly, and it may well be that Bezos himself is unsure just what he wants to do with the Post. For a man worth $25.2 billion, as Bezos is, a $250 million newspaper truly can qualify as an impulse buy. Perhaps this is simply the billionaire’s answer to collecting old-fashioned typewriters. Let’s hope that’s not the case because whatever you may think of Bezos and others who broke the pre-Internet business model, the fact is it's broken – and who better to fix it than the man who helped break it in the first place? Bezos, who has never worked in the news business, may be less attached to the dying print model than most print-news lifers and thus more willing to embrace digital-only innovations. As a man who has made his living tapping the powers of the interwebs, he may be better able to see that strict paywalls, which limit linking and bring in few dollars, are a dead end for most news organizations. As a CEO who recently bought the reader hub Goodreads, he may be more open to recasting the newspaper as a community gathering spot, a sort of localized wiki combining conversation, community news, and event listings with ad revenues supporting a small, professional news staff. Most important, as a manager who has excelled at the long game, spending years investing in infrastructure for Amazon rather than diverting profits to shareholders, Bezos might be more willing to lose some money while figuring out how to marry news quality with profitability. Or maybe not. Maybe the guy just wants a $250 million toy. But let’s hope not, because if that’s the case we stand to lose a lot more than a grand old newspaper that once helped take down a president.
1. The modern world swooned last month when the bones found under a parking lot in Leicester, England were confirmed to be those of Richard III. Part of the fuss was due to the wonderfully incongruous image of a 15th-century king entombed under this age’s most banal and omnipresent architectural fixture — “What’s under my parking lot?” we mused breathlessly. And, to be sure, Anglophilia is ascendant in these tremulous days of “Downton” and Hilary Mantel’s fittingly titled historical novel Bring Up the Bodies. But the real business with the skeleton of course had to do with Richard’s status as a hated historical king, the deformed, dissembling, and nephew-smothering arch-bad guy whose downfall mercifully concluded the grisly Wars of the Roses. It’s more or less agreed among people who take the time to review these things that Richard was the target of a posthumous smear campaign orchestrated by the triumphant Tudor dynasty (whose founder Henry VII defeated Richard in 1485). Official Tudor chroniclers during the mid-16th century purposefully blackened Richard’s character, and Shakespeare followed suit by putting a hunchbacked tyrant on stage in the electrically charged history, The Tragedy of King Richard the Third. Rightly or wrongly, Richard went down in history as a regal punching bag, a ruler who embodied the worst of late-medieval chicanery. But now, with even the American media reporting at fever pitch on the miraculous discovery, the contested reburial plans, and digital facial renderings (evidently, Richard III resembled Shrek’s Lord Farquaad), it appears that England’s most reviled king may just get a reappraisal. Richard’s apologists hope that his newfound celebrity will encourage us all to submit our old Shakespearean prejudices to a round of honest fact-checking. 
It’s a simple procedure, in which historical evidence is marshaled to root out the mythology inherent in literary representation, to the point where the literary work in question can be dismissed as little more than myth itself. The technique has its fair uses — we might be grateful for it when dealing with, say, Kipling’s odes to imperialism. But its corollary is nearly always the driving of a wedge between art and history, two disciplines, we are admonished, that had best not consort with one another. Already we can see the process at work with Richard III. Even Harold Bloom, high priest of American Bardolatry, recently averred in Newsweek that “Shakespeare’s ironic, self-delighting, witty hero-villain has a troubling relation to actual history.” This is defensible enough, yet we should remember that there is no corrective for propaganda like a reader’s awareness thereof. If we arm ourselves with this knowledge, Shakespeare’s art can yield deeper, more illuminating insights into Richard’s life and times than we might imagine. However, if we divorce Richard III from the “actual history” we are insensitive to literature and, worse, we weaken our own understanding of monarchy, power, and history itself.

2. In Shakespeare we encounter Richard III as an evil genius, one theatric-evolutionary step beyond the Vice character of medieval morality plays. Richard deceives and beguiles his way to the crown, killing off his rivals and then his friends until he has only ghosts to attend him. There is a clear logical schema for all this hell-raising: the man is a monster, a hideous swamp-world creature, “Deformed, unfinished, sent before my time / Into this breathing world scarce half made up.” His monstrosity served him in wartime, yet now civil conflict has given way to stable Yorkist supremacy under his brother Edward IV, and the bellicose Richard finds himself without suitable vocation. 
The weapons have been hung up on the walls, and the lutes come down for seduction and other trifles of peace. “Since I cannot prove a lover / I am determined to prove a villain,” Richard intones, and that’s that. And yet Shakespeare just doesn’t come sans complication. Richard is too clever a thespian, too candid a soliloquist, and, belatedly, too tormented by conscience to lose all title to sympathy from the audience. He is too conflicted and desperate in his final scenes to bunk amicably with Iago in the prison of Shakespearean opprobrium — Iago who, like a mischievous alien beamed down to Venice exclusively to fuck with Othello, just sort of shuts down after Desdemona is murdered. In fact, Richard’s near-remorse on the eve of the fatal Battle of Bosworth Field provides material for some of the most contorted, dialogic, and self-conscious lines in the Complete Works. After a night of visitations from the ghosts of his victims, a frightened Richard “Starteth up out of a dream”:

What do I fear? Myself? There’s none else by.
Richard loves Richard; that is, I am I.
Is there a murderer here? No. Yes, I am.
Then fly. What, from myself? Great reason why:
Lest I revenge. What, myself upon myself?
…Alas, I rather hate myself
For hateful deeds committed by myself!
I am a villain. Yet I lie, I am not.
Fool, of thyself speak well. Fool, do not flatter.

What happened to the cool calculation and protean slipperiness of the old sly Richard? Whence this fabulously split subject, marooned by his own malfeasance? This Richard improbably presages the dazzling self-doubt of Gerard Manley Hopkins, the disconsolate late-19th-century poet of brooding, rhyme-packed, sprung-rhythm verse. Turns out this most detestable of English kings, killer of little boys, and bane of decent folk everywhere also clutches that thing Hamlet purports to have, “that within which passes show,” a deeper interiority, a nagging conscience, an ego turned on itself. 
And yet Richard, like a good schoolboy intent on completing an assignment, has proved a villain. So why does he quake? The pith of the tale, for modern viewers at least, is that even as Richard proves a villain, he disproves the concept of pure villainy — at the end of the day, a much more consequential disproof than proof. The question at hand is not whether Richard kills people, rides roughshod over morality, or perversely revels in doing so. The problem for us — we distant modern viewers who have seen so many kings come and go in these history plays — is that as soon as Richard affixes the label of villainy to himself the label starts to lose its distinction, to melt into the milieu of political violence.

3. The other part of the “story” in the exhumation of Richard III — the interest over and beyond monarchic infatuation and historical import — also had to do with our abiding lust for proof. This time it came not from Tudor ideology but from the de-politicized precincts of scientific empiricism. And what proof: the identity of the bones of Richard III has been settled in a bravura display of genetic testing, radiocarbon dating, and advanced anthropological sleuthing. Mitochondrial DNA found in a tooth pulled from Richard’s jaw matches mitochondrial DNA of two descendants of his sister, one Michael Ibsen, furniture maker of London, and a second anonymous relation. As the story broke I joined the multitudes of media viewers in satiating my curiosity and awe. I read the articles, gaped at the curved spine, scrutinized the nasty blow to the head. I went on YouTube and found the videos produced by the University of Leicester detailing all of the digging, extracting, and testing done to get to the elusive bones, and then to their precious DNA cargo. Suddenly Richard III became a marvel of scientific certitude, a job-well-done of today’s sophisticated epistemological techniques. Now we know. “What do we know? Know Richard? 
There’s none else by...” So the bones are Richard’s. As he might have said, “I am I.” But the tautology does us as much good as it did the role’s first interpreter, Richard Burbage, when he declared it before an Elizabethan audience. It was as if the moment Richard’s mortal remains hit the Internet, he ceased to be a villain or a politician and became merely a curiosity, stripped of intrigue or depth, a subject of pop-historian enthusiasm. To be sure, it was a banner day for the Richard III Society, the organization which, since 1924, has dedicated itself to promoting a rehabilitated Richard. Members of the group, which as it happens is patronized by the current Duke of Gloucester, raised $250,000 to support the search for Richard’s grave and his genetic testing. It was money well spent. As one organizer told the New York Times, “Now we can rebury him with honor, and we can rebury him as a king.” This defanged, refurbished king was a civic-minded administrator who improved the commons’ channels for airing grievances and lifted bans on printing. His violence was understandable given the times. As for the young princes in the Tower — who really knows what happened. We all want to be remembered well, and that’s a hard proposition when you die on the losing side of a bitter dynastic struggle. The Richard III Society may be correct in asserting that Richard was vilified without substantial proof. There is proof that Richard did enact some nice, forward-thinking policies — proof as solid as the proof that the parking-lot bones once fit together to form Richard’s scoliotic frame. But there’s something meretricious, or perhaps just distracting, about the whole question of proof in this case. Even if Richard wasn’t as bad as we think he is, there seems to be no need to truly pal around with his ghost. To invoke a touchstone of modern political affability, Richard probably wasn’t a guy you’d want to have a beer with — a power-hungry warlord at best.
And it remains likely that much of the evidence that would prove or disprove Richard’s rotten reputation has been lost. That’s no reason to gag Richard’s supporters — let them toast to their gallant White Boar if they want — but the lessons worth drawing from the whole business are too important to be lost in the penumbras of medieval recordkeeping or the weirdness of historical fanclubs. This is true especially when the crux of the lesson can be found in Shakespeare’s plays. By this I mean that, besides celebrating the Tudor ascension, Richard III and the other history plays demonstrate time and again the precarity, anxieties, and senselessness of the whole monarchical enterprise. No one belongs on the throne, not Richard and not Henry Tudor, but each must believe he does. Thus, it’s not that Richard III is so off the mark in representing the spirit of its subject. Rather, the great falsehood, deeply rooted and insidious to this day, is the belief that all the other players in that sordid history were so dissimilar from the scheming Duke of Gloucester. The theatrical Richard’s manic fear of self-harm (“Lest I revenge. What, myself upon myself?”) bespeaks the big idea here: his violence is selfsame with a larger violence, a culture of conflict that will undo him. A fascinating paradox, then, is at work in Richard III, one with far-reaching implications for our notions about the proper relationship between art and history. It so happens that a piece of literature drenched in historical revisionism nonetheless captures Richard’s zeitgeist with impressive moral and political clarity. Our own pop-historical gaze seems downright blurry by contrast. How can this be? It seems the paradox is an effect of repression: thanks to our liberal preference for transparency, information, and empiricism, we refuse to traffic in political myths.
But our romance with regalia suggests their attractiveness; we maintain an unacknowledged appetite for a power system whose very structure rests upon the myth and spectacle of an unverifiable claim to kingship. Shakespeare’s art, on the other hand, luxuriates in this kind of myth. Writing centuries later about Dostoevsky, the critic Mikhail Bakhtin described how the great Russian novelist transformed the monologic “idea” into the living “image of an idea,” the irreducible dialogue that composes a single idea. This is precisely the mechanism of Richard III, a dramatic work that is not interested in dispelling myths, but rather in giving them life, and then subjecting them to the logical pressures of experience. Long before Richard’s bones were exhumed and his identity finally proven, Shakespeare had already unearthed from the pit of Richard’s soul the essence of his character, the self-contradiction of his idea: “Is there a murderer here?” No/Yes. It’s not villainy, it’s politics: monarchic despotism that sanctions horrific violence. It’s the stuff of kingship, and it’s still going on. The embattled Bashar Al-Assad is, after all, a king in trouble too. He might pause to think on Richard.

Image via Wikipedia
Chicago is called “The Windy City” not because of our winds (which are present, but not markedly above average), but because of our citizens’ historical propensity to go on about themselves. The nickname took root during a late-19th-century rivalry with Cincinnati. Both cities had a meatpacking industry and baseball, and this was enough to stir up a war of words. We fought, bafflingly, over rights to the nickname “Porkopolis,” and our dueling baseball teams, the Red Stockings and the White Stockings. The Cincinnati sports writers, tired of our braggadocio, made “windy city” stick. And “The Second City” was not coined by A.J. Liebling in his outwardly snotty book about Chicago’s inferiority to New York. We earned that one in the 19th century as well, when the city burned to the ground and we built an entirely new city — the second city — in its place. When even our monikers are misunderstood, Chicago, demonstrably, has reputation issues. We’re outspoken and resourceful, and everyone thinks we’re just losers getting blown about! Being a proud Chicagoan, then, can feel like a defensive position. Or, more optimistically, like all of our cultural treasures are a secret. Enter The Chicagoan, a new outfit whose mission is to “document the arts, culture, innovators and history of Chicago and the greater Midwest through long-form storytelling.” The original Chicagoan was a weekly magazine, modeled on The New Yorker, published from 1926 to 1935. It was hit or miss, quality-wise, and went unremembered until University of Chicago professor Neil Harris discovered its archive in the library, and then edited the collection (The Chicagoan: A Lost Magazine of the Jazz Age) that brought it all to our attention. Now JC Gabel, editor of the much more recently defunct Stop Smiling magazine, has relaunched the brand with The Chicagoan Issue 1, a 200-page limited-edition glossy number that’s heavy on design.
The Chicagoan as an organization also has digital editions, podcasts, and public events on its agenda, and Issue 1 acts as drum major for this cultural parade. Its greatest success would be to spotlight Chicago’s creatives in a way that excites the hometown crowd, intrigues the visitors, and leaves both eager to see more. Taken as a whole, The Chicagoan, I believe, has succeeded. In an effort to provide a balanced view of our city, the ten Chicago-focused features include seven about cultural innovators and three about crime. One feels a little jolt going back and forth between the two but, in a disheartening way, this may be quite an accurate reflection. The profiles are all proud and glowing (Chicago! We’ve got this great chef, and an amazing architect, and these cool music guys, and really good coffee!). The crime pieces, if it needs to be said, are more nuanced and gritty. Alex Kotlowitz, whose compassionate, participatory brand of journalism has focused on violence in Chicago for over a decade, is reliably worth looking in on. Here he is interviewed, along with filmmaker Steve James, about the “violence interrupters” they recently documented in The Interrupters. Chicago’s beat-cop laureate Martin Preib, author of The Wagon, contributes a piece on the crime and confession that stay with him. I don’t want to sound flippant, but it’s nice to have these complicated, antireductive pieces in among the laudations. They stay with you much longer. The literary supplement includes the reliably great Joe Meno and a think piece on David Foster Wallace (the inclusion of which feels predictable but also might be the law? At least it’s out of their system). The last section, composed of “dispatches from the Midwest,” is small but thoughtful. The real gem of the issue, justifying high hopes for The Chicagoan’s future, is the marquee piece on the history of Siskel & Ebert.
The 47-page oral history combines interviews with their coworkers, bosses, friends, and rivals to tell the story of two talented men who sat at the heart of American film criticism for decades. Hubris, competition, ambition, luck, friendship, cruelty, and tragedy make the history of a syndicated talk show read like Greek drama. If Gabel et al. continue to coax such compelling stories out of our city’s history, then they have nothing to fear save the demise of publishing. Happily, there is great camaraderie in being underrated, and Chicago has responded well to its glossy new champion. Sold only at independent stores throughout the city, and restocked in small numbers, the issue became the coup du jour for hipsters and literati alike. Remember early This American Life, before it started to always be about the economy? That’s what this could be — appreciative of the sincere efforts of interesting people, and generous in presenting them. Take that, Cincinnati.
1.

When I was 17, I registered my first copyright. Unofficially. My three dearest friends and I were ready to share the recordings our rock band had made, but had some reservations about uploading them to the social media sites which were just then being retooled for independent musicians to self-market. Naïve about the system -- and hopelessly naïve about ourselves -- we expected that, without precautions, rogue musicians could present our songs as their own and claim for themselves the glory we were due. It turned out that we owned the copyright to our songs the moment we wrote them, but had to register that copyright in case of dispute. So I did what is called a Poor Man’s Copyright: I self-addressed an envelope, placed one of our CDs inside, mailed it and, when it came back to me, put it away, unopened, for safekeeping. The sealed envelope would prove that we had created what was inside before the postmark date. I still have that envelope stashed away, and under current US law, that copyright will likely endure well into the 22nd century. And yet, it is hard to say what “copyright” will mean by then. Intellectual property is a dynamic concept, legally and culturally, one that is always being reshaped on one hand by changing methods of creation and distribution, and on the other by markets scurrying to catch up. The abstract line between public and private ideas -- the line that intellectual property tries to police -- is the very same line the Internet blurs so well. This January, copyright witnesses a simultaneous push and pull, a drive for greater stricture on one end, and a graceful unknotting on the other. While Congress resumes deliberations on the Stop Online Piracy Act (SOPA) -- the latest legislation meant to address the rise of social media, streaming services, and file sharing -- scholars in the UK and most of Europe are rejoicing at the entry of James Joyce’s corpus into the public domain.
Dubliners and A Portrait of the Artist as a Young Man have been in the American public domain for many years, but due to differing laws, certain editions of Ulysses and the entirety of Finnegans Wake remain protected for nearly another quarter century in the US. Hypothetically, under a SOPA regime, European websites publishing the text of these newly public later masterpieces could find themselves dropped off of American search engines, or starved for funds if they rely on American services like Google’s ad network and PayPal to generate revenue. The copyright status of Joyce’s work has been of particular interest to scholars and fans on both sides of the Atlantic, due mostly to the stubborn and sometimes egregious permissions policies set forth by Joyce’s sole heir and executor of his estate, his grandson, Stephen James Joyce. But this month, the Joyce copyrights enter a new twilight period made all the more striking by a history of differing national laws. In the 1970s, the US went from a publication-based copyright system to a European “biological” one, with a copyright term of the author’s life plus 70 years in most cases. But the law set up certain benchmarks by which older works would be grandfathered in. Works published before 1923 would be in the public domain, while those published in the window between 1923 and 1978 would enjoy up to 95 years of copyright protection from the date of publication. And while the first edition of Ulysses dates back to 1922, the 1934 Random House edition (the first officially published in the US) enters the public domain in 2030; and Finnegans Wake in 2035. It is likely that the American copyright in the Wake will outlive Stephen James, an old man now without an heir of his own. And yet, despite the recent lapse of the copyrights in the EU, it is just as likely that Stephen James will fight to enforce that remaining American copyright to the very last. In 2006, D.T.
Max profiled the heir for The New Yorker. In the brilliant piece, Stephen James revels in his antagonistic execution of the estate, believing he is complying with Joyce’s wishes: “I am not only protecting and preserving the purity of my grandfather’s work but also what remains of the much abused privacy of the Joyce family,” he once said. “Every artist’s born right is to have their work . . . reproduced as they want it to be reproduced.” For Stephen James, great works of literature (even those as dense as the Wake) are meant to be enjoyed singularly, without critiques, analyses, or guides. He has used his rights to the fullest in preserving his grandfather’s integrity and deterring the academy, which for too long, he believes, has piggybacked on Joyce’s genius. He once requested a $1.5 million permissions fee from a scholar hoping to make a multimedia version of Ulysses. When the scholar refused, Stephen James told him, “You should consider a new career as a garbage collector in New York City, because you’ll never quote a Joyce text again.” It is not easy to say what exactly Joyce would have wanted to happen to his work. He was at times an eager self-promoter, asking friends like Samuel Beckett and William Carlos Williams to write essays in support of the unfinished Wake and admitting to filling his books with enough puzzles to keep professors busy well into the 21st century. But he was also, throughout his life, strapped for cash, and spoke out for “Author’s Rights” when American publisher Samuel Roth began circulating unauthorized copies of Ulysses (the book was originally deemed obscene in the US; Joyce could claim no copyright in it, since it had been “stricken from the mails”). Indeed, with the Internet reshaping how we think of intellectual property, we might learn a lot from Joyce’s ambivalence, as his body of work begins its march into the public domain.

2.
In many ways, the attitude of Stephen James Joyce resembles that of the Stop Online Piracy Act: stodgy and clinging to old ways. Introduced last fall, SOPA would have expanded the arsenal of cease-and-desist tactics that the entertainment industry has been deploying ineffectively for the last 15 years, starting with the crackdowns on file-sharers. Copyright holders would have been able to create an embargo against websites allegedly violating their copyrights by compelling payment processors and ad networks to suspend their services, with very little recourse for contesting the accusation. The attorney general, meanwhile, could have required search engines to redirect users away from sites (particularly foreign sites) engaging in “piracy,” a policy that The New York Times, in a recent editorial, likened to China’s policing of the Internet. Congress was set to vote on SOPA before the end of the month, but shelved the legislation last week following an Internet-wide protest that included daylong blackouts of sites like Wikipedia, which opposes any restriction on the flow of information, and statements of opposition from Google and Facebook. Besides, there is little reason to believe that outwitting the SOPA measures would not have become standard behavior for many Internet users, just as torrenting and streaming have. So how is it that we are not finding more secure ways to remunerate artists and thinkers? This is, after all, the most benign reason for reinforcing intellectual property laws. It seems to me that since the veritable explosion of the Internet, and since the economic collapse of 2008, the dialectic movement of culture and economics has been accelerated and re-codified, plunging people on both sides back into an age-old confusion about art and money.
In November, I attended a panel discussion on the “Creative Economy,” a concept elaborated over the last decade by such economists as John Howkins and Richard Florida, who argue that knowledge will become an increasingly important economic resource in the 21st century. At one point, a woman from a major media conglomerate spoke approvingly of the imminent crackdowns and lawsuits SOPA would have brought about. I was so irked when the rest of the panel agreed that, during the Q&A, I posed a troublemaking question: The internet is an incredible force for creativity, allowing people to share work freely, outside time and space. And yet the rhetoric around this exchange is extremely negative, invoking piracy and theft. To what extent, then, is the creative economy just another name for the capitalist economy? That last part, of course, was meant to inflame. But it didn’t. It was evident to one gentleman, a city official, that “creators” should be compensated: “How else are they supposed to make a living?” The Internet is at once one of the greatest products and one of the greatest drivers of a creative economy, though probably not the one that the panelists had in mind. Given the activities that it enables, it promises to drastically change the way culture is produced and consumed. Imitation is the sincerest form of flattery. Imitation is also how we learn, and how we align ourselves with the styles and ideas we find meaningful, and thus build community. Innovation for Joyce lay precisely in imitating the “open-source” texts available to him: his access to the public domain and his ability to play with the ideas and language he found therein allowed him -- at the height of modernism when art was meant to be self-referential -- to wink at the knowing reader with a spoof of Homer, Vico, or Shakespeare, and to change that same reader’s conception of all the great works that came before. 
The difference is, without web access, Joyce had to go to the library or bookseller for his material; and Joyce had to fund the publication of his books. But the Internet also threatens to do away with those terms (production, consumption) and the commodification of culture entirely, and this is where it gets tricky. Proponents of SOPA need only point to the act’s subtitle to defend it: “To promote prosperity, creativity, entrepreneurship, and innovation by combating the theft of U.S. property, and for other purposes.” All the buzzwords of creative economics are there. And yet the concerns of Stephen James Joyce (propriety and privacy) are not.

3.

It feels like the “Creative Economy” encapsulates two opposing economies, creativity and consumerism, and copyright law is the locus where the conflict is most evident. And yet, this conflict is more than one between abstract entities: it’s a practical and emotional dilemma for any writer who has ever tried to make a career of his or her craft. Even Joyce, who borrowed famously, claimed his work as his own, and expected due compensation. The conflict between the openness of the creative process and the restraint of good business acumen is one embodied by Shem the Penman and Shaun the Post, the sons of HCE and ALP in Finnegans Wake. In the “Shem the Penman” chapter, Shaun scolds his brother for squandering all of his good stories in the pub: Comport yourself, your inconsistency! Where is that little alimony nestegg against our predictable rainy day? Is it not the fact […] that, while whistlewhirling your crazy elegies around Templetombmount joynstone, […] you squandered among underlings the overload of your extravagance and made a hottentot of deulpeners crawsick with your crumbs? Am I not right? Yes? Yes? Yes? Holy wax and holifer! Don’t tell me, Leon of the fold, that you are not a loanshark.
According to Shaun, Shem will inevitably go broke because he is too much in love with telling stories: he gives them out for free and has made people almost sick of hearing them. And yet Shaun senses that Shem is not naïve. He may in fact be something of a loan shark, and by giving easily up front, Shem can demand more later, when selling his book to the very people who have long enjoyed his yarns. Joyce was commenting on a very real and persistent phenomenon, one that even my friends and I in high school had some inkling of when we thought it wise to “register” our songs: it is necessary to self-promote up until the point where one is entitled to charge admission to one’s work. The Internet illuminates this threshold between ruthless self-promotion and entitlement, while greatly enabling the former and destabilizing the latter. None of us today will live to see our creative work become public domain, but Joyce lived in a time when his books were not guaranteed legal protection. In a speech he gave to the International PEN Conference in Paris in the summer of 1937, Joyce reflected on the Ulysses piracy debacle and the successful international protest he organized against Samuel Roth:

It is, I believe, possible to reach a judicial conclusion from this judgment to the effect that, while unprotected by the written law of copyright and even if it is banned, a work belongs to its author by virtue of a natural right and that thus the law can protect an author against the mutilation and the publication of his work just as he is protected against the misuse that can be made of his name.

Joyce had a sense of propriety when it came to literature: even if writers could not make their work lucrative, their visions should be respected. It’s true this did not always extend to the texts he spoofed and lifted from. And yet, author’s rights are an idea of which many of us would think favorably.
This should by no means be interpreted as a mandate for his grandson’s obstructionist policies, nor for greater policing of the Internet. Rather, we are going to need a completely new online framework for supporting creators, and to get there, we might have to move beyond a tired notion of “copyright” and towards “author’s rights.” Image Credit: Flickr/Horia Varlan
In less than a fortnight, Orhan Pamuk, Turkey's Nobel Laureate in literature, made headlines in Turkish newspapers not once, but twice. It would have been an ordinary thing a few years ago, when Pamuk, commonly perceived as one of Turkey's major political dissidents, would make news with his comments on the killings of Armenians in 1915 or the Turkish state's heavy-handed treatment of its Kurdish minority. But this time newspapers seem to have discovered a new aspect of Turkey's most famous writer: his private life. When Pamuk, who has a daughter from his first marriage, which ended a decade ago, started dating Indian novelist Kiran Desai in 2010, photographs of the couple walking on a Goa beach in India were published by a mainstream newspaper edited by one of Pamuk's old political enemies. Pamuk and Desai were quickly dubbed a power couple, one journalist calling them Mr. Nobel and Miss Booker. But after two books (Museum of Innocence and The Naive and the Sentimental Novelist, both containing Pamuk's words of gratitude to Desai for helping him with the final English texts) and numerous interviews accompanying the Turkish edition of Desai's Booker prize-winning Inheritance of Loss (all of them focusing on details of their relationship rather than Desai's novel), the Turkish media seemed to have lost interest. That was until this December, when a young Turkish artist was photographed alongside Pamuk in New York's Columbus Circle mall. The following week, newspapers were covered with pictures of her paintings, and a full-page interview in the daily Sabah, whose American version first published the photographs, had the very Flaubertian headline: "I am Füsun from Museum of Innocence!" This was a reference to Pamuk's latest novel, in which the protagonist, engaged to be married, begins an affair with a younger girl, whom journalists were now eager to identify as having been inspired by Pamuk's new girlfriend.
Among readers of the interview were Pamuk's loyal fans, who hoped to learn bits of information about his new novel, which will reportedly be published in Turkish this year. It tells the story of a street vendor who sells "boza," a traditional Turkish beverage, and there was speculation as to whether the cover of the book would be produced by Pamuk's new girlfriend, who has painted portraits of boza sellers in the past. The latest piece of news, the most surprising to date, was published on the last day of the year. It alleged that Pamuk had an "illegitimate son" with a German professor specializing in Turkish literature. Pamuk, it is claimed, has never seen his son, who is now five years old. These dramatic claims were made by "an old girlfriend of Pamuk," whose name was carefully left out of the piece. Turkish newspapers made life very difficult for Pamuk in 2005, when he was turned into a hate figure by the ultra-nationalist Ergenekon gang, which is claimed to include, alongside retired generals, solicitors, and politicians, a number of journalists who orchestrated campaigns against Turkey's dissident figures, labeling them as traitors and enemies of the country. During the 1990s, right-wing newspapers were notorious for their portrayal of Kurdish and socialist intellectuals: many artists, like the singer Ahmet Kaya, were forced to leave the country after editors made a habit of picking on them. Last year a Kurdish MP was forced to resign after photographs showing him with a girlfriend were published in the papers. With their newfound "private" methods, editors seem to have inflicted a deep wound as they turned the famously reserved Orhan Pamuk, whose political views continue to disturb the ultra-nationalists, into a playboy figure in just a few weeks. It looks like an attempt by editors to exact revenge by hitting him below the belt.
For Pamuk’s loyal readers, all this surely reads like one of Pamuk's own novels, which always feature him as a character. But the serious point to be made here is that the Turkish media’s attempts to trivialize dissidents by focusing on their private lives have a touch of the News of the World scandal about them, and this new tactic will probably be a new cause of concern for Turkey’s dissidents this year.
If your Twitter or Facebook feed includes anyone who cares about American poetry, you've probably seen a link or 11 to Rita Dove's recent letter to the editor in The New York Review of Books (and Helen Vendler’s painfully terse reply). If not, here’s a quick rundown: The November 24 issue of the NYRB included Vendler's review of The Penguin Anthology of Twentieth-Century American Poetry, edited by Dove. The anthologist responded with a letter calling Vendler to task, in particular, for explicit and implicit dismissals of poetry by black Americans. Vendler replied, in full, “I have written the review and I stand by it.” To understand what Dove objected to, you needn’t read any further than the opening paragraphs of Vendler’s review:

Twentieth-century American poetry has been one of the glories of modern literature. The most significant names and texts are known worldwide: T.S. Eliot, Robert Frost, William Carlos Williams, Wallace Stevens, Marianne Moore, Hart Crane, Robert Lowell, John Berryman, Elizabeth Bishop (and some would include Ezra Pound). Rita Dove, a recent poet laureate (1993–1995), has decided, in her new anthology of poetry of the past century, to shift the balance, introducing more black poets and giving them significant amounts of space, in some cases more space than is given to better-known authors. These writers are included in some cases for their representative themes rather than their style. Dove is at pains to include angry outbursts as well as artistically ambitious meditations. Multicultural inclusiveness prevails: some 175 poets are represented. No century in the evolution of poetry in English ever had 175 poets worth reading, so why are we being asked to sample so many poets of little or no lasting value? Anthologists may now be extending a too general welcome. Selectivity has been condemned as “elitism,” and a hundred flowers are invited to bloom.
People who wouldn’t be able to take on the long-term commitment of a novel find a longed-for release in writing a poem. And it seems rude to denigrate the heartfelt lines of people moved to verse. It is popular to say (and it is in part true) that in literary matters tastes differ, and that every critic can be wrong. But there is a certain objectivity bestowed by the mere passage of time, and its sifting of wheat from chaff: Which of Dove’s 175 poets will have staying power, and which will seep back into the archives of sociology?

Notably, Vendler’s list of America’s foremost 20th-century poets is entirely white -- a fact that becomes especially significant when set against her subsequent suggestion that this legacy of greatness is being crowded out in part by “introducing more black poets.” Up to a point, it's worth going easy on Vendler. Like Dove, she had a job to do -- the same job, really: make a case for what was worth reading in 20th-century American poetry. Dove made hers, and the NYRB asked Vendler to evaluate it. And after those two paragraphs Vendler’s argument mostly shifts away from issues of race and into critiques that, accurate or not, have more to do with Vendler’s dislike of what she calls “accessibility”; her defensiveness about what Dove refers to as the “poetry establishment”; and what Vendler describes as Dove’s “breezy chronological introduction, with its uneasy mix of potted history (in a nod to ‘context’) and peculiar judgments.” While any of these could be stand-ins for racial prejudice, I don’t believe they are. Instead, they feel like an uncomfortable mix of, on the one hand, Vendler’s legitimate arguments about selection and interpretation and, on the other, her fear that the poems she loves most won’t matter enough to others. But those first two paragraphs can’t and shouldn’t be ignored.
Dove rightly takes her to task for this, effectively unpacking the implications of, for example, dismissing minority writers as being of merely “sociological” interest; suggesting that such writers tend to be valued for their “representative themes,” whereas the major white writers Vendler lists are supposedly notable for their “style”; and asserting that they write poems because they “wouldn’t be able to take on the long-term commitment of a novel.” (Vendler might argue that she didn’t mean any of these observations to be specific to minority writers, but she introduces all of them right after complaining that black writers are over-represented, and a critic who’s famous for her attention to detail should know that she’s setting up that reading of her remarks.) Dove also fairly marks the places where the shadow of such remarks can be discerned later on in the review. Ultimately, I think Vendler’s condescending talk about race and writing is driven by her defensiveness about her own tastes (and more about that in a bit), which of course does nothing to excuse it. But given that Dove and others have already effectively unpacked this most glaring aspect of the review -- and given that Vendler’s case seems far from unique -- it’s worth stopping to look at the assumptions that underpin most arguments against inclusiveness in art, including this one. Part of what leads Vendler astray is her belief in a kind of literary value that’s all noun and no verb -- that is, one that wants to define value without making room for the fact that many people do in fact value the very writing that, she says, is not, well... valuable. In the process, she, like many other critics (and not just of poetry), creates an oddly unpeopled universe -- or, at least, one that’s strangely devoid of living people.
Vendler asks us to think of value in terms of a hypothetical and permanent future, one that will have unvarying and therefore conclusive (that is, correct) notions of what was good and bad in our writing. It’s an exasperating argument, since it asks us to defer to the critic’s mystical conjuring of our far-off progeny, a population that will, of course, have the same values as the critic herself. But even if the critic is somehow right about what the academics of the 22nd century will value (and even if the 23rd, 24th and 25th centuries value the same things), it raises the question -- why should it matter? Our current canons are based on what a select group of current readers find useful, pleasurable, interesting, meaningful. Were readers in the 17th century wrong for sometimes finding pleasure in other places? Should they have been more concerned with what a Harvard professor might care about today? With some notable exceptions, taste is not a moral category. Yes, it makes a difference if we eat meat; and it matters, too, if our diets are full of sugar or salt. In different ways, it matters if we embrace art that enforces our prejudices, degrades others, or results from exploitation. The same is true if we choose to read in ways that inspire pettiness or abet us in living timid, unfulfilling, unimaginative lives. But more often than not, none of that is really at stake in these arguments. Just as some people will like poetry and some will like fiction, some sculpture, some movies, some wine -- some many things, some few -- there are countless ways to get to meaning through poems and just as many different experiences of meaning to arrive at. And almost all of them are worthwhile. In fact, we can enlarge ourselves by being more imaginative about value; it’s a way of learning about others that resembles the experience of art itself, an act of curiosity and creativity and engagement. 
Many critics seem to move in the opposite direction, letting in a sense that the appreciation of writing outside of their preferences somehow threatens the value of the poetry they want to champion. If page-counting is a necessary part of reviewing an anthology -- of unpacking its claims -- the treatment of artistic appreciation as a kind of zero-sum equation is not. There's a strange logic here, one that feels a little like the idea that gay marriages would threaten the sanctity of straight marriages (which is not to accuse any critics of homophobia -- just to note the ways in which a lack of imagination about other people's pleasures can turn into an unwarranted prejudice and a strangely militant attitude about the things others do and love). Vendler's hardly alone in this. Harold Bloom has made a name for himself by defending the great tradition, as he imagines it, from the encroachment of all kinds of writing. In a nice bit of synchronicity, Bloom actually moved to the vanguard of the culture wars by releasing his own anthology of sorts -- The Western Canon -- which made headlines for selecting 26 essential authors and defending their pre-eminence against an army of straw-men and -women: feminists, cultural theorists, etc., a group he likes to refer to as “The School of Resentment.” He, too, has passed judgment on Dove’s anthologizing, in his case when he made the selections for a Best of the Best American Poetry that largely discarded the choices of the series’ first 10 editors, including Rita Dove, and instead came up with his own roster of works that “will endure, if only we can maintain a continuity of aesthetic appreciation and cognitive understanding that more or less prevailed from Emerson until the later 1960s, but that survives only in isolated pockets.” It’s likely that some of the defensiveness that critics like Bloom feel comes from their awareness that their own selections may be subject to attack, their awareness that championing an all or mostly 
white or male roster of artists is going to leave them subject to charges of racism and sexism. But there’s a simple way around that: admit that the kind of writing you value is just one kind of potentially valuable writing. Keep in mind that, in trying to maintain the prerogatives and preferences of the establishment (quotation marks deliberately omitted), you’re trying to sustain a series of cultural traditions and institutions that have been hostile to women, blacks, and other minorities on grounds that have nothing to do with merit. Take seriously the ways in which others experience and uncover meaning at the same time you ask others to preserve space for the things you value most. And (hey, why not?) take a little bit of time to consider the possibility that female and non-white writers are already doing important work in that same vein -- and that maybe it doesn’t seem that way to you at first glance in part because you haven’t yet immersed yourself in a slightly different set of cultural experiences and associations. (On that last note, Vendler does eventually get around to praising both Carl Phillips and Yusef Komunyakaa, but it comes so late in her review that it doesn’t provide much counterweight, and her assertion that the “excellent contemporary poetry” of these two writers “needs no special defense” revives her claim that many other black writers are valuable only under the terms of some separate and lower standard.) The importance of this extends beyond racial inclusiveness. One of the most useful things a critic can do -- and one that Vendler herself has done at various points in her career -- is to open us up to new sources of pleasure and insight. 
In denying the value of so much that clearly does provide value for others (including, for me, the brilliant Gwendolyn Brooks, whom Vendler faintly praises for a “pioneering role” before expressing wild outrage at Dove’s claim that Brooks’ first book “confirmed that black women can express themselves in poems as richly innovative as the best male poets of any race”), a critic works against our capacity for imagination. We can, should, and will continue to argue about artistic quality, but we should do so while remembering that poetry can only live in the minds of living readers, and that its value comes out of their encounters with individual poems, which are, thank god, incredibly various (both the poems and the encounters). Too much criticism suggests that we must serve art -- a supposedly timeless art removed from the particulars of people immersed in culture and history. And yet the most enduring value of Shakespeare -- the favorite cudgel of literary culture warriors -- is his ongoing service to individual readers, his ability to bring them joy and inspiration, bring them a more vibrant connection to the language we all speak in our own ways, rich grief, and insight into people living very different lives. Why worry so much about any other writing that provides the same?
1. It is sometimes hard to remember -- in our enlightened Internet era -- that the line between writer and critic was once very sharp, and that there was no love lost between the camps. "There are hardly five critics in America," Herman Melville once wrote, "and several of them are asleep." Not that you can blame the man, considering the drubbing he took at the hands of the critical establishment, but the quote gives a good sense of the bad blood brewing between writer and commentator all the way back in the 1850s. We don't lack for contemporary examples, either; in 1991 Norman Mailer called critic John Simon "a man whose brain is being demented by the bile rising from his bowels," after Simon panned Mailer's novel Harlot's Ghost. But surely it's not all bile and bellowing; there have to be other, more civilized examples of the writer playing nice in the critical sphere. Henry James, for example, had a prolific side gig as a writer of judicious criticism, and his essay "The Art of Fiction" is one of the most well-considered and fair-minded examinations of novelistic purpose you could ever hope to read. But even James, in the middle of his reasonable defense of novelistic art, couldn't help giving a swift kick to an unnamed "writer in the Pall Mall" who opposes “certain tales in which ‘Bostonian nymphs’ appear to have ‘rejected English dukes for psychological reasons’" -- Portrait of a Lady, I presume? It seems that, no matter their composure, writers look to draw a little blood when they enter the critical ring. Maybe it has something to do with accepting blows in silence all those years. Which brings us to the latest example of a writer stepping into the ring to defend his work against a rapacious critic: award-winning author Jonathan Lethem v. award-winning critic James Wood, literary heavyweight bout par excellence. 
The first round of this fight happened recently, when the Los Angeles Review of Books published an essay by Lethem entitled "My Disappointment Critic," in which Lethem discussed his anger at Wood for panning his novel The Fortress of Solitude eight years ago. Lethem is not some cranky author we can write off lightly and go about our business. He is himself a thoughtful critic, and, as if to remind us of this fact, the title of "My Disappointment Critic" (and some of its content) alludes to his book The Disappointment Artist, a series of excellent essays about growing up in Brooklyn, the pleasures and perils of being an autodidact, and Westerns -- among other things. His essay on the way to escape a subway train when you fear being pursued by other passengers is one of the best evocations of frightened childhood and how it shapes (urban) consciousness I have ever read. All this is to say that Lethem is more than familiar with a critic's responsibilities. Even when you're an author/critic with fame hanging heavy on your shoulders -- especially when you're stepping into the ring to defend your own work -- you're held to the sort of standard all criticism is held to: you have to marshal evidence and portray your viewpoint convincingly. One might even argue that a writer/critic dealing with his own work has a higher bar to vault, because if he fails at any of these aims he looks worse than a reviewer writing a poorly-argued review. He looks like a whiner. So what are we to make of Lethem's new essay, in which he steps into the ring to defend his eight-year-old novel The Fortress of Solitude from James Wood, critical heavyweight of the age? Is he merely grousing? Or is he making serious critical claims? Lethem understands our concerns. He wants us to know right away that he knows what he's doing. "Why," Lethem writes, "violate every contract of dignity and decency, why embarrass us and yourself, sulking over an eight-year-old mixed review? 
Conversely, why not, if I’d wished to flog Wood’s shortcomings, pick a review of someone else, make respectable defense of a fallen comrade? The answer is simple: In no other instance could I grasp so completely what Wood was doing." And later: "Was this how Rushdie or DeLillo felt -- not savaged, in fact, but harassed, by a knight only they could tell was armorless?" This is Lethem's stated purpose: instead of taking the opportunity to complain about his own disappointment, Lethem is going to give his own disappointment greater cultural relevance. He is going to use his own experience to show us what James Wood looks like without the armor. He is going to accomplish something far more serious than simple griping: a true critical takedown. 2. The critical takedown is a well-known cultural corrective with a long and glorious history. Renata Adler attempted something similar in her New York Review of Books article on Pauline Kael 31 years ago. James Wood himself performed a similar treatment on Harold Bloom; it's no surprise that Lethem quotes both of these projects above his essay. The fellow critic providing cultural corrective to someone who has gotten too big for his or her britches -- it's practically a public service, if you do it right. In our current literary discourse critics can easily become unimpeachable. Wood commands the lofty heights of The New Yorker's book section whenever he feels like it, and if he's fudging his responsibilities, chances are a lot of people won't notice. It's more or less exactly the argument Adler makes in her takedown of Kael: most critics get sloppy on their soapbox. Their ingrained prejudices take over. So there's a precedent for the fellow critic accomplishing such a takedown, but rarely does the author being criticized make the attempt. Maybe this is because the burden of proof is uncommonly high when personal interest is involved. 
And Lethem's criticisms, for all of their higher purpose, do spring from personal concerns: Wood failed to see what Lethem was getting at in The Fortress of Solitude. "James Wood," he writes, "in 4,200 painstaking words, couldn’t bring himself to mention that my characters found a magic ring that allowed them flight and invisibility. This, the sole distinguishing feature that put the book aside from those you’d otherwise compare it to (Henry Roth, say). The brute component of audacity, whether you felt it sank the book or exalted it or only made it odd." This comment is, at its heart, disingenuous. Is the magic ring really the "sole distinguishing feature" that separates the Fortress of Solitude from Henry Roth? Wood would never make such a simplistic statement, nor would any other critic with a professional reputation to uphold. The act of criticism, in large part, is to figure out what distinguishes books from each other, and such distinctions never come down to one detail, whether it be a magic ring or a madeleine. But let's set this aside for now, and continue to Lethem's critical conclusion about Wood's review. "Perhaps Wood’s agenda edged him into bad faith on the particulars of the pages before him. A critic ostensibly concerned with formal matters, Wood failed to register the formal discontinuity I’d presented him, that of a book which wrenches its own “realism”-- mimeticism is the word I prefer-- into crisis by insisting on uncanny events. The result, it seemed to me, was a review that was erudite, descriptively meticulous, jive. I doubt Wood’s ever glanced back at the piece. But I’d like to think that if he did, he’d be embarrassed." I read Fortress of Solitude several years ago. I remember that magic ring. 
I remember it having the shaky status of a symbol, and that the boys who used it were themselves unsure of whether it represented real invisibility or some sort of wish fulfillment: imagination grounded firmly in realism (or whatever less offensive word Lethem wants to use). I certainly don't remember it ever "wrenching" the book's realism out of whack -- it was one thread in the greater fabric of a mimetic narrative. But let's set that aside too -- maybe Wood was wrong about the magic ring, and its singular symbolism within Fortress of Solitude. What we're really dealing with here is a takedown of Wood, after all, not a defense of Lethem's novel. That's why Lethem proclaims his larger purpose early in the essay. That's why he includes the paragraphs from Adler and from Wood himself, that's why he tells us Wood is "armorless" as a critic. What we're concerned with here is Lethem's critical judgment of Wood as a critic: "The result, it seemed to me, was a review that was erudite, descriptively meticulous, jive." Read that line again, substituting the word "book" for the word "review." Now imagine that this sentence appeared in a book review. I assume your critical alarm bells are ringing. Are we as readers expected to believe Lethem when he says that Wood was "erudite" and "descriptively meticulous" (not to mention "jive") without evidence? Lethem obliges us. He drops a Wood quote at the start of the next paragraph. "Wood complained of the book’s protagonist: “We never see him thinking an abstract thought, or reading a book … or thinking about God and the meaning of life, or growing up in any of the conventional mental ways of the teenage Bildungsroman.” ...My huffy, bruised, two-page letter to Wood detailed the fifteen or twenty most obvious, most unmissable instances of my primary character’s reading: Dr. Seuss, Maurice Sendak, Lewis Carroll, Tolkien, Robert Heinlein, Mad magazine, as well as endless scenes of looking at comic books. 
Never mind the obsessive parsing of LP liner notes, or first-person narration which included moments like: “I read Peter Guralnick and Charlie Gillett and Greg Shaw…” That my novel took as one of its key subjects the seduction, and risk, of reading the lives around you as if they were an epic cartoon or frieze, not something in which you were yourself implicated, I couldn’t demand Wood observe. But not reading? This enraged me." This is the only quote from Wood that Lethem uses in his essay, and he buries it within a full paragraph of editorializing. This on its own would give the average critical reader pause for thought. But when you look closer, when you read Wood in the original, you notice that there is a more fundamental disconnect at work. Lethem has misunderstood what Wood was saying. Here is the Wood quote in the original, concerning the main character from Fortress of Solitude: "We never see [Dylan] thinking an abstract thought, or reading a book (there is a canonical mention of Steppenwolf, which is just more cultural anthropology, and just about it for literature in Dylan's life), or encountering music that is not the street's music (italics mine), or thinking about God and the meaning of life, or growing up in any of the conventional mental ways of the teenage Bildungsroman. There is no need for Lethem to be conventional, of course; but there is a need for Dylan to have outline, to have mental personality." Wood's point in his review of Fortress is that Lethem is a fabulous cultural chronicler of childhood, but that he fails when it comes to describing adulthood's particular individual consciousness. There is something beautiful in Wood's phrase "music that is not the street's music" -- maybe this is why Lethem chose to elide it in his quote. It reinforces how much Dylan Ebdus's character is informed by group consciousness. But all Lethem can see is Wood's snobbery. 
"Wood is too committed a reader," Lethem writes, "not to have registered what he (apparently) can’t bear to credit: the growth of a sensibility through literacy in visual culture, in vernacular and commercial culture, in the culture of music writing and children’s lit, in graffiti and street lore." But this is precisely what Wood is talking about. He is pointing out that Dylan, for all his theoretical interest in Sendak and Heinlein, is not very interesting as an individual; far from ignoring street culture, Wood points out that street culture is what makes Dylan who he is. When Dylan grows up and loses sight of the street, Dylan becomes boring. Wood's snobbery is beside the point here; the critic admits that Dylan doesn't need conventional interiority, a world of high-brow books or high-brow music -- he just needs interiority, period. We're reminded once again of Henry James, the snobby fussbudget who occasionally got it right -- "the only obligation to which we may hold a novel is that it be interesting." Dylan, in Lethem's later pages, is no longer interesting, and Wood, as a critic, wants to try to explain why. 3. Maybe a close examination of Lethem's article will shed light on the reasons why so many authors attack their critics, and why literary fights can seem so personal. Because authors, at heart, are much more interested in the verdict a critic renders than the evidence they display. And why wouldn't they be? Authors understand that good reviews sell books and that bad reviews don't -- they are the most consumer-minded of all cultural observers, because they know as well as anyone how hard the literary marketplace can be. This isn't even considering the personal aspect of having one's work attacked in public, the feeling, as Edith Wharton put it, that "one knows one's weak points so well... it's rather bewildering to have the critics overlook them and invent others." Lethem, despite his own critical experience, isn't immune to this view. 
"The review," he writes, "wasn’t the worst I’d had. Wasn’t horrible. (As my uncle Fred would have said, ‘I know from horrible.’)" Lethem looks at Wood's review in a familiar cultural context -- is it good, or is it bad? Will it sell my book or will it turn people away? Does it make me look foolish or paint me as a genius? What's the judgment here? But what if the purpose of a review is not just to render judgment, but to explicate the way literature works? One can't fault Lethem for disliking having his own work on the operating table, but certainly he's been on the cutting end before. The pain of the writer is that he has to sit still while the critic pokes through the vitals of his work and shows them to the audience. When the critical work is at its finest, the audience is like a crew of medical students standing around a doctor at work -- even when we disagree with the way things are being handled, we can still see the body of evidence and draw our own conclusions. The process itself helps us learn; it adds to our understanding of literature as a whole. That is, if the body on the table would only stop complaining. 4. This is extreme, I know. The body of work on the operating table has its own concerns. Staying alive, for example. An irresponsible critic, like an irresponsible doctor, runs the risk of killing the work -- we don't call it a "hit piece" for nothing. And if Lethem is right, and Wood is not doing high-level criticism anymore -- if, like Adler's vision of Pauline Kael, he has gone "shrill," "stale," has fallen prey to the tendency "to inflate" -- then we have legitimate cause to worry for other books, other authors. Where do we go to find out whether a critic -- or an author -- is being irresponsible, is failing at their literary mission? We go to the text, naturally -- we render the evidence as best we can. This is the burden of proof, the burden the critic takes on when making judgments. 
This is the burden Lethem must assume if he is to be a critic of Wood's own critical project. "When Wood praises," says Lethem, "he mentions a writer’s higher education, and their overt high-literary influences, a lot. He likes things with certain provenances; I suppose that liking, which makes some people uneasy, is exactly what made me enraged. When he pans, his tone is often passive-aggressive, couched in weariness, even woundedness. Just beneath lies a ferocity which seems to wish to restore order to a disordered world." Leaving aside the question of whether or not all critics (and readers) like things of certain provenances, we find ourselves again with the verdict but no facts. If Wood is passive-aggressive, why not show it? And what are we to make of Wood's supposed ferocity, his drive to correct the world? Are we supposed to take Lethem's word on Wood's intellectual makeup? Lethem gives Wood some credit: he points out that Wood wrote "4,200 painstaking words" about Fortress of Solitude. I would highlight another salient point: of these words, eight hundred (or nearly a fifth of the article) are direct quotations. Say what you will about the subjectivity inherent in what a critic chooses to quote, Wood uses ample evidence from Lethem's own text to make his points -- and nearly 600 quoted words come in blocks, without any editorializing from Wood at all; the critical equivalent of a primary source. This is not just a feature of Wood's review of Fortress -- it is a feature of his critical style. Wood may be blinkered, he may be a high-culture pedant, but he quotes with vicious abandon: great block quotes of prose that give the reader a decent sense of how the writers he picks use language, so that no matter what verdict Wood renders the reader is capable of viewing the evidence on its own merits. Take Wood's review of Alan Hollinghurst's The Stranger's Child, for example. 
As readers, we are quite justified in our anger when Wood attempts to parody Hollinghurst's style with his own prose; critics, whether they are also writers or not, are supposed to keep their own prose out of the critical game, lest we realize just how disingenuous they are. Or, as Hollinghurst himself put it, "it exposes your own fear of the charge that you don't know what you're talking about." But we can't fault the rest of the review of Stranger's Child for anything other than having an extremely intense, well-considered, and well-supported opinion, because we have the tools to respectfully disagree with the opinion if we like -- Wood gives us reams of quotation on which to draw our own conclusions. I happen to disagree with Wood's conclusions about Hollinghurst, as I do with many of Wood's conclusions, but I do not make the mistake of thinking that my disagreement with Wood's verdict means his article is a failure. I am interested in his ideas, I am interested in his evidence. Then again, it's not my book under the scalpel -- if I were Hollinghurst, I imagine I would be furious. Not being Hollinghurst, however -- a fact I share with the vast majority of the readership of The New Yorker -- I am free to enjoy the article on the merits. Quibble how you will with the verdict Wood renders on The Stranger's Child, just as Lethem does with the verdict he renders on Fortress of Solitude in 4,200 painstaking words, but it’s difficult to fault his methods -- considerable quotation, much of it in blocks, and statements based on these quotations. This is why Wood remains a sometimes inspiring, sometimes infuriating, consistently debatable literary critic. (A critic, mind you, who saw fit to send Lethem a postcard in reply to the angry letter Lethem sent him when this review was published -- and here, perhaps, we can allow ourselves a little incredulity -- eight years ago. 
A postcard pointing out that he had actually liked a lot about Fortress of Solitude -- maybe it's Lethem, not Wood, who ought to be embarrassed upon re-reading the review, so many years later.) Lethem has now written 1,700 words attacking not just Wood's article but his entire approach to book reviewing, his "bad faith" -- and he supports his argument with 47 of Wood's own words. Whether or not you would like to see Wood exiled from his favored perch atop The New Yorker's book section -- and many do -- this is not a ratio to inspire particular confidence. It is very difficult to analyze anyone's bad faith. Lethem himself points this out at the end of his essay; that he goes ahead and attacks Wood's bad faith anyway tells us something about his critical perspective. Lethem has every right to be angry at Wood, for criticizing a work which he held dearly, for rendering a verdict that might hurt the work in the marketplace. But those of us who care about criticism are more interested in the evidence than the verdict, and in the case of Lethem v. Wood, the evidence is skimpy indeed. Image: Generationbass.com/Flickr
Award-winning polyglot Turkish author Elif Şafak has been accused of plagiarism by a translator in Turkey, where her newest novel Iskender was released on August 1. Shortly after publication Iskender, which had already sold upwards of 200,000 copies, was called out by a blogger for its resemblance to the Turkish translation of Zadie Smith's White Teeth. The comparisons move from the general to the specific, with one vignette in particular offered as the most damning evidence of perfidy. Shortly thereafter, Smith's Turkish translator, Mefkure Bayatlı, weighed in with a full accusation of plagiarism. The kerfuffle, which is front-page news in Turkey, does not as yet seem to have surfaced in the American literary blogosphere, despite the relative renown of Şafak in this country. Şafak, who writes in both Turkish and English, has enjoyed huge popular success globally for, among other novels, The Bastard of Istanbul, The Saint of Incipient Insanities, The Flea Palace, and The Forty Rules of Love. She was the winner of the Union of Turkish Writers prize for The Gaze and she is a frequent presence on the Turkish best-seller list. She has done the professorial/lecture circuit in the U.S., appeared on NPR, and written for the New York Times, The Guardian, and The Wall Street Journal, among other publications. In May, Şafak shared a stage with Jonathan Franzen and Salman Rushdie as a PEN presenter. In short, she's a big deal (and in Turkey, a huge deal). For those out of the loop, here's a brief timeline of the scandal (NB: highly unprofessional translations ahead): August 1: Iskender, a novel about a bi-cultural immigrant youth living in London, hits shelves. 
August 3: Culture blog Fikir Mahsulleri Ofisi (very loosely, "Department of Ideas") reviews an advance copy of Iskender in a post titled "Elif Şafak's new novel is a little too 'Familiar.'" The review details the many ways in which the characters and themes of Iskender resemble those of White Teeth: Muslim immigrants living in London, inter-generational conflict, and so on. The blog makes an extended comparison of thematic and character similarities, before delivering the parting shot -- two versions of one moment spent daydreaming in front of a basement apartment window. The money quotes are here (note that the passage was taken from the Turkish translation of White Teeth, so what follows is the Turkish translation back into English, with many apologies to Zadie Smith and translator Bayatlı for liberties taken). Bowden's living room was situated below the road and there were bars in the windows so that the view was partially obscured. Generally Clara would see feet, tires, exhaust pipes and umbrellas being shaken. These instantaneous images revealed a lot; a lively imagination could conjure many poignant stories from a bit of worn lace, a patched sock, a bag that had seen better days swinging low to the ground. (White Teeth, p. 30, Everest Publishing) He would sit cross-legged on the living room rug and gape at the windows near the ceiling. Outside there was frenzied leg traffic flowing right and left. Pedestrians going to work, returning from shopping, going on walks... It was one of their favorite games to watch the feet going to and fro and try to guess at their lives -- it was a three-person game: Esma, Iskender and Pembe. Let's say they saw a shining pair of stilettos walking with nimble, rapid steps, their heels clicking. "She's probably going to meet her fiance," Pembe would say, conjuring up a story. Iskender was good at this game. 
He would see a worn, dirty pair of moccasins and start explaining how the shoes' owner had been out of work for months and was going to rob the bank on the corner. (Iskender, p. 135, Doğan) August 4: Burak Kara, writing for Vatan newspaper, prints a statement from Bayatlı, the Turkish translator of White Teeth: A coincidence of this magnitude isn't possible. Şafak, using Zadie's book as a template, made the family Turkish and wrote a book. She simplified the topic. I especially note the similarity of the window story. Ten parallel stories like this can be written, but the window story isn't even a parallel. This is called plagiarism. It's like an adaptation. It surpasses inspiration... August 7: Şafak, one of her editors, and the General Director of Doğan Kitap Publishing respond in the Sunday print edition of Milliyet newspaper (web version here). The editor defends the book, noting that White Teeth bears resemblance to Hanif Kureishi's The Buddha of Suburbia (published before) and Monica Ali's Brick Lane (published after). "There are a number of similarities between Smith and Ali's books," stated Şafak's editor. "Doctoral theses have even been written on this topic comparing the two novels. And yet no one says that Monica Ali plagiarized." The General Director, too, addresses the natural and inevitable similarities between works of immigrant literature dealing with similar themes: "These are probably not the only two novels for whom the basement apartment represents a state of destitution." And Şafak hits back: Enough already! Iskender, which I wrote in England, which my English publishers read line by line with great pleasure, which my English agency represents with great pleasure, will be published back-to-back in England and the U.S. in 2012 by Penguin and Viking, two of the best publishing houses in the world. Given all this, I don't take seriously the accusations levied by a handful of people whose intention is to wear me down. 
As with all of my books, my hard work and imagination are evident in this novel. I'm fed up, we're fed up with the reckless attacks against people who do different work. My reader knows me. Iskender is my eleventh book, my eighth novel. This is what I say to those dealing in slander, gossip, and delusional behavior. August 8: Fikir Mahsulleri Ofisi, the blog that published the original review, addresses its old and new readers, reminding them that their original statement was simply that the book "might show influence to an extent that opens the way for an argument of plagiarism," and that the real accusations were made by Smith's translator. Like any hapless blogger who starts a shitstorm, they are gratified and bewildered by the new readership, alarmed by the repercussions, and disgusted by some of the comments. It's as if internet shitstorms are the same in every language! August 10: Fikir Mahsulleri Ofisi publishes a timeline for new readers, a response to Şafak's response, and an epic polemic about the state of criticism in Turkey. There was value in bringing this to light: plagiarism is serious to the last degree, and not a claim that can be made lightly. But it is not an insult or an attack. As for comments like [columnist] Deniz Ülke Arıboğan's tweet, "to accuse an author of plagiarism is no different than to curse them" -- well, to curse someone is ill-mannered, it's hitting below the belt. One refrains from responding to curses. As for plagiarism, when it is supported with concrete information, it is a serious claim that must be responded to with a cool head. It's a criticism. Since this isn't something that is well known in Turkey, let me spell it out again so that it's well understood: CRITICISM.
Moving to the political, the post goes on to criticize people who use Şafak's 2006 appearance in court for denigrating the Turkish state (Article 301) as a reason to excuse or discount the plagiarism controversy: Just as Elif Şafak's liberty to write novels in the face of conservative laws is held sacred, the liberty of others to criticize her novels must be held sacred, too. And what to do about the warning left by a commenter who calls him/herself Elif Şafak: "If you don't erase this, criminal prosecution can be started against you"? What indeed? Without having read both Iskender and the Turkish translation of White Teeth, it's impossible to weigh in on the validity of the claims, but it'll be interesting to see what comes of this. We would love to hear from readers who have some perspective.
1. Collaborating with another writer is something I've done only once. It was for a Washington Post Magazine cover article about the stock car racing legend Richard Petty, who was making his first run for political office in the fall of 1978. At the time I was working as a newspaper reporter in Greensboro, N.C., and after work I would drive the 22 miles to Petty's home with one of the paper's editorial writers, and we would spend the late afternoons talking with Petty as he drove his customized van along the back roads of Randolph County. Petty was always dressed in his trademark cowboy hat, cowboy boots and wraparound shades as he knocked on doors, flashed his famous thousand-watt smile and urged people to help elect him to the board of county commissioners. Naturally, Petty lapped the field. When it came time to write the article, my collaborator gave me his notes and disappeared. This delighted me. I was free to sit alone in my room using his notes and my own to write a draft of the article as I thought it should be written. My collaborator then made suggestions, some of which I heeded, most of which I ignored. The article appeared under both of our bylines, with mine before his, an arrangement that struck me as more than a little unfair. We also split the $750 paycheck down the middle, which struck me as enormously unfair. Afterwards I felt like the character Nelson Head in the Flannery O'Connor short story, "The Artificial Nigger," a young yokel who survives a harrowing visit to the big city of Atlanta and vows never to return. To paraphrase Nelson, my feelings about collaborating with another writer were: I'm glad I did it once, but I'll never do it again. 2. My vow has remained intact for more than 30 years, but I recently learned about a group called NeuWrite that has forced me to reconsider my abiding disdain for the art of collaborative writing.
The group began to take shape back in 2007 because a Columbia University neuroscience grad student named Carl Schoonover had arrived at a blunt realization. "Lots of interesting neuroscience research gets reported badly," he says. "And most scientists can't write for shit, myself included, because they don't teach you how to write in science grad school. The trick was to find writers." So after discussing the idea with his colleagues, Schoonover persuaded Stuart Firestein, the chairman of Columbia's biology department, to introduce him to Ben Marcus, who heads the university's Master of Fine Arts program in non-fiction writing. Marcus offered the names of half a dozen of his students who might be interested in collaborating with neuroscience grad students, and Schoonover took each of them to The Hungarian Pastry Shop near campus to pitch his idea. In early 2008, the group came together for the first time at an informal salon in the home of Firestein and his wife Diana Reiss, a psychology professor at Hunter College. "I think you need to develop trust for it to work," Schoonover says. "We scientists are accustomed to collaboration. It's built into the scientific process. But the writers were very reticent, especially at first." As the members became more familiar and comfortable with each other, scientists started pairing up with writers and working together. Eventually the salon atmosphere of the meetings gave way to a classic MFA workshop format – members would bring in a piece of their own writing for the group to discuss; established science writers would be invited to speak; the group would read and discuss examples of high quality science writing. Schoonover wound up pairing with Abigail Rabinowitz, 32, who has since gotten her MFA and gone to India on a Fulbright grant to study surrogate motherhood in Mumbai. Rabinowitz had wanted to be a scientist when she was growing up, and the announcement that NeuWrite was forming in early 2008 caught her eye.
"I wanted to find my way back to science through writing," she says, "and I thought this would be a great way to look at writing from a different perspective and possibly find new stories." Schoonover and Rabinowitz's first collaboration was on an article for Science magazine about a show at the American Museum of Natural History called "Brain: The Inside Story". "First, we heard the museum's directors speak about how they'd planned the show," Rabinowitz recalls. "Then Carl and I walked through the show together and shared impressions. If I wasn't sure about something, he explained it to me. Our impressions were very similar, even though we were coming from different backgrounds. We both felt the show wasn't organized visually as well as it could have been." Next came the hard part. "So we sat down together with a computer," Rabinowitz continues. "We both had a lot of notes, and we outlined the piece together. I had a vision for the introduction when you walk into a kind of spaghetti forest that represents the brain. Carl also thought it was a good way into the piece. Then we moved through the show, and that became the article's structure. I typed while we were both speaking – not trying to hone language, just trying to get basic ideas in order. Then I wrote the first draft until the halfway point and e-mailed the draft to Carl, who then edited what I'd written – not structure, but word choice and one factual error and some added information. Then he wrote the second half. He sent it back to me and I edited what he'd written. We both killed the other's darlings." More and more refined drafts went back and forth a half dozen times. Changes were tracked on each draft, and the collaborators spoke frequently by phone. The finished product possesses two things you don't always find in science writing: accurate, easily comprehensible information related in a style that's brisk and clear. 
The pair's next collaboration was an article for the New York Times about the emerging field of optogenetics, which uses flashes of light to control electrical activity in specially engineered neurons. The technique is beginning to yield insight into such human disorders as Parkinson's disease and anxiety. Rabinowitz now feels that collaboration, though painful, is worth the trouble. "Ultimately I think it produced better writing than I could have done myself," she says. "Carl knows what he's talking about. If he liked something I wrote, I got the joy of recognition. But it can be frustrating too. I wouldn't want to write this way with most people I know, because it's hard and there has to be a good reason to do it. If you're writing with somebody else, you need to communicate very well." For Greg Wayne, a grad student in theoretical neuroscience and a member of NeuWrite, this wasn't his first exposure to collaborative writing. Wayne and his brother, a novelist, had worked together on humor sketches, a form that's "incredibly amenable" to collaboration, he says. "With humor, there's a joke every line, and that can be edited immediately. Is this funny? Does that work? But if you have long, discursive writing, sitting at the same keyboard is much more difficult. I think novel writing would be just about impossible." Wayne collaborated with the writer Alex Pasternack on an article for Science magazine about a panel on artificial intelligence at the World Science Festival – replete with robot demonstrations, including Watson, the "Jeopardy!" champion. The experience left Wayne convinced that there are times when two minds can produce better science writing than one. "For the article we divided up responsibility based on what we know best," Wayne says. "Alex, as a writer, was going to look at social issues, how the public views artificial intelligence, how people think about a Stanley Kubrick sci-fi movie.
As a scientist I would focus on the nuts and bolts of how the robots work. In the end, neither one of us alone would have been capable of writing what we wrote together." 3. Tim Requarth studied Spanish literature as an undergrad and wrote a book about his father's dementia before entering Columbia's neuroscience program. Requarth, who recently wrote a review here at The Millions of the neuroscientist David Eagleman's best-seller, Incognito: The Secret Lives of the Brain, teamed up with Schoonover to help run NeuWrite. "I was a logical person to step in because I've had a foot in both worlds – science and writing," says Requarth, who has collaborated on articles for Science and Scientific American with Meehan Crist, who has just finished writing a book called Everything After, about traumatic brain injury. "One thing we've all discovered is that it works better if one person writes the first draft. Meehan and I discuss the ideas and arrive at a sketch, details to include, how to start. Then I sit down and write. Then Meehan does a first-pass edit, and we pass it back and forth until we're both happy with it. When someone reads your rough draft, it's like letting them see you half-dressed. It's about arriving at a level of intellectual comfort – or having faith in the process. In a successful collaboration, both people feel like they did less than half the work." Requarth is now working to start a second NeuWrite group that will branch beyond the neuroscience field and beyond the Columbia campus. He's recruiting students from other science disciplines at NYU and CUNY, as well as journalists. Another group is beginning to form in Boston. Schoonover is optimistic that the group's tenets will spread. "We're trying to make the argument to science editors that the best way to guarantee accuracy and avoid hype is by having a scientist involved in every step of the crafting of articles," he says.
"Once we show that this collaboration between writers and scientists works with NeuWrite, we'd love to see it become routine. We're sowing the seeds for expansion." (Image: Christmas DNA from pagedooley's photostream)