While it should come as news to absolutely no one that Sony is readying Dan Brown’s Angels and Demons (IMDb) for the big screen (would it surprise anyone if Dan Brown’s grocery list fetched an eight-figure deal?), what might come as a shock is the price paid to screenwriter Akiva Goldsman. That price, $4,000,000, is a new record for a “for hire” project, and ties the payday Shane Black received for “The Long Kiss Goodnight” (IMDb) as the most money ever paid to a screenwriter for a single writing credit. Goldsman secured this filthy lucre despite tepid (read: hostile) reviews of his adaptation of The Da Vinci Code (IMDb). With this record-setting paycheck, and kudos from the ever-fawning LA Times column “Scriptland,” does this signal a new golden age of screenwriting? Not according to the LA Weekly article “Screenwriters in the Shit.” It’s articles like this that make me want to move to the sticks and take up animal husbandry.
Reading my Stanford friend and colleague Nikil Saval’s review of The Dark Knight at n+1 today, I found myself of two minds about his take. I too had exclaimed angrily about the impossible bustiness of the whole troupe of Russian ballerinas Bruce Wayne (Christian Bale) kidnaps on his yacht, and the befuddling reappearance of Cillian Murphy (the villain psychiatrist from Batman Begins), as well as the almost unwatchable chaos that was most of the action scenes, and the manipulative gotcha “black criminals are human too!” scene. I had exclaimed about other things Nikil hadn’t mentioned – like why bother getting the wonderful Maggie Gyllenhaal to play just another insipid damsel in distress (albeit one weakly disguised as a “strong, independent woman”: she’s a DA! and she kicks the Joker in the balls while wearing an evening dress!).

But the meat of Nikil’s review was his reading of The Dark Knight’s plot as political allegory. I am a rather bad reader of allegorical plots, and having been told in vague terms by many people that the political implications of this Batman were intense, I hoped my symbolic reading skills were up to the task. My reading of the plot as allegory went something like this: our country has been taken over by a demented clown who burns money and oil and whose motives are incomprehensible. As you can imagine, I was very disappointed reading thus – I thought that this was not a very useful or provocative take on the current state of the union. I had also been told that Batman “goes over to the dark side” in this movie, but as far as I could see, except for wearing the black he always had, he was still the good guy (we knew his motives remained pure). Never, not once, did he seem taken in by the thrilling chaos that the Joker was peddling. (I had had a vague image of Batman and the Joker, a la Danny Aiello and Bruce Willis in the under-appreciated Hudson Hawk, synchronizing a heist by both singing “Swinging on a Star” in the same tempo. Holding hands while causing mayhem together! What fun! Try pulling that plot off next time, Christopher Nolan!) Again, I was disappointed.

As Nikil’s review will show you, I missed rather a lot. The crux of the allegory and the moral ambiguity lies in Batman’s recourse to criminal methods to get the job of crime fighting done: his creation of a god-like surveillance system that violates the privacy of every resident of Gotham to find the Joker, and his beating information out of the Joker about the location of hostages and ticking bombs. In this, we can see the spectral reenactment of our own political situation: the US, which imagines itself as the world’s superhero, the champion of good, betraying its ideals (civil rights, the sovereignty of law, peace, justice) to defend these same ideals. Here was the genuine ambiguity and the interesting symbolic plot I had missed. As Nikil puts it, “to fight anarchy is to lose one’s bearings, and move one’s own soul dangerously close to evil.” And this anarchy, of course, is terrorism and terrorists embodied in the Joker.

No matter what you might, in the end, think of Batman’s (or the United States’) ultimate moral affiliation after these adventures, Nikil’s plot reading holds.
My being of two minds takes issue more with Nikil’s idea that The Dark Knight is somehow a propaganda classroom, manufacturing citizenly consent for US policy and reinforcing in even its youngest viewers “every conceit that this childishly self-regarding nation has about its mission in the world”:

And so the Joker, like other criminals in the film, is treated by Batman the way America treats terrorists: he is tortured. Intellectuals who favor the use of torture in the United States often reduce the ethical question to a hypothetical “ticking bomb” scenario, in which a terrorist reveals he has a plot to blow up thousands of people in one hour, and the only way for officials to extract information from the lunatic in time is through ruthless physical violence. “Ethics 101,” Charles Krauthammer calls it. “Hang this miscreant by his thumbs. It is a moral duty.” It doesn’t matter that, in a real Ethics 101 class, one would learn that legal ethics is not reducible to a childish theoretical picture; that there is not a shred of historical or present evidence on which to base such hypotheticals. (There are bombs in the real world, but they never tick.) Yet the real-world debate over torture is frequently reduced to this argument, because it has a terrifying simplicity to it. As in the scenario itself, the argument doesn’t even give you time to think: you are simply asked to decide, and your decision then becomes actual policy. When it is presented in something like real time, as it is in The Dark Knight, it actually functions as “Ethics 101” for the children who see the film.

And I take issue with this not only because, dunderhead that I am, the only childishly self-regarding conceit I came away from the movie with on my own was “our president is a psychopathic jester who is burning down our economy and must be stopped at any cost – damn the law.” No, I take issue with this because it means that there is no difference between art and life – that the moral and social rules and actions we observe and tolerate in comic books and novels foist themselves upon us as we read and work their way into our real lives. Saying that children who watch Batman are being primed to condone their country’s use of torture is like saying that reading Patricia Highsmith’s Ripley novels will make us kill our rich friends and assume their identities – or, at least, that we’ll approve of those who do.

This just gets what movie-going and novel-reading are about all wrong. I think most people go to the movies for escape – we get out of our own heads, away from our own worries; we suspend the real world for a while to move into a variety of different, often joyfully impossible, worlds. Here we find respite from our own lives. I also think the rules of genre are comforting. Real-life plots are unpredictable: we never know in real life when we’re walking into a chapter of personal tragedy, when things might take a romantic turn, when they’ll go Beckett-y or Kafkaesque. But if I rent, say, 27 Dresses or The Holiday, I have the comfort of knowing how it will go, even though I have the pleasure of not knowing quite how it will go. It’s soothing. And I don’t, unless I’m Don Quixote, get up from either movie and expect life to yield up to me the personalities or plots I just watched.
Just as I don’t get up from Batman thinking that I wouldn’t mind seeing more terrorists waterboarded, even if, obedient student of the comic book genre that I am, I accepted whatever “the good guy” had to do to get the job done as “good” – which is probably why I missed identifying Batman’s “criminal activities” as such (“Yeah, but it’s Batman who’s spying on everyone. Now if it were the Joker, it’d be another story”).

Admittedly, this is part of a larger resentment and even anger I harbor against intellectuals at the movies – and indeed part of my somewhat perverse occasional campaign against taste and connoisseurship generally. Should Batman induce such anguish and demand such moral seriousness as it does at n+1? (“Why so serious?” as the Joker puts it.) Although I agree with much of Nikil’s reading, I find in it something repellent (morbid, paranoid, despair-inducing) that I associate with the Leftist intellectual temperament. I have written before about Theodor Adorno’s Minima Moralia: Reflections on a Damaged Life, and it is in Minima Moralia that I find the purest expression of this attitude that troubles and repels me:

There is nothing innocuous left. The little pleasures, expressions of life that seemed exempt from the responsibility of thought, not only have an element of defiant silliness, of callous refusal to see, but directly serve their diametrical opposite. Even the blossoming tree lies the moment its bloom is seen without the shadow of terror; even the innocent “How lovely!” becomes an excuse for an existence outrageously unlovely, and there is no longer beauty or consolation except in the gaze falling on horror, withstanding it, and in unalleviated consciousness of negativity holding fast to the possibility of what is better… The malignant deeper meaning of ease, once confined to the toasts of conviviality, has long since spread to more appealing impulses… Every visit to the cinema leaves me, against all my vigilance, stupider and worse.

Adorno’s book is a long collection of fragmentary meditations in the same inconsolable tone as this one. And while I have moments of deep sympathy for his tragic worldview, his sense that everything in our world is broken and sinister and corrupting, I think, for myself, that to linger in this mindset for long would be devastating. I would kill myself. I continue to marvel that the anguished consciousness on display here managed to survive itself for 250 pages. There is something of Adorno in Nikil’s take on The Dark Knight – the suggestion that watching this movie – maybe movies generally? – can be dangerous and morally suspect: that we Americans are watching our crappy, multi-million-dollar, nation-affirming movies while the world we set on fire burns. We retreat into movies (becoming ever easier in this era of Netflix, iTunes, and pay-per-view), neglect the world, and become dumber for our retreats into escapism, thus less capable of fixing the world we fled in the first place. “History will record,” Nikil writes, “that, while a monumental catastrophe overtook the world financial markets and a new colonialism destroyed the lives of nations, the United States still found time and money to resolve in its films what it could not, for the life of it, perform in the world.”

Maybe History will. And maybe my logic is disgraceful and maybe I am deluded – or just weak (a junkie). The number of head pats, cheek pinches, and chin chucks I continue to get even now that I am almost 30 suggests that intellectual seriousness continues to elude me: but I love movies.
And I defend them. They allow me to go into worlds that are more beautiful and make more sense than ours. Going to the movies, reading novels, is a kind of idealism for me, a longing for order and beauty that I will never find in this world. Maybe this isn’t morally justifiable, but it’s psychically necessary. Even Batman, flawed as it was, gave me a much-needed respite from myself.

Is Batman the problem? Is Batman a bigger problem than an impenetrable seriousness, than a relentless critical certainty that seems sometimes to insist that despair is the moral high ground?
Whether or not Invictus—Clint Eastwood’s forthcoming film about South African rugby—manages to sentimentalize apartheid, the 2009 film season has already been defined by gritty, emotional realism. Kathryn Bigelow’s anti-epic, The Hurt Locker, was the first fictional film on the Iraq War to approximate the conflict psychologically. Save for David Simon’s HBO miniseries Generation Kill, only The Hurt Locker achieves verisimilitude; sitting through the picture in the theater must feel something like serving in country. More recently, Lars von Trier’s harrowing and ultimately despicable masterpiece Antichrist was at least as unwatchable, to those who haven’t suffered such a loss, as losing a child is unimaginable.
Lee Daniels’ second feature, Precious: Based on the Novel Push by Sapphire (the novel was adapted by Geoffrey Fletcher), does for life in the ghetto what Bigelow did for modern war, and von Trier for filicide. At the start of the film, Precious (Gabourey ‘Gabby’ Sidibe) is sixteen, illiterate, obese, and pregnant with her own father’s child—for the second time. She’s a quiet girl in the initial scenes, and when we first hear her speak it’s in voiceover. Though she may be piteous—in fact, there may never have been a character more deserving of sympathy—her tone isn’t pitiable. Precious lives in her imagination, and it is only in her fantasies that she finds sovereignty. At home with her mother, she is a slave in the truest sense: she’s an indentured servant and a source of income for her master. Welfare is the lone industry in the film, and the greater the recipient’s need appears, the larger the monthly check. For Precious’ mother, Mary (Mo’Nique), surviving without work means keeping Precious—and whatever issues from Precious—in her charge. Before a visit from their social worker, Mary’s mother brings over Precious’ first child, whom Mary, on paper, claims as her own. The girl is a product of incest and afflicted with Down syndrome. She’s called Mongo, short for mongoloid. We’re never given her real name.
Precious is, throughout, a film about perverse subjugation. Mary profits from her control over Precious (the fuller the house, the greater the yield), but the gains themselves are dependent upon stasis. If Precious leaves the house, Mary can’t eat. Precious, therefore, is a resource and a crutch, and like any head of an empire, Mary fights violently to keep her progeny in the commonwealth. In an early scene, a counselor from Precious’ school stops by the apartment building to confer with Mary. Thinking she’ll be reported (for abuse and neglect, ostensibly), Mary attacks her daughter, throwing at her whatever she can find, in this case a shoe. Later, she throws a television.
The film is in many ways unbearable to watch, and because of that all the more necessary to see. But unlike Antichrist, for instance, which was relentlessly horrific to no purpose (and still astounding in spite of that), Precious strives to alleviate misery. Precious moves to an alternative school where she meets Ms. Rain (Paula Patton), who commits herself wholeheartedly to Precious’ resurrection. What keeps this plot point from turning saccharine, though, is the fact that Precious herself may be past saving. Even if she were to earn her GED and go to college, one feels by the end of the film that her psychic wounds are too deep to close. How can she, for instance, tell her son about his father?
Precious is the preeminent victim of her circumstances. But at the same time, excepting a few intense moments of introspection (the grandest being the final encounter she has with her mother at the welfare office, a scene that may be the most terrifyingly cathartic of the year—surpassing even Charlotte Gainsbourg’s self-mutilation scene in Antichrist), she doesn’t dwell very long on her plight. And that she remains optimistic in spite of everything is either this film’s greatest flaw or its triumph. If it’s the latter—and I would argue it is—Precious is much more than an exposé of poverty or an argument against government aid. It earns its optimism, if only because the labors necessary to achieve that hope are so awful.
When Precious leaves the welfare office, carrying her son, and holding her daughter’s hand, I thought of the close of Kubrick’s Paths of Glory. After the execution of the innocent deserters in that film, the surviving soldiers sit in a mess hall. A beautiful young woman comes out onto the stage. The men jeer her as she begins to sing. But soon they stop. They’re unable to summon more insults. The beauty of her art, for that moment, eliminates their horrors. Similarly, Precious’ love for her children wipes away, temporarily, the mess of her circumstances.
Lee Daniels, by making this film, trained his eye—subjective as that eye may sometimes seem—on a family whose abhorrent situation, terrifyingly enough, isn’t unique. For many audiences (even those familiar with the fourth season of The Wire), Precious ought to come as a sickening shock. Despite its dream sequences and fantasies, it is overwhelmingly real.
All readers have seen literary works they adore adapted for the screen, cataloging, scoffing, cringing, and wondering at changes to the original narrative — or, if lucky, delighting in them. No readers, though, have had the experience that devotees of A Game of Thrones, or more specifically, of George R.R. Martin’s in-progress suite of novels A Song of Ice and Fire, are about to. The upcoming season of HBO’s Game of Thrones will reportedly push past Martin’s fifth and most recent book, extending numerous plotlines beyond where readers last left their heroes. The series will continue to do so until it concludes, presumably reaching its denouement long before Martin can publish the two remaining novels he plans.
Fansites are abuzz with virtual hand-wringing about this, their anxiety different from the usual panic about a screen version’s faithfulness. Game of Thrones is about to go where no adaptation has gone before, into the realm of the unpublished source, adapting books that do not yet exist, that will become available later — thus undercutting the very premise of adaptation. Anyone fatigued with Game of Thrones, the socio-technological phenomenon — most illegal downloads! most on-line videos of viewers watching characters die! — may find their interest piqued by the show’s challenge to modern assumptions about adaptation and the idea of canon.
Our notions of original and adaptation logically privilege chronology. We call the first published version of a narrative the original and consider the versions that follow adaptations — less definitive, and somewhat degraded. We make exceptions, of course: William Shakespeare’s plays are adaptations, but their stature is elevated by his genius and cultural context. (For Shakespeare’s time, indeed, notions of originality and adaptation would have made no sense.) We are also used to privileging print above screen, but chronology seems to take precedence: nobody gives a darn that Graham Greene’s screenplay and subsequent novella of The Third Man call (absurdly) for the hero to get the girl at the end, because nobody saw his screenplay before the film came out; the novella also arrived afterwards.
These principles lurking in our thoughts, we usually watch screen adaptations of our favorite books with a kind of dual consciousness, what adaptation theorist Linda Hutcheon calls (with a nod to Mikhail Bakhtin) “an ongoing dialogical process,” and “an intertextual pleasure that…some call elitist and others enriching.” That is, we watch adaptations and enjoy comparing them to the source, perhaps thinking That’s not what happens in the book or I caught that in-joke. The adaptations I have in mind here are neither the inspired by kind, nor the let’s focus on two minor characters instead of Hamlet kind. Productions like Game of Thrones are predicated on a large degree of faithfulness. Sure, the series has deviated and bastardized — every season moves further afield from the books — but it does so largely in order to keep protagonists in the foreground and Martin’s structure intact.
Until now. The producers, to whom Martin has revealed his plans for the conclusion of his books, have announced that henceforth the adaptation will diverge significantly. Naturally, they have not announced how much, or starting when, or with which plotlines and character arcs, and that’s where this gets interesting. Devoted readers’ “intertextual pleasure” will be tempered with uncertainty, as they may find themselves thinking: That’s not what happens in the books — yet! or I don’t know any more about this than my idiot friend here does. The commentariat has expressed concern about spoilers for the books, but the fact is, no one will know when the show is revealing Martin’s plot and when it is telling a different story. As a corollary, when readers finally receive Martin’s sixth and seventh novels, they may be discomfited by literary narratives contradicting the screen version.
This reversed chronology of print to screen destabilizes categories of original and adaptation. Yes, the next three seasons of Game of Thrones will still spring from Martin’s fictional world, but when the series becomes first to portray developments beyond the books’ chronology, when its narrative unfolds in dialogue not with a prior text but only with fan speculation, labeling it an adaptation will seem wrong. What if Martin revises his plot under the influence of the show? (Will anyone know that he has not?) Which then becomes original, and which adaptation? The conceptual binary is inadequate.
Similarly disrupted by the particularities of Game of Thrones is the notion of canon, the designation of certain texts as authentic at the expense of others. The term dates to the early Christians, who felt the need to legitimate the real gospel created by the right people under divine guidance, as opposed to apocryphal spin-offs. Our current idea of canonicity derives from this sense of a unified and godlike authority. Its 20th-century paradigm is perhaps the case of Sherlock Holmes: when Arthur Conan Doyle, tired of churning out detective stories, killed off the beloved sleuth in 1893, readers filled the void with fan fiction and biographies, even after Conan Doyle bowed to pressure and resuscitated — and copyrighted — the character in 1903. The preponderance of Sherlockiana was termed non-canonical by the literary industry, despite much fan dissent. It is an example that highlights canonicity’s deference to the powers of the creator, authorial intention combined with intellectual property law and the marketplace.
In recent years, the deployment of canonicity has resurged as technology has exponentially expanded the dissemination of texts. It is especially present in the context of science fiction and fantasy, genres set in fictional realms, worlds subsequently used in adaptations and continuations, whether licensed (such as recent novels depicting Isaac Asimov’s Foundation world, or commercial video games, role-playing games, etc., based on film and book franchises) or unlicensed (fan fiction, costume play). The idea of canon helps those who care maintain clear divisions between what really happened in that universe, according to its creator(s), and what is some loser’s version of what could have happened. Of course, there are disturbances in the force: the Star Wars films re-edited and revised by George Lucas in the 1990s have been anointed by their creator as canon. But so many enthusiasts publicly denounce Lucas’s rewriting of specific moments — such as when Han Solo is fired upon by Greedo first, and only then shoots back — that the significance of canon diminishes. Lucas’s reaction has been to make the revisions the only versions commercially available and to claim that the original reels are ruined. The canon, it turns out, is auteur theory beholden to intellectual property rights and to estates covering their assets, but it may be challenged by audiences voting with their mouse-clicks and wallets.
Game of Thrones makes all this clearer, even as it offers the possibility of a less monolithic sense of canon. It may be, years from now, that the novels will be seen as canon, that audiences will instinctively defer to Martin’s vision. But Martin himself, by inviting the show creators to deviate from his plot, has opened up the possibility that two versions can exist on equal terms. Then, as now, more people will have seen the series, and seen it first, than will have read the books. Someday it may be considered as canonical as the second of the two Adam and Eve stories in the Old Testament.
It is hardly news by now that Broadway theater has become a high-priced museum of its former self. This year’s Broadway season, which kicked off earlier this month, will feature a few new plays, including a limited run of Outside Mullingar from Pulitzer-winner John Patrick Shanley in January, but for the most part Broadway theaters will host the usual disheartening mix of jukebox musicals, retooled Disney movies, and revivals of hoary classics populated by downshifting movie stars.
For those who care about theater as an art form, it is this last category, the endless stream of revivals of classic American plays populated by movie stars, that really hurts. Sure, there are theaters off-Broadway and in other cities around the country that still commission and produce new plays, but the Broadway revivals, like the production of Tennessee Williams’s The Glass Menagerie starring Cherry Jones that opened earlier this month, show that there was once a time when serious new plays found favor not just with a small, theater-loving elite, but with a broad cross-section of middle-class America.
My own grandparents, like many educated young people in the 1940s, loved culture and fine things, but they lived in an isolated mill town in Southern Virginia without good bookstores or restaurants, much less a vital theater scene. So, like thousands of their fellow Americans, once or twice a year, they hopped a train to New York to eat a few decent meals, shop at the department stores along Fifth Avenue, and “see the shows,” which for them meant Broadway. This was, for a generation of American provincials like my grandparents, the height of sophistication and an annual ritual that sustained New York theater for decades.
Now that golden age of serious, culturally ambitious drama is gone forever.
Or is it? Certainly, given the sky-high ticket prices and the emphasis on circus-like musicals catering to baby boomer nostalgia, the next generation of great American dramatists like Tennessee Williams or Lorraine Hansberry, whose 1959 classic A Raisin in the Sun is being revived this spring, won’t be returning to Broadway any time soon. But in fact we have a platform for serious, character-driven drama in this country, and it is more popular and broad-based than Broadway ever was. It’s called cable television.
The inexorable slide of quality theater from the cultural mainstream and the rise of cable TV as the defining dramatic art form of the 21st century are a prime example of technological “creative destruction” at work. The theater of Broadway’s Golden Age was indeed terrific stuff, but as a consumer product it was wildly inefficient. Because shows were live and unrecorded, they could be seen by a limited number of people, many of whom had to travel hundreds of miles to get to the theater. Successful Broadway shows spawned touring companies – as hit musicals still do to this day – but such tours are costly to run, and audiences in the smaller cities inevitably get a watered-down version of the real thing, with lower-quality actors and production values.
Cable shows like Homeland or Breaking Bad, which airs its series finale this Sunday, are cheap and easily accessible to anyone with a subscription to cable or Netflix. More importantly, though, thanks to a complex set of market forces, all the incentives push cable channels to hire top-drawer actors and writers and allow them the artistic freedom to create compelling characters and story lines, much the way the best Broadway plays did half a century ago. This fragile cultural moment won’t last – more on that later – but for now it seems clear that if Tennessee Williams and Lorraine Hansberry were writing today they would be showrunners for a cable series, because that’s where the audience is.
You can measure the Golden Age of American theater in many ways, but I would mark it from the 1944 debut of The Glass Menagerie to the opening night of Edward Albee’s Who’s Afraid of Virginia Woolf? in 1962. There were, of course, serious American playwrights before then – Eugene O’Neill is the best-known, but there were plenty of others – but those writers always seemed slightly ahead of the popular culture of their time. Likewise, many great American plays have debuted since 1962, and a select few, like Tony Kushner’s Angels in America, became part of the wider national conversation.
But for a short time after the Second World War, American commercial theater hit that elusive sweet spot where popularity meets ambitious social and artistic agendas. In his fascinating 1987 autobiography Timebends, Arthur Miller speaks of this era as
a time when the audience was basically the same for musicals and light entertainment as for the ambitious stuff and had not yet been atomized…So the playwright’s challenge was to please not a small sensitized supporting clique but an audience representing, more or less, all of America.
Miller explains how this broad-based, yet culturally hungry audience shaped the work of the era’s two greatest writers, himself and Tennessee Williams. Both men were, to differing degrees, outsiders to American culture – Williams because he was unapologetically gay, Miller because he was a Jew with strong radical beliefs. In another era, Miller says, they might well have slanted their work to please a minority audience that already agreed with them, but suddenly in the postwar years there was a mainstream audience waiting to hear what they had to say, and being both great artists and profoundly ambitious men, they opened their work outward to a mass audience.
To do that, they didn’t preach to their audiences as Clifford Odets did in his political plays of the 1930s or bash the viewer over the head with a bleak vision the way O’Neill too often does in his plays. Instead, Miller and Williams created characters – indelible, psychologically complex protagonists like the struggling salesman Willy Loman riding on a smile and a shoeshine or the tragic, half-mad Blanche DuBois forever depending on the kindness of strangers. These characters had to be psychologically complex and indelibly drawn because that’s how you appeal to a heterogeneous audience not already united by social background or political outlook: you get audiences to care deeply about a character, to see themselves in someone who may not be in any outward way like them. Once you’ve done that, an audience will follow you anywhere.
Interestingly, it wasn’t the movies that put an end to Broadway’s Golden Age. Hollywood’s own Golden Age, stretching from the advent of sound in the late 1920s to the late 1950s, roughly overlaps that of Broadway. No, it was TV that killed the Broadway of Miller’s era – that and probably the jet plane. At a time when the only viable home entertainment was radio and all but the stratospherically rich traveled by train, car, or boat, Broadway theater was part of a broader leisure industry that catered to Americans like my grandparents yearning for cultural experiences they couldn’t enjoy in their own hometowns.
But once the desire for entertainment could be satisfied by a magic box in the living room and the desire for horizon-broadening travel could be satisfied by plane trips to Europe and beyond, Hollywood and Broadway had to adapt or die. They did so by splitting their audiences – “atomizing” them, in Miller’s terms – into high and low. After a decade of trial and error, Hollywood reinvented itself in the 1970s with ambitious, director-driven films like Roman Polanski’s Chinatown and Woody Allen’s Annie Hall and money-spinning summer blockbusters like Jaws and Star Wars. Broadway did much the same thing, filling the bigger houses with crowd-pleasing musicals like Cats and A Chorus Line while supporting more adventurous, writer-driven work by the likes of David Mamet, Sam Shepard, and Wendy Wasserstein.
This worked for a time, thanks in large part to off-Broadway and the regional theater movement, which allowed playwrights to grow their careers at subscription-based resident theaters around the country and then bring their most popular work to New York for a money-making Broadway run. This system, low-paying and outside the mainstream as it was, still made for some pretty terrific theater. Shepard, sustained by a long-running affiliation with San Francisco’s Magic Theater, introduced audiences to his singularly bleak and funny Western vision, while August Wilson, who premiered most of his plays at the Seattle Repertory Theater, opened a window onto working-class black characters quite nearly invisible to the mainstream.
But while regional theater provided an audience for more adventurous fare, unlike in Arthur Miller’s day, it was no longer the same audience that went to see the big musicals. Mamet, Shepard, and Wilson, talented as they were, were no longer writing for “an audience representing, more or less, all of America,” but for the “small sensitized supporting clique” that Miller saw as an artistically narrowing force. And then, lo and behold, the free market worked its magic. As Broadway ticket prices escalated to pay for ever more lavish, spectacle-driven musicals, it became harder to persuade theatergoers, even the ones who like the more ambitious stuff, to risk several hundred dollars on a new play.
Enter Carrie Bradshaw and Tony Soprano. Gallons of ink have been spilled, and thousands of terabytes expended, trying to explain why audiences have become so obsessed with characters on modern cable shows, but as Adam Davidson demonstrates in a December 2012 New York Times “It’s the Economy” column, the answer has more to do with business models than any quirk of culture. When there were only three major networks, programming success depended on producing a great number of shows that were just incrementally better than what was on the two other networks, which inevitably led to the creation of a vast wasteland of expensively bland mediocrity.
But once cable blew up the TV dial, giving viewers hundreds of channels to choose from, programmers had to shift their strategy. Now, it wasn’t enough to be just a little better than the competition; now, your shows had to be a lot better. You didn’t have to come up with a huge number of great shows, just one or two at a time would do, but they had to be so good that viewers would become obsessed with the characters and story lines to the point that they would shun cable providers that didn’t carry the channels where those shows appeared.
In other words, out of the morass of network TV, the very technology that ended Broadway theater’s Golden Age, came a sort of small-screen Broadway in which a few big talents – David Simon of The Wire, Lena Dunham of Girls, Vince Gilligan of Breaking Bad, and so on – have been given wide artistic latitude to create characters and stories audiences will care about. Because cable providers often operate as near-monopolies, the average cable bill has doubled in the past decade, and viewers pay close to $90 billion a year for cable service. That is a huge pot of money, and for many cable companies nearly half of their revenue is pure profit, so there is an enormous incentive to get the formula right.
But as Davidson points out in his Times column, this fragile model is already fraying at the seams. So far at least, cable subscribers aren’t canceling in large numbers, but as piracy becomes more pervasive, fewer younger people are signing up for cable in the first place. “When people in their 20s move out of their parents’ house or dorm room, they are less likely to get into the habit of paying for cable,” he writes. “If they get addicted to Breaking Bad, they’ll often download it free through file-sharing services like Bit Torrent or wait for it to come out on iTunes.” To make up for lost revenue, cable providers have to jack up rates, which drives more new viewers away, setting up a vicious spiral that, according to one industry expert Davidson spoke to, could cause the entire edifice to collapse as early as 2016.
What comes after that? The short answer is nobody knows. It could get seriously messy there for a while, leading millions of Breaking Bad and Mad Men obsessives to bore their children with talk of the Golden Age of Cable. But if this history teaches us anything, it is that there is always going to be a sizeable audience that cares about quality drama enough to pay real money for it. After all, in the 1940s, Broadway’s principal competition was local amateur productions and guys on their front porches telling funny stories – a sort of analog version of today’s BitTorrent downloads and YouTube cat videos. My grandfather, who told some pretty funny stories himself, was willing to plunk down serious money to take his family to New York for a few good meals and a chance to see the best writers and performers of his age. I have no idea what entertainment technology will look like when my future grandchildren begin to hunger for something more edifying than a quick joke or a funny story, but my bet is they will be able to find it if they are willing to pay for it.
“Small Magazines,” Ezra Pound’s 1931 appreciation of literary magazines, contains a confident proclamation: “the history of contemporary letters has, to a very manifest extent, been written in such magazines.” Commercial publications “have been content and are still more than content to take derivative products ten or twenty years after the germ has appeared in the free magazines.” Pound bemoans that larger publications are unable to “deal in experiment.” Instead, these commercial magazines poach from “periodicals of small circulation,” those “cheaply produced” in the same way a “penniless inventor produces in his barn or his attic.” Thus was created a romantic refrain: modern American writing has its foundation in literary magazines.
Only one of Pound’s favorite magazines still publishes: Poetry. It might be difficult to call Harriet Monroe’s concern a “little magazine”: in 2002, philanthropist Ruth Lilly gave $100 million to the Modern Poetry Association, the publisher of Poetry. That organization has since become the Poetry Foundation, and, according to The New York Times, Lilly’s gift is “now estimated to be worth $200 million.” The gift has led to an excellent website, interdisciplinary events and readings, television and radio promotion of poetry, and educational outreach programs. But how many readers outside the traditional organs of American literature — aspiring and published poets, students in secondary classrooms and on college campuses, and critics — know of, or read, Poetry?
That might not be a fair question to ask. Literary magazines, by form and function, might require narrow focus. Narrow does not mean niche. Literary magazines have consistently enhanced and reflected larger literary trends without being as noticeable as those wider trends. Experimental publications helped spread Modernist writing and thought. As Travis Kurowski writes in the introduction to Paper Dreams, his comprehensive anthology of literary magazine history and culture, Modernist literary magazines “gave people a tie-in to an imagined community of readers.” Kurowski does not use “imagined” in the pejorative sense. Rather, he speculates that “literary magazines, due to their subject matter and even the smallness of their production, create a somehow more significant and longer lasting community than larger circulation magazines and newspapers.” Note Kurowski’s valorization of community over circulation. I might add a further qualification: literary magazines are uniquely important in observing the ripples, fragments, and failures within trends. They give readers and researchers the ability to see the flash beyond the snapshot, and in doing so, they document moments in American literary history with more nuance than is gained by cataloging only single-author books. Take Granta 8 (Summer 1983): the “Dirty Realism” issue. I once argued at Luna Park that it was the best single issue of a literary magazine ever published. The process was a thankless exercise, but I was attempting to make the point that even an individual issue of a literary magazine offers a complex cultural sample. Editor Bill Buford explains his collection of a strand of American writing marked by concise prose, destructive relationships, and a particular pessimism. That single issue contained writing by Raymond Carver, Jayne Anne Phillips, Richard Ford, Frederick Barthelme, Tobias Wolff, Angela Carter, Carolyn Forché, Bobbie Ann Mason, and Elizabeth Tallent. Not a bad snapshot and flash.
But I’m writing these words as a lover of literary magazines, an affection that was instilled in me at Susquehanna University. The Blough-Weis Library subscribed to Poetry and The Missouri Review, but also to gems like Beloit Poetry Journal, where I finally read a poem — “Trout Are Moving” by Harry Humes — that connected me to the genre. Had I held a collection by Humes, my 19-year-old mind might have lost interest after a few of his Pennsylvania-tinged, domestic elegies. Instead, I bounded to work by Ander Monson and Albert Goldbarth. Literary magazines made writing manageable and approachable. Our workshop professors used those publications as part of the curriculum, and not because they thought we could publish there. At least not yet. The point was that an awareness of contemporary publishing is necessary, particularly for undergraduates who think the only words that matter are the ones that come from their own pens.
Now when I receive a review copy of a short story collection or purchase a new book of poetry, I immediately turn to the acknowledgments page. And this might be a personal quirk, but I try to find the original issues in which the pieces appeared, and read the work there, tucked between writers both established and obscure. I loved Jamie Quatro’s debut, I Want to Show You More, and yet it felt more personal to read “Demolition” in The Kenyon Review. Literary magazines are the legend to the map of American letters. Yet I worry that this appreciation reveals me for who I am: a writer who submits to these magazines, who uses them in the classroom. This cycle does speak to the insular world of small magazine publishing.
Does anybody outside of our circle care? What is the wider cultural influence of literary magazines? To be sure, I am not certain there needs to be one. An insular economic system will likely fail, as evidenced by the graveyards of defunct magazines, but that does not mean an insular artistic system is inherently bad. Nor should we assume more literary magazines fail than niche publications or commercial releases. Here’s a better question: if, for those of us in the circle — writers, readers, editors, teachers, and professors — literary magazines are a mark of credibility and authenticity, what are they to those on the outside? Do these publications carry any particular signification or importance within popular culture?
It would be incorrect to simplify popular culture to film and television, but it is a useful place to begin this consideration. I recently wondered if and when literary magazines have been referenced or included in these visual media. I began with two examples that stuck in my mind. In the “Christmas Party” episode of The Office, Mindy Kaling’s character, Kelly Kapoor, chooses a “book of short stories” during Michael Scott’s ill-advised game of Yankee Swap. At least to my eyes, that book is an issue of The Paris Review. A more direct literary magazine reference comes in the 2007 film Juno, when the titular character says jocks really want girls who “play the cello and read McSweeney’s and want to be children’s librarians when they grow up.” The reference was probably lost on many, but on a small but aware crowd, it did its job. Even if that job was simplification.
I couldn’t think of any more examples, so I went to that pop culture land of crowdsourcing, Facebook, for help. My literary friends delivered. What follows is a sampling of some of the most interesting occurrences, with original contributor citation in parentheses, plus my own investigations.
1. In Cheers, Diane receives a form rejection from the West Coast magazine ZYZZYVA. Sam writes a poem that is later published in the magazine (Martin Ott). This appears in the “Everyone Imitates Art” episode, which originally aired on December 4, 1986, during the show’s fifth season. Diane enters the bar, overly excited about a letter from ZYZZYVA. Sam asks: “Who’s ZYZZYVA?” Diane responds: it’s “not a who. It’s a new literary review. Dedicated to publishing the prose and the poetry that’s right on the cutting edge.” The magazine was founded in 1985 by Howard Junker. Diane has submitted a poem, and received an extremely swift two-week response. Frasier Crane takes a skeptical look at the letter, and concludes that it is a form rejection. Diane disagrees, saying that it is a “soon and inevitably to be accepted later,” and reads aloud that “your work is not entirely without promise.” She proudly says they are “almost begging for another submission.” Sam agrees that the response is a form letter, and boasts that he could submit a poem that would receive the same type of response. The episode breaks, and when it returns, Diane asks about Sam’s poem. He points to a magazine on the bar, and tells her to open to page 37 and read “Nocturne,” by Sam Malone. She drops the issue and screeches. Diane thinks Sam has plagiarized the poem. She vaguely recognizes the overwritten lines. Somehow, in the span of three weeks, ZYZZYVA has received Sam’s submission, responded, and published it in an issue. Writers everywhere roll their eyes. Frasier tries to console Diane: “this literary magazine’s circulation must be 600.” Diane delivers the ultimate literary magazine rejection rant: “The original 600 readers drop their copies in buses and taxicabs and doctors’ offices and another 600 people pick them up and take them to the airport where they go all over the country. Then they get taken on international flights: Tierra del Fuego, Sierra Leone. All the remotest parts of the world. Soon, I defy you to find a house, a hut, an igloo, or a wickiup that doesn’t have a copy on the coffee table. Then, then, everyone in the world, every living thing will be laughing at me because he got published and I did not!” More sting arrives later, when Woody sends in a poem of his own and receives the same form rejection as Diane. Dejected, Diane vents to Sam, who has created this mess. Sam finally admits that he copied the poem from Diane’s own love letters to him. She considers herself published and validated. In the words of Howard Junker himself, Onward!
2. The Paris Review is mentioned in the 2000 film, Wonder Boys (Neil Serven). Grady, a struggling novelist, talks about one of his students: “Hannah’s had two stories published in The Paris Review. You’d best dust off the ‘A’ material for her.” With no further explanation, the reference is an accepted barometer of literary quality. Yet for a magazine quite aware of its social status, the review’s cultural capital seems localized to the literary community. We might be stretching the parameters a bit too thin here, but co-founder George Plimpton appeared in the “I’m Spelling as Fast as I Can” episode of The Simpsons (Aaron Gilbreath).
3. We could spend years arguing whether The New Yorker should be considered a literary magazine proper, but it does regularly publish fiction and poetry, so it merits mention. The magazine appears in the film 42nd Street (1933): Dorothy Brock, played by Bebe Daniels, holds an issue with Eustace Tilley on the cover (Win Bassett). In The Squid and the Whale (2005), Laura Linney’s character, Joan, is published in an unnamed literary magazine, and later appears in The New Yorker (Neil Serven). That more prestigious publication is revealed in a scene at a restaurant. Bernard, Joan’s estranged husband, is surprised to learn that an excerpt from her forthcoming novel appears in the magazine. Another character, Sophie, says the story “was kind of sad, but really good.” Bernard changes the subject. Later, their son Frank’s inappropriate behavior at school prompts a meeting with the principal, who, at the end of the conversation, says that she read and enjoyed Joan’s story in The New Yorker: “it was quite moving.” The magazine also appears often in Adaptation (2002), with the identifying phrase “sprawling, New Yorker shit” (Alex Pruteanu). An early scene occurs at The New Yorker office, where writer Susan Orlean — author of The Orchid Thief, which main character Charlie Kaufman is attempting to adapt into a film — discusses going to Florida to write an essay for the magazine. Kaufman is having trouble due to the “sprawling” nature of the book, hence the magazine reference as literary code. Kaufman first uses the word “stuff”; later, The New Yorker style is “sprawling…shit.” The magazine, with work by Orlean inside, appears open on a restaurant table in the film. Later, Kaufman watches Orlean, seated alone, reading another magazine. In Kaufman’s voiceover: “Reads Vanity Fair. Funny detail: New Yorker writer reads Vanity Fair. Use!” And the magazine’s cartoons were lampooned in “The Cartoon” episode during the final season of Seinfeld (Tim Horvath). The New Yorker’s cartoon editor Bob Mankoff had some fun analyzing the episode here and here.
4. In Mad Men, the character Ken Cosgrove has a story published in The Atlantic Monthly (Brenda Shaughnessy). The publication occurs in “5G,” the fifth episode of the series. The story is titled “Tapping a Maple on a Cold Vermont Morning.” His contributor bio is as follows: “A graduate of Columbia University, Kenneth Cosgrove has lived in the New York area for most of his life. Working for the advertising firm of Sterling Cooper puts Mr. Cosgrove in a unique position to observe and study the trends that shape America today. This is his first story to appear in The Atlantic.” Pete Campbell, jealous, longs for his own fiction to appear in (you guessed it) The New Yorker, but is disappointed to learn that his piece only makes it into Boys’ Life magazine (James Chesbro). The Missouri Review’s managing editor Michael Nye has a nice reflection on this episode, and the writer archetype in film, here.
Can you add to the list in the comments?