“It was the hemlock that made Socrates great.” —Seneca
“Honorable purpose in life invites honorable purpose in death.” —David Buckel
On an early spring morning in 2018, when the stars were still out and Manhattan glowed in all of its wasteful wattage across the East River, a 60-year-old retired lawyer named David Buckel made his way past Grand Army Plaza and the Brooklyn Museum down to Prospect Park. In those hours before the dawn, Buckel dug a shallow circle into the black dirt, which investigators believed was meant to prevent the spread of the fire he was about to set, having understood that our collective future would have enough burning. The former attorney paused to send an email to The New York Times—it read: “Most humans on the planet now breathe air made unhealthy by fossil fuels, and many die early deaths as a result—my early death by fossil fuel reflects what we are doing to ourselves”—doused himself in gasoline, and lit a match. Buckel was pronounced dead by 6:30 a.m.
As a civil rights attorney, Buckel led a life of unqualified success—an early supporter of Lambda Legal, he’d fought doggedly for LGBTQ rights not just in New York, but in seemingly inhospitable places from Nebraska to Iowa. As a human being, he built a life defined by companionship and love, raising a daughter with his husband of 34 years alongside the girl’s biological mother and her wife. And as a community member, he committed himself to stewardship and responsibility—rather than using wasteful fossil fuels, he walked several miles every day to the Brooklyn Botanic Garden where, in retirement, he organized the center’s composting program. By all accounts Buckel’s life was centered on justice, both personal and ecological, which should go some way toward explaining his death on April 14, though the philosopher Costica Bradatan reminds us that wherever “self-immolators are going they are not asking anyone with them: their deaths are fierce, but remain exclusively their own.”
With some incongruity, I thought about Buckel’s sacrifice as I sat by my apartment complex’s pool on a hot New England summer day (they seem hotter and hotter), welcoming my 35th birthday by reading Bradatan’s poignant, provocative, and moving Dying for Ideas: The Dangerous Lives of the Philosophers. A philosopher at Texas Tech University, as well as an honorary research professor at Australia’s University of Queensland, Bradatan has, in his role as religion editor for the Los Angeles Review of Books, consistently proven the utility of philosophical and theological ideas when it comes to the art of living. In Dying for Ideas, Bradatan examines those who sacrificed their lives for ideas through a “purely secular martyrdom” (though one of his subjects was a saint): the deaths of thinkers for whom the nature and impact of their executions “confers a discrete sublimity upon these philosophers’ gestures. It is great to die for God, but it may be greater to die for no God.” Like the women and men Bradatan writes of, from Thomas More awaiting his beheading in the Tower of London’s turret to the anti-Soviet dissident Jan Patočka tortured in a Prague investigation chamber, Buckel had lived his life, and his death, for an idea. In Buckel’s death, Bradatan might argue, we see not just suicide, but an argument about life, where what “we feel toward the person performing [self-immolation]…is in fact a complex mix of fear and respect, of fascination and repulsion, attraction and revulsion, all at once.”
What Dying for Ideas clarifies is that we must not look away from the pyre; we must consider these deaths that often turn out to be the “threshold where history ends and mythology begins,” as Bradatan writes. Buckel attempted a sacrifice similar to those of the Buddhist nuns and monks who immolated themselves during the Vietnam War, people of incomparable bravery and detachment whom Buckel had long admired. In short, the attorney wished to turn his body into a candle “helping others to find the right direction,” as Bradatan might have put it. As we perhaps approach our own collective martyrdom, measuring the rising tide and warmth alike, there is a pressing wisdom that we must grapple with, what the medievals called the ars moriendi, the art of the good death. Can we notice the warmth of Buckel’s auto-da-fé amidst the escalating temperatures, or do such flames recede into the apocalyptic heat?
Morbid thoughts to have in the sunniness of a Boston June, listening to a “’90s Hits” Spotify playlist. Yet in the hazy days of the Anthropocene, it’s hard not to feel the uncanny contrast between Buckel’s life and death, lived with authenticity, and the rest of us waiting for the ice caps to melt while listening to Radiohead and Soundgarden on our smartphones. While taking a break from Bradatan’s astute readings of martyrs like Socrates, Hypatia, and Giordano Bruno, who died not for God or country but rather for philosophical ideas, I paused to scroll through my Twitter feed, only to come across a Vice article titled “New Report Suggests ‘High Likelihood of Human Civilization Coming to an End’ Starting in 2050.” That’s the year that I’ll be (theoretically) “celebrating” my 66th birthday. If I failed to pay attention to why Buckel died when it happened, it behooves me to take notice now.
Dying for Ideas came out three years before Buckel’s suicide, but the glow of his burning body couldn’t help but give off more light by which to read Bradatan’s book. He writes of a death like Buckel’s, that it “does not always mean the negation of life—sometimes it has the paradoxical capacity of enhancing it, of intensifying it to the point of, yes, breathing new life into life.” As Socrates died for the good, as Hypatia died for reason, as More died for faith, and as Bruno died for magic, so Buckel died for the Earth. When scheduling my reviews and writing projects for the summer, I hadn’t necessarily intended to be reading Dying for Ideas on my 35th birthday, but there’s something appropriate in having Bradatan as my Virgil for the date that Dante described in The Divine Comedy as being “Midway upon the journey of our life/[when] I found myself within a dark forest.” Now the dark wood has been paved over, and all of its pollinating insects are going extinct, so for a member of the millennial generation facing the demise of our entire ecosystem and the possible extinction of humanity, both Buckel’s death and Bradatan’s invocation of those who faced their own demise with magnanimity and wisdom hang humid in the air.
Dying for Ideas embodies the French Renaissance essayist Michel de Montaigne’s conviction that “To philosophize is to learn to die.” Bradatan makes the compelling case that to evaluate philosophers in their entirety, you must ascertain not just the rationality of their arguments, but whether their lives were consistent with their claims—and, for a few singular philosophers, whether their deaths also make their arguments. For philosophers like Bradatan, philosophy is as much a method as a discipline; not merely the realm of the logical syllogism, but something that prepares us for the “perilous journey from the agora to the scaffold or the pyre.” If you’ve ever taken an undergraduate philosophy course, Montaigne’s aphorism might not strike you as accurate, for the Anglo-American academic discipline goes a long way toward perennially proving Henry David Thoreau’s crack that there are a lot of philosophy professors, but precious few philosophers.
In the American and British academy, the vast majority of university departments tend to orient themselves towards what’s called analytical philosophy, a tradition that, rather than engaging those eternal philosophical questions concerning the examined life, is content with precise, logical anal retention; pleased to enumerate all of the definitions of the word “is” rather than to ask how it is that we should live. Fortunately, Bradatan is a disciple of that other contemporary wing, the continental philosophy of 20th-century France and Germany that is still capable of approaching the discipline as “an act of living…[that] often boils down to…learning how to face death—an art of dying,” as Bradatan puts it. He brings to bear not the rigid, rigorous, logical arguments of analytical philosophers like A.J. Ayer, Saul Kripke, and Willard van Orman Quine—men who for all their brilliance and importance did little to illuminate the philosophical issue of how we are to live—in favor of the impressionistic and poetic truths embodied by thinkers like Simone Weil, Martin Heidegger, and most of all Pierre Hadot.
The last philosopher has never quite achieved the reputation he deserves in the Anglophone world, his philosophy concerning “care for the self” filtered instead through more popular disciples, such as the historian Michel Foucault. In Hadot’s estimation, the pre-Socratic philosophers of ancient Greece approached their thought not as a means of abstract contemplation, or as a way of producing one more publication for the tenure review file, but rather as a solemn, rigorous, unsparingly honest examination and method that served to transform one’s very life. As Hadot wrote in What Is Ancient Philosophy?, there is “No discourse worthy of being called philosophical, that is separated from the philosophical life.” For philosophers in Hadot’s tradition, including Bradatan, to approach philosophy as if it were simply the academic discipline that investigates the history of ideas, or with even more futility as a type of logical game, is to abdicate a solemn responsibility. To reduce philosophy to nothing more than another university department with journals and conferences is to reject it as something that can change your life.
Thus do we honor Socrates, that ugly little gadfly who in acknowledging his ignorance was paradoxically the wisest man in Athens. Socrates is Bradatan’s first martyr-philosopher, the most celebrated of men who died for an idea. Such is the power of the event, the forced drinking of hemlock at the hands of an Athenian state that claimed Socrates had corrupted the youth of the city and preached against her gods. He is not an uncomplicated figure, rendered everywhere from the detached, calm, almost absurdly professorial man of clean lines and smooth surfaces in French painter Jacques-Louis David’s The Death of Socrates to radical journalist I.F. Stone’s The Trial of Socrates, with its revisionist interpretation of the anti-democratic teacher’s death as being in some sense warranted.
What Bradatan does differently is that he reads the death of Socrates itself as an argument to be interpreted—treating the moment of extinction as one would a proof. The historical Socrates is himself mute; though he appears in a few scenes written by the historian Xenophon, and though he is the beguiling central character in the dialogues of his student Plato, no actual writing by the founder of Western philosophy himself survives. Considering Plato’s treatment of Socrates in dialogues like the Republic, the Symposium, the Crito, and the Apology, Bradatan writes that the teacher’s “voice is authoritative, compelling, commanding, almost annoyingly so. Yet his silences—whenever they occur…these silences can be unbearable. This is Socrates at his most uncanny.” With all of the words of Socrates that we have mediated through the mouth of another, Bradatan returns instead to his death as the ultimate silence, a masterpiece to be read as rigorously as any dialogue of Plato’s.
In Socrates’s willing execution, Bradatan sees a consistency of purpose and a representation of how the philosopher argued we should live our lives. Had Socrates capitulated to the court, had he admitted to wrongdoing and served a lighter sentence, he would have invalidated his own teaching. Such consistency of purpose is something that united the martyrs Bradatan examines, no matter how different their actual thought may have been. Bradatan writes that “With the spectacle of their dying bodies alone they had to express whatever they could not communicate through all their rhetorical mastery…death was the most effective means of persuasion…such a death is a philosophical work in its own right—sometimes a masterpiece.”
When Bruno was hoisted aloft the burning green wood of Rome’s Campo de’ Fiori, he abjured the penitential crucifix presented by the tonsured Dominicans, offering back instead a blasphemous stream of invective in his thick Neapolitan accent, the proud heretic to the very end. And from his execution, though Bruno was himself an advocate for magic, occultism, and hermeticism, the Italian Enlightenment would find inspiration against the forces of Inquisition and superstition that had condemned him, his death becoming a far more potent argument than anything he ever actually wrote. More than a millennium before, Hypatia had made a very different argument after Christian zealots under the command of Alexandria’s bishop Cyril grabbed the Neo-Platonist mystic and mathematician, dragged her through the filthy streets of that cosmopolitan city, stripped her of her clothes, and then cut the very flesh from her body using either sharpened pottery shards or oyster shells, depending on the account. Supposedly Hypatia uttered no words or cries. Her silence carried its own lessons: she was a woman in a culture that denied half the population its bodily autonomy, a rational mystic who thought the material world fallen and an illusion. Though none of Hypatia’s writings survive, she has often been figured as the first feminist, her silent protest even as she was murdered making a potent argument for obstinate quiet. By contrast, More approached the infinite not with silence, but with wry British humor, the ever ironic and mutable founder of Utopia ascending the scaffold and telling his executioner, “See me safe up, as for my coming down, I can shift for myself.”
All of us will rendezvous with the eternal one day, and most philosophy professors will die in their beds. As Bradatan confesses, “No matter how hard we fight, how graceful our dance, how bold our stance, the end is always the same: total annihilation.” But these philosophers were those for whom death was an apotheosis, albeit accomplished in different ways: from Socrates’s lecturing to Hypatia’s silence, from Bruno’s cursing to More’s joking, all in their varied ways confirming Bradatan’s contention that “the point is not to avoid death but to live without fear and humiliation before it comes.” Were that the only critical intervention of Dying for Ideas, it’d be powerful enough, but Bradatan makes his most important (and audacious) contribution concerning the societal importance of such sacrifice.
Drawing on the work of the French anthropologist René Girard, Bradatan applies what’s called the “scapegoat mechanism,” Girard’s contention that all of human civilization is propelled by the occasional sacrifice of some kind of innocent agent onto whom all of the sins of the culture are loaded. This is most obvious in Christianity, as Girard writes of the “non-violent God who willingly becomes a victim in order to free us from our violence” in Evolution and Conversion: Dialogues on the Origins of Culture. As Christ was supposed to be a sacrifice who died for the world, so Bradatan argues that these philosophers’ deaths functioned as sacrifices for the truth, the radical honesty that Foucault calls parrhesia. The philosopher possesses a radical freedom and powerful powerlessness that is perhaps only matched by that of the jester; she has the agency to confront and compel the truth where others are mired in lies and delusions, and in her sacrifice the philosopher dies so that we may light ourselves out of our ignorance. All of Bradatan’s examples are partisans of parrhesia, and all took leave of this world at moments of profound social and cultural dislocation and crisis. Socrates was sacrificed by a democratic state extricating itself from years of tyranny, Hypatia skinned by a crowd that watched the eclipse of paganism and the ascendancy of Christianity, Bruno immolated upon pyres set by men whose religious certainties were challenged by the scientific revolution, More decapitated as Christendom was fractured by the Reformation, and Patočka tortured by a communist state in which the people no longer had any faith. Bradatan explains that scapegoats are needed in places where “An atmosphere of doom settles in, society is divided, the crisis persists. Everything seems to move in circles, the usual remedies don’t seem to work. Something radical is needed.” What we see in Socrates’s calm disputation or More’s scaffold stand-up act is not simply death, but the sacrifice of parrhesia so that the rest of us may know truth, an example of thinkers whom Bradatan describes as “mystics [who] are God’s traffickers” that “routinely smuggle bits of eternity into our corrupted world.”
So, I return to my phone, skimming accounts of collapsing ice shelves and flooding riverbanks, reading articles about how humanity may have less than a few decades left, pushed to the brink by the irrational, insatiable hungers of our economic system and its supporting faith. As analysis, Bradatan offers us a claim about how some brave souls die for ideas, how their sacrifices are meant to illuminate those malignancies that threaten a society in collapse. Women and men like Bruno and Hypatia are set apart from the realm of the rest of us; they are, in the original sense of the word, sacred. According to Bradatan, when the sacred “erupts into our lives…its presence is unmistakable, its effects lasting, its memory haunting.” The martyred, these sacred women and men, are separate from us regular folk who are more content to scroll through Twitter by the pool than to die for an idea. If ever a people needed a holy sacrifice uttering parrhesia on the scaffold, a sacred scapegoat shouting the truth from a pyre, it is our own. In the social media frenzy of orthographically challenged White House tweets and the hermetic reading of Mueller investigation tea leaves, the media moved on from Buckel’s martyrdom in Prospect Park with disturbing promptness—just another horrific story receding deep into our news feeds, his radical honesty drowned out in the static of so much otherwise meaningless information permeating our apocalyptic age. Perhaps it’s time to listen to him, before it’s too late.
What shape does hell take? In Norse mythology she—the goddess Hel—is a girl with a face half beautiful, half rotted away. Brave warriors had a place in Valhalla, whereas Hel’s domain is for those who did not die honorably in battle. Greek mythology does not allow for such clear-cut distinctions. Death sends you to Hades: you are down, unless by some act of godly intervention your fragments are thrown skywards to settle as a constellation, not quite a god nor solely a symbol.
But what if hell could be contained within a frame—constructed on an axis of text and image? That question of containment, of framing and fragmentation shapes the genre-defying form of Orpheus & Eurydice: A Graphic-Poetic Exploration (OE), by the artist Tom de Freston and his partner the poet Kiran Millwood Hargrave. Published as part of Bloomsbury’s Beyond Criticism series, the book is both a study of mythic narrative structures and an act of mythmaking in its own right: de Freston’s images follow Orpheus down to Hades while Millwood Hargrave’s words give voice and agency to Eurydice. Essays by academics and cultural critics are interwoven with the narrative, and just as there is no such thing as a self-contained myth disconnected from a wider network of stories, so the book is just one part of an ongoing collaborative project. OE is an exercise in world-building, using film, performances, and exhibitions to test the dimensions of the myth: the myth of the man, the musician, who entered the underworld to rescue his wife from the clutches of death.
I’m stuck on the cliché, “clutches of death.” Transmitting a myth quickly will do that to language—make it portable, acritical, squeezed into conventional structures. Who’s to say this is a myth of a man, his song, his rescue mission? By letting the story divide and multiply into frames and fragments, the book permits the myth its slipperiness; indeed, in the opening pages of the book, the narrative seems to have slipped from its template entirely.
We see a man, a painter, willing his wife back from the dead as he daubs her image across a triptych. We will come to know him as Orpheus, a self-indulgent slob dressed only in a pair of grubby white briefs. The woman on the canvas is Eurydice; she wears the dress she died in. And then she, too, slips—her body falling until she is no longer contained by the painting, which, wiped of its image, becomes a threshold to the underworld. Orpheus enters because he has read Eurydice’s poetry, which tells of a man who looks back and loses his wife forever. He sets out to remake the myth and rewrite his wife, rescuing himself in the process—and yet it seems the story is doomed from the start. A misplaced minotaur is appointed as Orpheus’s guide, a botched version of Dante’s Virgil, falling into frame in a manner reminiscent of a PowerPoint presentation. Together they’ll follow the thread down to catch a wife, a wife in free-fall through a grid of graphics.
De Freston’s images are loud: there will be scenes of screaming beasts, crashing canvases, bodies bound in kaleidoscopic contortions. Yes, Orpheus can sing—one wailing o which extends wordlessly across several spreads—but he is unable to listen, unable to exit his self-centered orbit. His story is told in soundless freeze-frame; it is Eurydice who speaks, who utters her own images of “welling mud,” “parcelled buds,” “tongue through teeth.” Handwritten on notepaper in a sotto voce script, Millwood Hargrave’s poems are placed unobtrusively between pages—and yet they are less like pressed flowers than gaping mouths, blooming wounds. Her language is the traumatic meeting between body and spirit, the temporal and eternal, lust and loss, a language that voices Eurydice’s ambivalence as Orpheus stumbles in the dark towards her. She knows that “an e is not just a broken o,” and when o aligns with e she will not come quietly.
This is a radical retelling, but its radicalism is not a matter of “reinterpretation.” It is true that we are inclined to read Orpheus more sympathetically—his role as lover and musician is enough to prove his virtue, and his actions appear to meet the criteria for the archetypal tragic hero: he risks all in an act of superhuman bravery, and loses the one he loves in a moment of human fallibility. However, for a story to be reinterpreted it must first be fixed, and it is the essence of myth to be shifting and contradictory. To set out to create a new version of a myth would be to misunderstand the nature of the medium—reinterpretation is inherent in the telling itself.
In Plato’s Symposium, for instance, Orpheus is said to be a coward who, rather than resolving to die for love, chose to save his skin and enter Hades alive. Indeed, Orpheus’s eventual end—torn limb from limb by the frenzied female followers of Dionysus—does seem ill-suited to a hero. It is worth noting that when his fragments were eventually gathered by the muses, it was his lyre, not his body, that was raised to the status of a constellation, and it is this ambiguity between heroism and ignominy, pure art and bodily abjection, which has made Orpheus such a fertile subject for writers and artists.
The preface to OE places the book as one part of a mythic evolution, referencing Rainer Maria Rilke’s Sonnets to Orpheus (1922), Jean Cocteau’s Orphée (1950), Anaïs Mitchell’s album Hadestown (2010) and David Almond’s young adult novel, A Song for Ella Grey (2015). The writer Ann Wroe anatomized Orpheus’s shape-shifting form in her award-winning “biography” Orpheus: The Song of Life (2012), and has contributed an essay for OE in which she turns her attention to Eurydice. Looking back to the original meaning of Eurydice’s name (“wisdom” or “wide ruling”), Wroe asks not what OE makes new, but what it retrieves. “This meaning of Eurydice, dark germinating wisdom, has long been lost,” she writes. “But we see glimpses of it here.”
And so, to retell is never truly to make new—we are bound to an eternal return, a recurring backwards glance. The radicalism of OE, I would argue, is a result of placing those remembrances of the darker parts of the myth within a structure that retains volatility, that stays unstable. The reader is thereby granted not only an alternative reading of the myth but an alternative means of constructing narrative and making sense of what we see. In short, an alternative approach to reading.
Existing in the shadowy space between art and literature, text and image, graphic narratives are drawn towards those dark corners, to the parts of a story usually left unseen. There is something inherently subversive about the form, due partly to its detachment from genre, partly to the potential for dissonance between text and image. This dissonance lends itself to humor—I’m thinking of the cats that appear in Regina Doman and Sean Lam’s graphic biography Habemus Papam! Pope Benedict XVI (2012), and the phallic intrusions in Piero’s graphics for Introducing Roland Barthes (2006). Even when posing as “illustration,” as demonstrated in Maira Kalman’s graphics for The Elements of Style, Illustrated (2005), the temptation to “read into” the text can prove too hard to resist.
Whether or not we refer to graphic narratives as comics, the form has always contained elements of darkness. In their “wordless novels” of the early 20th century, artists Frans Masereel and Lynd Ward made darkness both a matter of style and content; their heavily inked woodblock prints do not shy away from scenes we might rather not see, whether a public lynching, police brutality, or a gigantic man pissing on a city. Graphic narratives are unique in their ability to combine dark humor and unflinching representations of trauma, and yet it took until 1992, when Art Spiegelman’s Maus (1986) won the Pulitzer Prize, for this quality to be taken seriously. Since then, the form has been appreciated as a powerful means of addressing political upheaval and human suffering: Joe Sacco’s Palestine (1993, 1996) paved the way for the practice of graphic journalism, and important recent publications have included Threads: from the Refugee Crisis (2017) by Kate Evans and Rolling Blackouts: Dispatches from Turkey, Syria, and Iraq (2016) by Sarah Glidden.
In his reports of his experiences in Bosnia and the Middle East, Sacco does not pose as an authority or an all-seeing eye. Instead, he enters his narratives as a character, a diminutive nerd in blank goggle glasses. Likewise, the graphic narrative’s style of “truth-telling” is less about revelation than disorientation: it makes darkness visible and disrupts conventional patterns of interpretation. As the narrative progresses across the page, time is represented spatially—the trouble is that space is liable to become unstable. In OE, the underworld is an atemporal zone with no fixed spatial footholds. The grid offers no protection against falling out of frame, and images transform—without warning—from line drawings, to digital renderings, to photographs—photographs that, with their deep chiaroscuro, appear to take on the quality of sculpture.
That restless attitude to medium and representation is a symptom of the form’s entrenched self-referentiality; whether or not a graphic narrative is evidently “experimental,” it is always a comment on the way information is communicated and consumed. As readers fill in the gaps between frames and reconcile text and image, they, we, become complicit in the manufacture of meaning; the extent to which we are made aware of this process depends on the degree of disruption to the narrative flow. We are equally complicit when sequentially connecting the words of a line of text, or organizing the simultaneously presented elements of an image. However, by combining these two processes, graphic narratives make the act of reading manifest. They reveal it on the surface of the page.
In Nick Sousanis’s Unflattening (2015), the first doctoral dissertation to be produced “entirely in comic book form,” that self-referentiality reaches its apotheosis. As the end product of his PhD at Teachers College, the book is a radical assault on academic conventions, seeking to actively deconstruct “boxed-in” thinking with an argument that leads the reader down, diagonally, across the gutter of the page and into empty space. By allowing “the visual to provide expression where words fail,” Sousanis argues, we free ourselves from linear thought processes, creating a networked, “multidimensional” mode of thinking by combining simultaneous and sequential patterns of interpretation. “Lacking access to ‘as it is’,” he writes, in a text box surrounded by crowds of eyes, “we make do with ‘as it appears.’” Making do, in this case, is less about making the best of a bad situation than taking advantage of space between appearance and reality, and seeing what we can make it do—seeing what we will read between the lines.
It took time before graphic narratives were deemed worthy of critical attention. Now, Unflattening and OE prove that the form is a mode of critical enquiry in its own right; a recognition that, in turn, makes way for a more nuanced understanding of “creative criticism.” Such criticism does more than just aspire to artistry, throwing in a few metaphors or enacting its subject matter. Instead, it weans the reader off a reliance on the text, converting them from the role of receiver to that of critical thinker: someone who is aware of their own process of reading, whether of an image, a text, or the world around them. Good philosophy has always worked in this way, pushing beyond the literal meaning of the text to force the reader to address the question on their own terms. However, what might be achieved in philosophy through complex literary techniques—I’m thinking, for instance, of Søren Kierkegaard’s use of pseudonyms—comes naturally to graphic narratives. By definition, the form works beyond the level of the text, making us readers of our own act of reading. What we read into the reading, however, depends on the world we have entered. Whereas Unflattening is a utopian world of sense-making, synthesis, and empathy, OE is less interested in synthesis than the act of ripping.
In Plato’s Phaedo, to live is to be torn asunder by the opposing forces of time and eternity. OE places us on either side of the rupture, and tells us to look down. The rip, the split, the tear become an aesthetic, a subject, and a mode of thought: this is a world where making meaning is as much about rupture as it is about connectivity, where even the idealized act of “collaboration” is a type of compromise, a separation from oneself. After all, what sense is there to be made of a world where bodies break, are forgotten, exploited, and where love can tear you in two—three—four—or fragments too small to see? In this world, the “o,” the perfect whole and empty hole, is something to be feared: it is all Orpheus has left when he exits, in one piece, from the underworld, doomed to a life of singular solitude—that is, until he is torn into multiple pieces by the maenads.
Perhaps, to submit to the ripping is the most honest way to live: to enter the rupture and look death in the eye. What we see is a living hell. What we see is the world we live in.
A literary controversy (or what passes for controversy in our fairly tame circle) erupted last month when the Pulitzer Prize Board elected not to award a Pulitzer Prize for a work of fiction. It was the first time they had done so since 1977. The reason this can happen has to do with the way the Pulitzer Prize Board’s selection process works: three initial readers — this year they were novelist Michael Cunningham and critics Susan Larson and Maureen Corrigan — pore over several hundred books published in the previous year and settle on three finalists. They then turn this list over to the twenty members of the Board, eighteen of whom have voting power (who knows why the board includes two members who can’t vote), to pick one. A majority vote of the Board is required to select a winner. This year, a majority could not agree on one book.
The three books nominated were: Swamplandia!, the second book by my friend Karen Russell, a garrulous oddball romp that forays into satire and surrealism; Train Dreams, by Denis Johnson, a decorated luminary on his way to becoming an old guard figure as our village elders like Vonnegut and Updike are vacating their positions; and The Pale King, the unfinished last novel of David Foster Wallace, the most energizing, polarizing, and influential literary voice of our generation, his reputation as a genius now safely beatified by his suicide.
Apparently not one of these three books was liked enough by ten people on the Board, and so none was awarded the most prestigious literary prize in America this year. “There’s always going to be dissatisfaction, frustration,” said Sig Gissler, the administrator of the Pulitzer Prizes, regarding the indecision. “But [this year] the board deliberated in good faith to reach a decision — just no book got the majority vote.”
When the unusual and disappointing decision was announced, the reaction among the literati—writers, I suppose, and critics, and a vast rearguard of booksellers, bloggers, and book geeks on Twitter who have greatly expanded and diversified the circle of conversation in recent years — was like the moment in the courtroom drama when the unassuming girl on the witness stand calmly says something that suddenly changes everything, and the room bursts all at once into a frenzy of barely contained whispers. What’s more, the Pulitzer Prize Board was pissing on a parade that already felt drenched. Just a few days before, the hobbits of the publishing industry had been dismayed when the Justice Department sued three major publishers over e-book pricing, siding with Amazon like Saruman sided with Sauron, whose ominous red eye sweeps across the land from his Dark Tower in that northwestern Mordor, Seattle.
Ann Patchett, a novelist who last year published a book eligible for the prize (State of Wonder, a novel as magnificent as her other masterpiece, Bel Canto), and now also a bookseller, as she recently opened an independent bookstore in Nashville (so she’s got two horses in this race), decried the Pulitzer Board’s non-decision in a widely read op-ed piece in The New York Times. “If I feel disappointment as a writer and indignation as a reader, I manage to get all the way to rage as a bookseller,” she writes. She argues that the bestowal of a Pulitzer Prize has the power to get people excited about a book in particular and books in general, and under the shadow of our current zeitgeist, it’s a bad time to put down literature. “What I am sure of,” she writes, “is this: Most readers hearing the news will not assume it was a deadlock. They’ll just figure it was a bum year for fiction.”
Patchett’s piece is heartfelt and impassioned, and in some respects I agree with her — but what this controversy mostly did was remind me of how fundamentally I dislike the whole idea of literary prizes at all. I believe with all my soul that the concept of a board of twenty journalists — or people of any profession for that matter, it doesn’t really make a difference who they are — awarding a prize to a work of art, putting an official stamp of approval on one book and thus by implication saying the other books published that year aren’t as good, should strike us as misguided, shortsighted, and dumb.
I’m not saying this in a sour-grapes way, as a novelist who also wrote an eligible book that was published last year. If I were awarded the Pulitzer, it’s not like I’d fling it in their faces. Obviously I would kiss their feet with gratitude. I have benefited greatly from a literary prize, the Bard Fiction Prize, for which I am hugely grateful, and was nominated for a couple of others, the Dylan Thomas Prize in the UK and the Young Lions Fiction Prize here (which Karen Russell did win, by the way). These prizes can help writers out tremendously, especially early in their careers, giving them prestige, publicity, and money, and for that, they’re a good thing. But this isn’t about me — I’m making this argument not as a writer, but from a more abstract standpoint, from a big-picture view.
There was a shrewdly observant piece in n+1 that was rerun in Slate last year by Chad Harbach (whose roaringly hyped novel, The Art of Fielding, also came out last year) titled “MFA vs. NYC,” with a headline that pretty much spells it out: “America now has two distinct literary cultures. Which one will last?” I found the piece spot-on in its observation that our literary culture is sharply bifurcated into two contingents, one concentrated in the publishing mecca of New York City, and the other scattered far and wide across the land at various colleges and universities. Harbach is sharply critical of MFA programs, essentially making all the usual arguments against them and coming down on the side of NYC. After I got an MFA at the ur-program, the Iowa Writers’ Workshop, I moved to New York City, because I figured that’s where writers go, and I’ve lived there for the last few years. So I feel I’m in a commodious place from which to observe these two literary cultures, and I must say, though both the insular little MFA world and the New York City world of literary culture come with their own and different forms of attendant bullshit, there is far, far — and I mean far — more bullshit in NYC.
The difference between the two cultures becomes most profoundly evident in the books that get talked about at the bar over after-class or after-work drinks, respectively. There are many books I came to fall in love with that altered the course of my writing and changed what I thought could be done with literature that were recommendations from some of my friends in the MFA program. We would excitedly talk about what we had been reading lately, or great books we had read before — it was a conversation that was happening constantly and everywhere. A quick list of things I discovered in grad school from my friends’ recommendations that hugely affected me would include the philosophy of Antonin Artaud, the poetry of Paul Celan, Flann O’Brien’s At Swim-Two-Birds, J.P. Donleavy’s The Ginger Man, Joe Wenderoth’s Letters to Wendy’s, the stories of Mavis Gallant, Thomas Bernhard’s The Loser. And I dashed out that list in part to illustrate that we were not exactly shrieking and hyperventilating about the brand-new hot young rising stars of American fiction. (Well, some of us were, but I wasn’t one of them. And indeed in retrospect I notice how most of what I just listed were the recommendations of my poet friends, by necessity bound for academia, if they were lucky, and not for the networky New York literary scene.) Of course, we wanted lustily to be those hot young rising stars of American fiction soon. But when we talked about books, we would pull out the interesting and unusual jewels of our collections the way a music geek will pull out a rare LP in a plastic sleeve. We didn’t really give a shit about what book won what prize, or whether such-and-such really “deserved” to win the Pulitzer. Those are the kinds of gossipy, facile book conversations you have in New York, where everything is in some way tainted with commerce. Ours were the conversations of collectors, enthusiasts, purists, of people genuinely interested in the art itself, and I miss them.
All that is by way of suggesting that literary prizes are mainly manifestations and obsessions of that buzzy New York literati hive, which can become less of a hive and more of an echo chamber. It’s an observable phenomenon: a book comes out and, for whatever reason, gathers a tsunami of critical praise that perpetuates itself — for by the time the great wave makes landfall, some critics may either be hesitant to disagree with their peers, timorously fearing that they’re missing something everyone else can see (Naked Emperor syndrome), or what’s more probable, their perception has been primed by the power of suggestion, in the same way we are more likely to declare a fine wine magnifique if we know before tasting it that the bottle cost a hundred dollars than if it cost ten. This is why sometimes quite mediocre books wind up vaunted with widespread and lavish praise, and are sometimes even buoyed all the way up to the Pulitzer. But mediocre books getting overpraised does not bother me seriously, as I would rather let ten guilty men go free than hang one innocent — it irritates me far more when truly great books are ignored, which happens all the time.
A book has a vertical life and a horizontal one. The vertical life is what happens to it up to, during, and very soon after its publication; the horizontal life is what happens as the years and decades and even centuries slide by. As the Pulitzer is awarded to a work of fiction published in the previous year, all it can take stock of is a book’s vertical life, which sometimes can be deceiving. I’m sure this helps explain some of the more embarrassing retrospective head-slaps in the Pulitzer’s history, such as when, in 1930, it awarded the prize to Oliver La Farge’s Laughing Boy — a second-rate and now utterly forgotten book by an utterly forgotten writer — for the year in which both Hemingway’s A Farewell to Arms and Faulkner’s The Sound and the Fury were published. It’s perfectly natural they would make that mistake; back then, Faulkner and Hemingway were not yet Faulkner and Hemingway, they were just a couple of young writers who happened to be named Faulkner and Hemingway. The Pulitzer Board would try to atone for their sin years later by awarding them both (Faulkner twice) prizes for far lesser works after their reputations were already secure. The hype of the moment does not necessarily translate into lasting luminance. Just scroll down the list of all the past winners of the prize, and count how many you’ve ever heard of. Start at the bottom and move upward chronologically, and you’ll find the occurrence of familiar names increases as we move closer to the present. This is not because the Pulitzer Board has gradually been growing wiser — it’s because we’re living now, not a hundred years in the future. Then we’ll see. We can’t help it — we’re blinded by our own times; all prizes are like that, and that is why, as a measure of what is good and what is not in art, they are not exactly the trustworthiest oracles.
Also, a twenty-member prize board may be seducible by groupthink. I trust groupthink more when we’re talking about the long and justice-bending arc of history, not twenty journalists (eighteen of whom have voting power) talking about fiction, which is not even their forte. Come to think of it, why have we been letting a roomful of people who don’t necessarily know anything about literature tell us what the best book of fiction was last year, year after year? Why didn’t they just let Michael Cunningham, Maureen Corrigan, and Susan Larson pick it? I would be more interested to hear their opinions on the matter, anyway. (The 2012 board did include one — exactly one — fiction writer, past winner Junot Díaz. The only other person on the board I’d heard of was New York Times columnist Thomas L. Friedman, who I’m sure is a wonderful man but the dude writes like a clown honks a bicycle horn.)
Let me tell you a story about the problem with a group of people of about that number locked in a room trying to come to a decision about a work of art, fiction specifically. The stakes here are much smaller, but the phenomenon I believe is similar. For a short time I was a submissions reader for a fairly well-known, medium-cachet literary review. There were usually about ten to fifteen of us around the editorial meeting table. Each of us would read through the slush pile and select a few stories we liked, and then the boss would Xerox the top stories for everyone, we’d all go home and read them, pick out our favorites among those, and at the next meeting discuss which stories to put in the issue. After all our arguing and deliberation, usually the pieces that wound up being selected for publication were not the most interesting, or what I thought were the best of what we had to choose from. They were the pretty good pieces that we could all compromise on. Because a truly great and interesting work of art will have both its loving defenders and its outraged detractors, such a work is intrinsically less likely to be selected for honor by a large committee. That is the nature of good art: it provokes. I agree with Churchill that democracy is the worst form of government except all those others that have been tried from time to time, but not when it comes to lionizing certain novels over others. That I prefer to do on my own, thank you very much.
Historically, this obsession with prizes — and its grandchild, the micro-hysteria over those “best-of” lists that seasonally return to stipple the hills like dandelions — seems to be an impulse particularly characteristic of the twentieth century and beyond: the first Nobel Prize in Literature went in 1901 to the great Sully Prudhomme (what, you’ve never heard of him?), the first Pulitzer Prize for Fiction in 1918 to Ernest Poole for His Family, the first National Book Award in 1950 to Nelson Algren for The Man with the Golden Arm, the first National Book Critics Circle Award in 1975 to E.L. Doctorow for Ragtime, and the first PEN/Faulkner in 1981 to Walter Abish for his How German Is It. I’d say the only one of those that’s still well remembered today is E.L. Doctorow’s Ragtime (although I happen to have read Nelson Algren’s The Man with the Golden Arm — it’s pretty good).
However, there’s also an argument that this misguided impulse is not necessarily so much a modern one as an inherently human one (and we have plenty of those): at the ancient Greek festivals, prizes were given out to the best plays, just as they were for the more objectively measurable outcomes of athletic contests. And the same failing was in evidence even back then — the critics of the time not recognizing what history would discover greatness in: angered and confused by the way he broke the conventions of Greek drama, the judges snubbed Euripides.
The next-to-next-to-last time the Pulitzer Board chose not to award a prize at all was in 1974, when all three of the readers recommended Thomas Pynchon’s Gravity’s Rainbow, and every member of the Board categorically rejected it. Considering what a rambunctious, rebellious book it is, and considering the long life it has since enjoyed as both a cult classic and a classic, a necessary item on the bookshelf of every druggy collegiate pseudo-intellectual on his way or not to becoming an intellectual, fiercely hated by many and by many fiercely loved (and both parties have their points), it is so fitting that that book, of all books, would be bestowed this negative honor; if anything, it’s an enduring badge of coffee-shop cool, and it well deserves it. Of course Gravity’s Rainbow can’t win a Pulitzer. It would be like a punk band winning a Grammy.
Here’s a question. Imagine Satan were to appear in a sulfurous cloud as the host of some Faustian game show, on which the contestants, who are artists at inchoate and uncertain stages of their careers, are forced to confront interesting spiritual dilemmas. Old Scratch says to the Young Writer, I offer you a choice between two fates. In the first, he says — and this seductive vision appears in an orb of smoky light hovering above his outstretched claw — your books are met with blazing success. Every critic fawningly gushes over your work. You’re heralded as a genius. You’re interviewed on TV and on widely syndicated NPR programs; your phone won’t stop ringing with interview requests. Packed houses at every reading you give. The New York Times Best-Seller List. The money rolls in; you easily clear your outrageous advances. You win the National Book Award, you win the National Book Critics Circle Award, you win the PEN/Faulkner, you win the Orange Prize if you’re a woman, you win the Pulitzer. The movies based on your books hit the screens with famous actors and actresses playing your characters, and everyone says the books were so much better. This is your life. But! — and the vision vanishes — know this: after you die, after your life of literary celebrity, interest in your work will fade. None of the shadows you made will stick to the cave walls because, in the end, none of the cave-dwellers was moved to chalk its outline when it was there. Over time, the world will forget you. Or, behind door number two… The world, if it ever knew you, will forget you in your own lifetime, and you will die in obscurity, uncelebrated, unfulfilled, destitute, and bitter. But! — in the years following your death, your work will be rediscovered, and one of your books in particular will even become a classic that lives on for many generations and forever changes the landscape of our collective imagination. In other words, you’ll be Herman Melville.
Now, both of these are rare and lucky fates. If the variables were at all uncertain — if in the first case there was a chance your work would be remembered, and in the second there was a chance you’d remain forgotten — it would be a much harder decision. But I’d like to think that any artist who is truly interested in art would choose the second option in a heartbeat. I know I would, and I’m not too humble to say so. It’s the first option, not the second, that’s the Faustian bargain: heaven on earth, hell for dessert.
The reason a real artist would choose the second option over the first has nothing to do with any inner nobility — far from it; in fact each fantasy springs from the same megalomaniacal, insatiable hunger. (It’s no coincidence that Hitler was a failed painter and Franco a failed poet. The heart of an artist beats wild and greedy in the chest of every despot. It’s the very same source of energy that produces both.) It is because, while worldly recognition may be an object of lust, immortality is an object of love. I once read these sentences in Plato’s Symposium, and was so amazed by their truth that I’ve never forgotten them: “the soul has its offspring as well as the body. Laws, inventions and noble deeds, which spring from love of fame, have for their motive the same passion for immortality. The lover seeks a beautiful soul in order to generate therein offspring which shall live for ever.”
This is why, for any artist, dying in obscurity is among the worst nightmares. If I had a time machine, I would visit Herman Melville at his deathbed and tell him the good news from the future, so he might go into that good night with some sense of satisfaction. But on second thought, why wait until the very end? I’d go further back and tell him sooner, give him something to help him through those nineteen years he spent growing old as a customs inspector, his public literary career long dead in the water after the critics of his day shouted him out of town as a crackpot, though he was still returning home every night to quietly scribble out poetry and a novella that would be published many years posthumously as Billy Budd. On third thought, seeing as he was in fact working on Billy Budd, and wasn’t so frustrated he’d completely given up writing, maybe somebody already told him. On fourth thought, maybe he didn’t need anyone to tell him, because he knew he was a genius and held out hope the world might one day see it.
All in all, I would urge readers to not pay too much attention to big prestigious literary prizes. In a perfect world, I would wish for every writer a magical bag of money that is never empty (to level the financial question) and simply do away with them all: no Pulitzer Prize for Fiction, no National Book Award, no PEN/Faulkner, no Man Booker, no Nobel Prize in Literature. Let writers write, let critics have their say, let readers read, let time decide.
It doesn’t really matter, though. Even without the magic moneybags, and even with the swells of cacophonic hype surrounding all the literary prizes and all the literary darlings of any given moment, history will plod on, and the Ozymandias of now will be the half-sunk and shattered visage of later. F. Scott Fitzgerald, who never won a Pulitzer, will remain F. Scott Fitzgerald, and two-time Pulitzer Prize winner Booth Tarkington will remain Booth Tarkington. And anyway, I am absolutely certain there have been many writers the equal of Fitzgerald who, through their own bad luck or other people’s bad taste, were never published and never read, let alone given prizes, and it’s especially to these unknown soldiers of literature that I raise my glass. John Kennedy Toole killed himself believing he was doomed to be one of them, and he most certainly would have been, had his mother not accosted Walker Percy years later with his manuscript of A Confederacy of Dunces, which went on to win a twelve-years-posthumous Pulitzer Prize. It was a nice gesture.
Fans of Raymond Carver’s short fiction got a treat last year when the Library of America published the celebrated writer’s Collected Stories. Yet for some of his readers, the book cast a disquieting shadow over his career and work. Editors William Stull and Maureen Carroll included in this new volume a manuscript which they entitled Beginners, an alternate version of the 1981 Carver collection published by Knopf as What We Talk About When We Talk About Love (WWTA). Nearly thirty years ago, Carver’s editor Gordon Lish cut this manuscript by some 55 percent, essentially against Carver’s wishes. Though WWTA went on to become a critical success and a watershed in Carver’s career, the extent of Lish’s influence on the book has raised questions about just who is responsible for Carver’s artistic success.
In that regard, the Library of America volume’s inclusion of the complete manuscript of Beginners, all seventeen stories, offers readers a chance to draw their own conclusions about who Carver was as a writer, and about the meaning and worth of these contested stories. What follows are my own conclusions.
1: Just Leave Well Enough Alone?
I do understand the feelings of those who, perhaps without having read Beginners, feel a certain weariness at the idea of it. On the way to a chess match today, I was talking with a student at the high school where I teach. In his English class, he’s reading Robert Penn Warren’s All the King’s Men. There’s been some confusion because a number of students purchased a different edition of the novel, one that includes scenes that Warren’s editors removed from the novel for its original publication. Only recently, decades after All the King’s Men became a modern classic, have these additional scenes been spliced back in. In addition, Willie Stark, one of the central characters in the novel, has had his name changed to Willie Talos, Warren’s original name for him.
For Pete’s sake, I found myself thinking. Do we really need this? Wasn’t the novel great enough as it was? And long enough already? Can’t we just leave well enough alone? Willie Talos?
For those who fell in love with Carver’s work while reading WWTA, I can imagine a similar reaction to the publication of Beginners. There’s a feeling of having been bait-and-switched, perhaps. Or of having received an assignment to re-do work one had already completed. There’s an impulse to just throw up one’s hands and say, “It is what it is, and there’s no turning back time.” Or even to say that Lish was the one who made Carver great in the first place.
I understand these reactions. But having read both versions of this story collection in their entirety, my conclusion is that Beginners is vastly superior to WWTA, and indeed a work of art at least equal to Carver’s subsequent collection Cathedral. I don’t mean to be histrionic, but while reading the two versions side by side, I often felt that Lish’s treatment of Carver’s stories verged on the criminal. In a just world, Beginners would be published as a stand-alone volume to replace the shell that Lish made of it.
2: I See a Darkness
The conventional shorthand is that Lish’s versions are bracing and bleak, Carver’s verbose and sentimental. In actuality, however, many of the stories are more disturbing in their original form than in their eventual published form.
In the story “The Fling,” for instance, a father meets his adult son in an airport bar and makes a long confession about the affair that ended his marriage to the man’s mother. “I’ve got to tell this to somebody. I can’t keep it in any longer,” he tells his son. The son, who narrates the story, doesn’t want to listen, much less to forgive. The encounter ends in further estrangement between the two:
He hasn’t written, I haven’t heard from him since then. I’d write to him and see how he’s getting along, but I’m afraid I’ve lost his address. But, tell me, after all, what could he expect from someone like me?
It’s a story about the human need for reconciliation, the sacramental quality of confession and our inability, sometimes, to provide that for those who’ve hurt us. In the original version, the father’s guilt is compounded by the fact that his affair also led to the ghastly suicide of his mistress’s husband. In addition, he characterizes his first sexual encounter with this woman as a kind of rape. In comparison to the WWTA version of this story, entitled “Sacks,” this earlier version has an even darker view of the human capacity for evil—and concomitantly the father’s guilty desire for forgiveness takes on an even more profound resonance.
The most chilling example of the darkness in Carver’s vision, though, is the story “Tell the Women We’re Going,” which culminates in the rape and murder of a woman by one of the main characters. This story is one of the creepiest I’ve read in my life, right up there with Dan Chaon’s “Here’s a Little Something to Remember Me By.” It’s creepy largely because of the patience with which it builds to its horrifying climax. It follows a pair of high school chums who grow into adults with wives and children, then one Sunday afternoon leave their families to go for a drive in the countryside. They drink all afternoon and then head out toward Painted Rocks and the Naches River, encountering a pair of women on bicycles along the way. Their dealings with these women begin with flirtatious banter, then gradually gain menace, until one of the men is half-chasing (and then truly chasing) one of the women up an isolated rock. The violence is described in awful detail, but what makes it most awful is how understandable Carver makes it: we’re in the murderer’s head, seeing the steps that lead to his terrible acts.
At the same time, Carver also does a brilliant job of distinguishing between the two men, one of whom is reluctant to participate in the back-and-forth with the women, and who has parted from the other woman after nothing more than a brief conversation. At the end of the story, he comes upon the scene of the crime and is horrified by what he sees:
Bill felt himself shrinking, becoming thin and weightless. At the same time he had the sensation of standing against a heavy wind that was cuffing his ears. He wanted to break loose and run, but something was moving toward him. The shadows of the rocks as the shape came across them seemed to move with the shape and under it. The ground seemed to have shifted in the odd-angled light. He thought unreasonably of the two bicycles waiting at the bottom of the hill near the car, as though taking one away would change all this, make the girl stop happening to him in that moment he had topped the hill. But Jerry was standing now in front of him, slung loosely in his clothes as though the bones had gone out of him. Bill felt the awful closeness of their two bodies, less than an arm’s length between. Then the head came down on Bill’s shoulder. He raised his hand, and as if the distance now separating them deserved at least this, he began to pat, to stroke the other, while his own tears broke.
Following an incredibly intense narration of a brutal murder, this passage puts us into the experience of the murderer’s friend: the violent shift in his perspective on his old buddy; the surreal quality of coming face to face with this enormity; and, simultaneously, the recognition of the murderer’s humanity despite his new and unbridgeable differentness.
Compare all of that to Lish’s version of the ending (the pursuit, murder, and reaction, in their entirety):
Bill had just wanted to fuck. Or even to see them naked. On the other hand, it was okay with him if it didn’t work out.
He never knew what Jerry wanted. But it started and ended with a rock. Jerry used the same rock on both girls, first on the girl called Sharon and then on the one that was supposed to be Bill’s.
Lish has stripped the story’s ending of its narrative drive and emotional power and replaced them with a cheap jolt. Both stories are bleak, but only Carver’s version expands our understanding of the world by taking us viscerally into the abyss.
3: Less is Less
The radically truncated stories in WWTA cemented Carver’s identity as a minimalist in many people’s minds. Yet a comparison of the stories in Beginners with their counterparts in WWTA demonstrates how false that label is, and how impoverished the minimalist versions really are.
In one of his letters to Lish about the manuscript, Carver wrote the following:
I’m mortally afraid of taking out too much from the stories, of making them too thin, not enough connecting tissue to them.
His fears were well-founded. Lish took from these stories their rich sense of human possibility—their meaningfulness, to put it bluntly.
Lish altered the title of “Want to See Something?” to “I Could See the Smallest Things,” a telling change. For in Lish’s version, the narrator, an insomniac woman who walks out to her backyard in the middle of the night to find a troubled neighbor at war with slugs, comes away from the story with only the smallest changes in her perceptions about her life. She returns to her husband, hears him snoring, then says:
I don’t know. It made me think of those things that Sam Lawton was dumping powder on.
I thought for a minute of the world outside my house, and then I didn’t have any more thoughts except the thought that I had to hurry up and sleep.
In Carver’s version, the woman’s nocturnal sojourn has given her a new perspective on her life and her marriage. She returns to bed and is moved to talk to her husband about her love for him along with her fears about their relationship:
I felt we were going nowhere fast, and it was time to admit it, even though there was maybe no help for it.
Just so many words, you might think. But I felt better for having said them.
He’s still asleep during all of this, but she realizes that that doesn’t matter, and that, in fact, “he already knew everything I was saying, maybe better than I knew, and had for a long time.” The story is about a dark night of the soul, a revelation, a moment of intense awareness that leads to no apparent solution or change except for the profound internal change in the narrator. Lish’s version gives us only the faintest whisper of such a realization.
Many of these stories, as Carver notes in his letters to Lish, are also deeply connected with Carver’s recovery from alcoholism. “If It Please You,” for instance, is about a former drinker, James Packer, who has overcome his desire for booze by taking up needlework, something that another alcoholic recommends as a way to fill up the time formerly devoted to drinking. It’s an activity he finds satisfying. He also knits things that connect him to others’ lives—“caps and scarves and mittens for the grandchildren,” “two woolen ponchos which he and Edith wore when they walked on the beach,” and an afghan that he and his wife sleep under.
At the end of this story, James is full of bad feelings: anger at some “hippies” who cheated at bingo earlier that night; and fear about his wife, who may have uterine cancer. In Carver’s story, he tries to pray—to take solace in another activity endorsed by AA, which demands belief in a higher power. The story ends with a powerful meditation on prayer, and a real spiritual change for James:
He felt something stir inside him again, but it was not anger. He lay as if waiting. Then something left him and something else took its place. He found tears in his eyes. He began praying again, words and parts of speech piling up in a torrent in his mind. He went slower. He put the words together, one after the other, and prayed. This time he was able to include the girl and the hippie in his prayers. Let them have it, yes, drive vans and be arrogant and laugh and wear rings, even cheat if they wanted. Meanwhile, prayers were needed. They could use them too, even his, especially his, in fact. “If it please you,” he said in the new prayers for all of them, the living and the dead.
Lish appears to understand or sympathize with none of this. In his version, called “After the Denim,” there’s no prayer at all, and even the knitting is depicted only as an expression of lonely anger, the desperate act of a man on a shipwrecked boat (recalling a photograph James sees earlier in the story):
Holding the tiny needle to the light, James Packer stabbed at the eye with a length of blue silk thread. Then he set to work—stitch after stitch—making believe he was waving like the man on the keel.
4: It’s All Right to Cry
In his drastic cutting of Carver’s stories, Lish evinces a real discomfort with, or perhaps blindness to, the sacramental—moments of transcendent awareness, spiritual awakening, and yearning for reconciliation. His aesthetic is one of surfaces. Perhaps he’s aiming to make Carver’s stories more like Hemingway’s, with only the tip of the iceberg visible and the weight concealed. But mostly what he does is lop off the bulk of the berg, leaving just a floating ice cube.
He cuts out the moments that are most tender and beautiful. For example, in “Gazebo,” a story no less heartrending and sad in Carver’s version, a woman talks with her adulterous husband about a time when she believed that their marriage would last a lifetime:
I remember you were wearing cutoffs that day, and I remember standing there looking at the gazebo and thinking about those musicians when I happened to glance down at your bare legs. I thought to myself, I’ll love those legs even when they’re old and thin and the hair on them has turned white. I’ll love them even then, I thought, they’ll still be my legs. You know what I’m saying? Duane?
It’s a wonderful moment, and a sad one, a moment of palpable love and lost hopes. It’s the type of detail that sticks in your head, that you remember years after reading a story. Lish cuts it.
In “Beginners,” a contemporary version of Plato’s Symposium in which two couples sit around a table with gin and tonic and talk about love, Mel McGinnis tells a story that he thinks illustrates what real love is. In that story, an elderly husband and wife named Henry and Anna Gates are hit by a drunk driver and nearly die. Mel, a doctor, gets to know Henry as he and Anna recover in separate rooms, and Mel is moved by his account of their long marriage. The couple used to be snowed in alone all winter in their country home, and each night they would play records and dance together before falling asleep under piles of quilts. Henry, incapacitated in the hospital, is depressed because he’s separated from his wife. When they are finally reunited, though, the scene brings observers to tears:
She gave a little smile and her face lit up. Out came her hand from under the seat. It was bluish and bruised-looking. Henry took the hand in his hands. He held it and kissed it. Then he said, “Hello, Anna. How’s my babe? Remember me?” Tears started down her cheeks. She nodded. “I’ve missed you,” he said. She kept nodding.
As I read this scene, I found myself crying, not only because of the beauty of the moment, but also out of a sadness that this scene was axed from the version of this story that most people know. In Lish’s version, Mel’s story culminates with the rather mundane observation that the husband’s “heart was breaking because he couldn’t turn his goddamn head and see his goddamn wife.”
This version of the story doesn’t bring us to tears, and maybe that’s how Lish intended it, fearing what he called Carver’s “creeping sentimentality.” But, of course, there’s a difference between sentiment and sentimentality. The point of Mel’s story is not that everyone does or should or can love each other as the Gateses do; Carver even leaves open the possibility that the story isn’t entirely true. But in this contemporary re-working of Plato, the story of Anna and Henry is a kind of idealized vision of love, one that beguiles and inspires the four people in the story, who have been hurt but live on to love again.
Carver himself was hurt by what Lish did to his stories, judging by the letters he wrote him. That hurt must have been complicated enormously by the critical success that the altered stories went on to attain. What’s inspiring, though, is how Carver held on to his own vision: the stories in his 1983 collection Cathedral hew to the model of those in Beginners, and include the story “A Small, Good Thing” essentially in the version originally prepared for Beginners.
In this story, a little boy named Scotty is struck by a car on the day of his own birthday party. He falls into a coma and, after several days in the hospital, dies. His mother had ordered a birthday cake for him a few days before the accident, and the baker who has made it begins calling with nasty messages because Scotty’s parents have not picked it up.
Lish amputates the second half of the story and titles his version “The Bath”: Scotty never dies, and the story ends ambiguously, with Scotty’s mother getting another phone call from the baker.
But in Carver’s version, after Scotty’s death his parents go to the shop and confront the baker. Though he is initially defensive, the baker is suddenly struck with shame. He apologizes and gives the grieving parents hot rolls to eat, telling them that “Eating is a small, good thing in a time like this.” The story ends with another sacramental moment, one of communion between these broken people:
“Here, smell this,” the baker said, breaking open a dark loaf. “It’s a heavy bread, but rich.” They smelled it, then he had them taste it. It had the taste of molasses and coarse grains. They listened to him. They ate what they could. They swallowed the dark bread. It was like daylight under the fluorescent trays of light. They talked on into the early morning, the high pale cast of light in the windows, and they did not think of leaving.
It’s all right to come together in times of sadness, this story assures us. It’s all right to risk being sentimental by entering into the sacramental. It’s all right to cry. And it’s all right to write a story that might make someone cry, that might squeeze someone’s heart with horror or sadness, or with small, good things like eating, dancing, knitting, or prayer.
The subsequent evolution of Carver’s career makes it clear that he realized it was okay to write such stories. The publication of Beginners offers a lavish bounty of them.