Annotate This: On Marginalia

“We have all seized the white perimeter as our own
And reached for a pen if only to show
We did not just laze in an armchair turning pages;
We pressed a thought into the wayside,
Planted an impression along the verge.”
—Billy Collins, “Marginalia”

Sometime after the fourth century, an unknown transcriber of the Mithraic scholar Lactantius Placidus accidentally conjured into history a demon named Demogorgon. Writing in the margins of Placidus’s commentary on Statius’s Latin poem Thebaid, the transcriber turned his attention to a line concerning “the supreme being of the threefold world.” By way of gloss, the scholar noted that Statius had been referring to the “Demogorgon, the supreme god, whose name it is not permitted to know” (even while Placidus apparently knew it). Etymologically the provenance of the word is unknown. Aurally it reminds one of the daemons of ancient Greek philosophy, that indwelling presence that acts as a cross between consciousness and muse; it is a terrifying-sounding being, with its portmanteau connotations of both “demon” and the serpentine-locked “Gorgon.” Most uncanny of all is that no reference to the “Demogorgon” appears to exist before Placidus’s marginalia.

As if he had manifested the creature from the very ether, the Demogorgon has replicated from that initial transcription through literary history. He next appeared in Giovanni Boccaccio’s 14th-century On the Genealogy of the Gods of the Gentiles, where the Italian author connected the entity to the demigod Pan while interpreting a line from Ovid’s Metamorphoses; by the Renaissance he’d be incanted in works such as Ludovico Ariosto’s I Cinque Canti, Edmund Spenser’s The Faerie Queene, and Christopher Marlowe’s diabolical play Doctor Faustus. A few centuries later, and the sprite mentioned in Placidus’s gloss would be name-checked by Voltaire, and he’d be conjured in Percy Shelley’s Prometheus Unbound and Herman Melville’s Moby-Dick.

By the 20th century, the Demogorgon would become a character in Gary Gygax’s role-playing phantasmagoria Dungeons & Dragons, and he now enjoys his ninth life as the bestial, reptilian antagonist of the first season of Netflix’s exercise in Gen-X nostalgia Stranger Things. Cultural footnote though the Demogorgon may be, that scribbling in the border of Thebaid endures. He is what Spenser described: “Downe in the bottome of the deepe Abysse / Where Demogorgon in dull darknesse pent, / Farre from the view of Gods and heauens blis, / The hideous Chaos keeps, their dreadful dwelling is.” There is a more prosaic explanation for the creature’s genesis: whoever had been copying Placidus’s commentary had misread the Greek accusative referencing the Platonic concept of the “demiurge.” All those deltas and gammas got confusing. There never had been a Demogorgon, at least not outside of that initial misreading. Even Placidus nods, it would seem (just like the rest of us). At least that’s how it’s often interpreted, but marginalia, which is its own form of instantaneous commentary on a literary text, is a creative act in its own right. Such commentary is the cowriting of a new text, between the reader and the read, as much an act of composition as the initial one. From this vantage point, the Demogorgon is less a mistake than a new being born in the space between intent and misinterpretation. A conjuring appears. So much depends on marginalia.

In his 1667 epic Paradise Lost, John Milton replicates that transcendent transcription error when he invokes “the dreaded name / Of Demogorgon,” but the blind poet got the marginalia treatment himself in a used copy of his work that I read during my doctoral composition examinations. My copy of William Kerrigan, John Rumrich, and Stephen M. Fallon’s The Complete Poetry and Essential Prose of John Milton has a delightful addition made on its title page: marginalia by way of doodle, where some bored and anonymous undergraduate, a Placidus in her own right, added a cartoon thought bubble to the 1629 portrait of the young poet posed soberly in his stiff, starched, ribbed collar as if an oyster emerging from a shell, leading the annotator to imagine the author thinking “I am a seahorse, or a snail.” It’s not my favorite marginalia, though; that honor is reserved for a copy of the Pelican Shakespeare edition of The Merchant of Venice heavily annotated by a reader who clearly had no previous familiarity with the play. When Shylock gives his celebrated soliloquy, in which he intones, “If you prick us, do we not bleed? If you tickle us, do we not laugh? If you poison us, do we not die? And if you wrong us, shall we not revenge?” the previous owner approvingly added in the margins “Bring your own BOOYEA!” Whoever got their first taste of The Merchant of Venice from the copy that I now possessed was rightly rooting for Shylock, so much so that when they got to the final act and discovered the moneylender’s heartbreaking forced conversion, they wrote in a corner of the creased and dog-eared page “Aww,” and then chose never to annotate this particular copy again.

Such marginalia greatly enlivened my reading of the play, in part because the weird enthusiasm of the previous owner was innately funny, but also because it was equivalently moving. As all marginalia is: those little marks that people make in the borderlands of a book, in the margins and on the title page, underlined text and notes scribbled wherever there is a blank space requiring commentary, exegesis, digression, or doodle. They exist as the material result of a reader having grappled with literature. Since the era of literary mechanical reproduction (i.e., print), there has been the risk of all books partaking in a dull uniformity with every other object that shares their particular title; marginalia returns the actual book to its glorious singularity, converting print back into manuscript, so that my copy of The Merchant of Venice is distinct from all the others in the Pelican Shakespeare series. Marginalia in a used book is an autograph from the reader and not the author, and all the more precious for it. Such scribblings, notations, and glosses, whether commentary on the writing itself, or personal note, or inscrutable cipher known only to its creator, are artifact, evidence, and detritus, the remainder of what’s left over after a fiery mind has consumed the candle of the text. A book bloody with red ink is the result of a struggle between author and reader; it is the spent ash from the immolation of the text; it is evidence of the process, the record of a mind thinking. A pristine book is something yet to be read, but marginalia is the reading itself. Far from being a molestation of the pristine object, the writing of marginalia is a form of reverence, a ritual, a sacred act. So rarely do you get the opportunity to write back to authors, whether out of love or hate. Marginalia lets you do it for even the dead ones.

Such reverence for marginalia was hard-won for me; I’m not the sort of reader who took naturally to jotting observations in the corner of a page. When I was growing up, I approached my books with a bibliomaniacal scrupulosity marked by its own neuroticism. To prevent the pages of paperbacks from curling around each other in the un-air-conditioned summer humidity, I used to take a ruler and make sure that they were perfectly lined up on the edges facing the back of the bookshelf, so that their spines, greeting those who might peruse their titles, were strung along like crooked teeth. Books were to be gingerly opened, carefully placed, and certainly never allowed to have ink vandalize them. An observer might note that all of this obsessiveness doesn’t have much to do with actually reading; as S. Brent Plate writes in his own reflection on marginalia and the totemistic quality of books at The Los Angeles Review of Books, “this fetishization cannot be sustained.” Graduate school broke me of that affectation; the need to actually ingest the content of a book became more important than treating a copy of Michel Foucault’s Discipline and Punish as if it were the goddamn Book of Kells (which incidentally has its own marginalia). Disciplining and punishing books is precisely what we did in wrestling with the ideas therein; no wonder so many violent metaphors are used in describing the process of reading, whereby we “crack spines” and drench pages in lurid corpuscular red ink.

When I first began writing book reviews several years ago, I still hadn’t quite shaken my previous idolatry of paper and binding. While writing my first published review of a book (Colin Dickey’s Afterlives of the Saints, considered at The Revealer), I concocted an elaborate system of color-coded Scotch-tape tabs and enumerated page numbers listed in a document, so as to be able to reference portions of the text I might need to paraphrase or quote, all while avoiding anything as gauche as dog-earing or underlining. This system was untenable. Now I struggle with at least the books I’m tasked with reviewing as if Jacob with his nocturnal angel, and the marked, torn, broken books that limp away testify to an event that in some way altered us both, or at least offer evidence that there was an event we can call reading. Out of interest I checked some of the most recent books that I had to read for my supposedly professional opinion (I don’t do this with novels from the library, of course), and my marginalia is a record of my perseverations c. 2019. In one I wrote underneath the printed words “seems anemic, feels as untrue as feeling that God can’t be cruel,” and in another I penned “AMERICAN TRAGEDY.” At the very least, the people who purchase the corpses of my read volumes after I’ve deposited them into the book donation bin will be able to psychoanalyze my hypergraphic observations.

Referencing exhibits at the National Gallery of Art in Washington, D.C., and the Regenstein Library of the University of Chicago, Plate notes that today is a veritable golden age of the form, even as digital publication would ironically seem to announce its eclipse. The plucky dons of Oxford University even sponsor a Facebook group for the analysis of evocative specimens of the form spotted in the wild. The BBC reports on one volume from the Bodleian Library in which a student wrote “I hate these clever Oxford people.” One reader recorded their graffito in the pages of the Labour Party’s response to the EEC with “Why the fuck is this all so boring…” An annotation in a scholarly journal reads “This article is a load of balls.” Much as with the literary Banksy who imagined my Milton dreaming of a beautiful aquatic existence as seahorse or snail, these marginalia have little to do with simply annotating the book, and everything to do with engaging with the text as if it were an interlocutor (as angry as those engagements sometimes are).

What the exhibits, studies, and Oxford group signify is that marginalia has long come out from between the covers, as it were. It’s a demonstration of how literary theorists interested in material history—as well as critics concerned with that nebulous collection of attributes that invisibly radiates out from the book proper and is known as “paratext” (everything from covers and blurbs to prefaces and reviews)—have been academically concerned with marginalia for a generation now. Writing in Early Modern English Marginalia, scholar Katherine Acheson notes that the form is a “record of our complex material, intellectual, emotional, and psychological interactions with the book, and therefore [they present]…a special kind of history of those marvelous things and their readers.” A history of marginalia, from the saucy medieval monks who used manicules to mock their own transcription errors, to the 17th-century mathematician Pierre de Fermat’s unfulfilled marginal promise to have found a proof that no three positive integers a, b, and c can satisfy the equation aⁿ + bⁿ = cⁿ for any integer n greater than two (a claim that waited more than three centuries for an actual proof), is a history of the human mind itself.

Marginalia has gone digital, with projects like The Archaeology of Reading in Early Modern Europe (administered jointly by Princeton, Johns Hopkins, and University College London), Annotated Books Online, and repositories of the marginalia of authors from Walt Whitman to Charles Darwin available to the historian and the merely curious alike. Harvard’s Widener Library has an online collection allowing anyone to read the annotations of “John Keats, Herman Melville, [and] Hester Lynch Piozzi,” among others. And marginalia has finally earned its indefatigable scholarly champion in the form of H.J. Jackson and her exhaustive study Marginalia: Readers Writing in Books. Jackson surveyed a voluminous amount of material written and read across the centuries, in books consumed by both the famous and the average, so as to develop a taxonomy of the form. She writes that “Readers’ notes in books are a familiar but unexamined phenomenon. We do not understand it well. We have mixed feelings about it, sometimes quite strong ones, such as shame and disapproval.” Beyond simple note-taking, Jackson discovered that those who annotate their books do it for a variety of reasons, even while those reasons may be “private and idiosyncratic.” Readers address the author, they address an imagined audience, they address posterity and the absolute. They write to express ecstatic agreement and vociferous disagreement, to interrogate the book as if it were under oath, and to merely assert physically the existence of readers themselves in the most potent objects that embody writerly ambition. Jackson observes that “All annotators are readers, but not all readers are annotators. Annotators are readers who write. Annotation combines—synthesizes, I should say—the functions of reading and writing. This fact in itself heightens the natural tension between author and reader.”

As enjoyable as anonymous marginalia can be, most of us seem more interested in the annotations of famous writers considering other famous writers, for the obvious reasons. Aspiring seahorse or snail John Milton’s heavily annotated copy of Shakespeare’s First Folio was recently discovered hiding in plain sight at the Free Library of Philadelphia, an identification that may prove invaluable to scholars trying to understand the influence of one genius on another. Then there are Vladimir Nabokov’s drawings within Franz Kafka’s The Metamorphosis, the committed Peabody Museum-affiliated amateur entomologist trying to flesh out the segments and exoskeleton of poor Gregor Samsa. Being able to see a fertile brain in flux is one of the exquisite joys of marginalia in the hand of celebrated authors. Writing in his column entitled “Marginalia,” Edgar Allan Poe enthused that in “getting my books, I have been always solicitous of an ample margin…penciling suggested thoughts, agreements and differences of opinion, or brief critical comments in general.” He was hardly alone in that practice. Consider that old curmudgeon Mark Twain’s notation in the margins of his copy of Darwin’s The Voyage of the HMS Beagle Around the World, where he wrote “Can any plausible excuse be furnished for the crime of creating the human race?,” presumably whether ex nihilo or by primordial soup. The character of Jack Kerouac as both reader and writer is on display in an edition of his fellow New Englander Henry David Thoreau’s A Week on the Concord and Merrimack Rivers, pilfered from a Lowell, Mass., library in 1949, a little under a decade before the writing of his most famous book. There Kerouac underlined an observation of Thoreau’s: “The traveler must be born again on the road.”

Such is ever the case, for it’s not a coincidence that Thoreau’s language has such evangelical connotations to it. Reading does have something of the religious in it, and not just all of the transcendent hoopla either. In considerations of faith, prayer is not just a matter of the soul but of the hands as well; reverence not only a subject for the mind, but of the body contorted into kneeling, too; ecstasy fit not only for the spirit, but also for the body. The same is true of reading, for even in our supposedly transhumanist digital age there is still the question of how you comport yourself when scanning a page, whether leaning over a desk or sprawled across a couch; of how the book is gripped or carefully opened, of the pencil or pen poised over print. Marginalia can be such a form of material supplication, before the altar of the text’s base physicality. As a method, marginalia remind us that all annotation is allusive, that all literature is connected to everything else, that the reader influences the writer as surely as the other way around, even if the latter has been dead for centuries. Plate writes that margins are “sites of engagement and disagreement: between text and reader and…between author and reader. From Talmudic studies to legal amendments, margins have been the places where texts have been kept alive—alive because they’ve been read and responded to.” Books are otherwise inert things, whereas marginalia turns the moribund page into a seminar, an academy, a disputation, a debate, a temple.

Books are, certainly, often inert things. They can exist as a type of interior decoration, as status symbol, as idol. Think of the unreaderly sentiment parodied by F. Scott Fitzgerald in The Great Gatsby, when Nick comes upon the library filled with classics bonded by their uncut pages. There a drunken admirer of Jay Gatsby, wearing “enormous owl-eyed spectacles,” informs Nick “It’s a bona-fide piece of printed matter. It fooled me…It’s a triumph. What thoroughness! What realism! Knew when to stop, too—didn’t cut the pages. But what do you want? What do you expect?” Certainly not to actually read the books, because they exist not to be interpreted, but admired. “Printed matter” as mere wallpaper. A memorable image of a certain type of crass materialism, of the idolization of the book at the expense of the actual writing, the whole thing drawn to its ultimate logical conclusion. Not only is Gatsby not underlining and marking up his margins, he’s not even going to bother cutting the pages to actually read what’s inside. By contrast, consider the marginalia made by the young poet Sylvia Plath while she was an undergraduate at Smith College first reading The Great Gatsby. Before she’d lived the bulk of her own tragic life—abuse at the hands of her husband, Ted Hughes, and her eventual suicide—Plath read of Daisy Buchanan.

When the narrator leaves Gatsby standing vigil outside of the Buchanan home, his youthful love retiring upstairs with her brutish, privileged, bigoted husband, Nick reflects, “I walked away and left him standing there in the moonlight—watching over nothing.” There in her neat, meticulous, tidy handwriting, Plath recorded nine words in black ink organized into eight lines marked with the caesura of a single hyphen: “knight waiting outside—dragon goes to bed with princess.” Such reading is as if a prayer for intercession, and the physicality of the whole thing is instrumental. Such a method of annotation gives the flesh spirit, reminding us that books are objects—but not entirely. Such is the gravitational power of literature, that every new work alters every other so that the canon as an abstract idea can never be defined, can never be static. Marginalia, as evidence of thought and engagement, is among the synapses of that process. Marginalia is the ash left over, the melted wax of the candle proving that a fire once burnt here.


A Year in Reading: Ed Simon

My reading year was interrupted by the caesura of an interstate move, as we traded in lobster rolls for Maryland blue crabs, Legal Seafood for Ben’s Chili Bowl, Leonard Bernstein for Duke Ellington, and the shadow of Harvard University’s Memorial Hall for that of the Capitol dome. Don’t take the last sentence as an obnoxious humble-brag; I didn’t attend Harvard, though I often caught the T near there, just as I now regularly commute through the Capitol South Metro Station, and that proximity to my “betters” is enough for me to fart a bit higher than my posterior. Now that I’m a proud denizen of the District, as we locals apparently call it, I’m not just a citizen who is constitutionally prohibited from voting for my own congresspeople, but also a resident of America’s unheralded literary capital.

Where else have Americans so often fervently oriented both their dreams and, increasingly, their nightmares? What other hundred square miles (well, with a bite taken out of the bottom of it) has so clearly mapped onto the geography of national aspirations? Who doesn’t basically know the shape of the Mall, the look of the Lincoln Memorial, the feel of the White House? New York is the only other city I’ve lived in that gives the same sense of spatial “fame-overload,” as perambulations take you by any number of structures so iconic in their import that you can’t help but develop a continual vertigo.

As with my retrospective last year, I’m going to limit my consideration of books read in 2019 to those I’ve taken out from my local library, whether near Cambridge or on Capitol Hill (also, support your local library). In the interests of dutiful fairness, I’m not mentioning any of the exceptional books that I already reviewed this year. I’m also making one alteration; previously I limited myself to focusing only on novels. This year, with the logic that our social reality is as disturbing and surrealistic as any fabulist gothic, I’ve decided to make an exception for one class of nonfiction by including books on politics. Chief among these was the gorgeous Beautiful Country Burn Again: Democracy, Rebellion, and Revolution by Ben Fountain. Justly celebrated for his brilliant novel Billy Lynn’s Long Halftime Walk, which smashed American idols from militarism to sports obsession with a deft empathy (not an attribute often associated with smashing), Fountain returns to journalism with Beautiful Country Burn Again.

Since the 2016 election, certain elite publications have taken to reading the tea leaves of American malaise, going on what some wags have termed “red-neck safaris” so as to better understand the sentiments of those of us who originally come from “flyover country.” Texas-born Fountain understands that the reality is often far more complicated, and he provides a distressing, heartbreaking, poignant month-by-month reading of the election that saw nascent authoritarianism sweep into Washington. “2016 was the year all the crazy parts of America ran amok over the rest,” Fountain writes, “Screens, memes, fake news, Twitter storms, Russian hackers, pussy grabbers, Hillary’s emails, war, the wall, the wolf call of the alt-right, ‘hand’ size, lies upon lies upon lies and moneymoneymoney—the more money, the more likes, is this politics’ iron rule?—they all combined for a billion-dollar stink of an election.”

It is disorienting as well as disturbing to read an account of recent history that all of us lived through. Fountain has somehow defamiliarized it, however, and the rhetoric of retrospective history strikes us with its sheer nightmarish surrealism. Turning to historical and economic analyses, but filtering them through the consciousness of a poet, Fountain’s account isn’t that of other classic campaign works like Hunter S. Thompson’s Fear and Loathing on the Campaign Trail ’72 or Matt Taibbi’s Spanking the Donkey. Fountain isn’t embedded with any campaign; he doesn’t eat barbecue at Iowa state fairs or whoopie pies in New Hampshire. He’s an observer like the rest of us, and somehow Beautiful Country Burn Again is all the more powerful because of it.

William Carlos Williams wrote that “It is difficult / to get the news from poems / yet men die miserably every day / for lack / of what is found there.” If I can stretch my amendment that allowed for political nonfiction to include poetry as well, holding to the position that poetry may not be factual in the same way as journalism but is often more truthful, then the most powerful book on current events that I read this year was Terrance Hayes’s collection American Sonnets for My Past and Future Assassin.

Because Hayes, currently a professor at New York University and the poetry editor at The New York Times Magazine, was on the faculty of Carnegie Mellon when I got my master’s there, I sometimes like to pretend that I actually know him, though the extent of our discourse was me saying hello to him once on the winding, red-bricked stairwell of Baker Hall. Hayes had a mohawk then; the haircut has changed, but in the meantime, he’s gotten a National Book Critics Circle Award, the T.S. Eliot Prize, a Hurston/Wright Legacy Award, and a MacArthur Fellowship. No doubt he’ll one day soon (deservedly) be named the Library of Congress’s Poet Laureate of the United States. When I pretended to know Hayes, he was simply a brilliant poet, but since then he’s announced himself as a potentially canonical one. American Sonnets for My Past and Future Assassin was in part Hayes’s reaction to the election of you-know-who, but more than that it’s his grappling as a black man with America’s legacy of violent institutional racism. Writing in a poetic form that goes back to Petrarch and was defined by Wyatt, Surrey, Shakespeare, and Wordsworth, Hayes intones, “I lock you in an American sonnet that is part prison, / Part panic closet, a little room in a house set aflame.” If it’s true that “Poetry is news that stays news,” as Ezra Pound once claimed, then American Sonnets for My Past and Future Assassin has distressingly been news for a long time, in 1619, in 1776, in 1860, in 1960, in 2019.

So upside down is our current moment that politics must of course be explored by that engine of empathy which literary critics long ago deigned to call the “Novel.” Some of these considerations come in the form of historical fiction, some through the vagaries of science fiction, but if poetry like Hayes’s is at one pole of human expression then surely the very opposite must be the dry government report. That’s the genre chosen by the political scientist Jeffrey Lewis, director of the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies at Monterey, who moonlights here as a novelist. Lewis’s first “novel” is the surprisingly engaging and pants-shittingly terrifying The 2020 Commission Report on the North Korean Nuclear Attacks Against the United States. Borrowing the form, feel, and language of actual governmental documents such as the Warren Commission report, the 9/11 Commission Report, and the Mueller Report, Lewis imagines a series of miscalculations, blunders, strategic missteps, and plain political idiocy (in part due to you-know-who) that leads to a brief nuclear exchange that sees the destruction of Seoul, Tokyo, and Yokohama, and the virtual obliteration of North Korea. Added to such horror are the detonations of nuclear warheads over Honolulu, Palm Beach (Mar-a-Lago is a target), Manhattan, and northern Virginia, when a missile intended for Washington lands a few miles off course. Lewis writes with eerie and prescient verisimilitude that “We present this final report and the recommendations that flow from it mindful that our nation is more divided than ever before, particularly over the question of responsibility for the chain of events that led to the first use of nuclear weapons in more than eight decades—and their first use against the United States of America.” Evoking other examples of “official document” fiction, from Robert Sobel’s textbook from a parallel universe For Want of a Nail: If Burgoyne Had Won at Saratoga to Max Brooks’s pastiche of Studs Terkel’s reporting World War Z: An Oral History of the Zombie War, Lewis’s novel is among the most disturbing I read this year, in part because of its dispassionate, objective tone.

Speculative fiction was also the chosen genre for Leni Zumas’s startling, upsetting, and unnervingly realistic Red Clocks. In yet another representative example of our ongoing golden age of feminist science fiction, Zumas joins Naomi Alderman, Louise Erdrich, and (of course) Margaret Atwood in examining trends regarding reactionary gender relations, reproductive rights, and institutional misogyny by extrapolating out from our current moment to a possible (and believable) near future. Red Clocks is science fiction for a post-Kavanaugh era, taking place sometime in the next decade or so after Roe v. Wade has been overturned, LGBTQ and single Americans have been denied the right to adopt, and creeping theocratic logic infects even the liberal environs of the Pacific Northwest where the novel is set. The novel is focalized through four major characters: a single high-school teacher and historian approaching middle age who wants a child but is infertile and is running up against the government’s bans on IVF and adoption by the unmarried; her pregnant teenage student who wants to get an abortion; the wife of one of the teacher’s colleagues who finds herself in a stultifying marriage; and a local midwife with witchy affectations who runs afoul of the increasingly draconian state. One of the strengths of Red Clocks is how deftly it exposes the lie that pro-choice politics are anti-pregnancy, and how what lies at the center of any defense of reproductive rights is the freedom to make the best decision for yourself. At the core of Red Clocks is the conviction that women must have their right to bodily autonomy recognized, and that we don’t have to be living in Gilead to admit that things can get just a little bit worse every day.

If Zumas imagines a not-so-distant future to explore her political themes, then Joshua Furst takes us to the not-so-distant past in Revolutionaries. Evoking recent novels such as Nathan Hill’s The Nix, Furst’s second novel is arguably part of a trend of millennial writers attracted to the political radicalism of the ’60s and ’70s, while refusing to simply embrace the mythology of the Woodstock Generation as the progenitor of all that is just and free. Revolutionaries is narrated by Fred (né “Freedom”) Snyder, the put-upon, manipulated, emotionally abused, and often ignored son of notorious countercultural radical Lenny Snyder.

“Call me Fred,” the narrator says, “I hate Freedom. That’s some crap Lenny dreamed up to keep people like you talking about him.” If Revolutionaries were in need of a subtitle, I’d suggest “OK Boomer.” Snyder is a not-so-thinly veiled version of Abbie Hoffman, founder of the Youth International Party (or Yippies), jailed member of the Chicago Seven, and arguably the anarchic spiritual ancestor of the Dirtbag Left. As with Hoffman, Snyder organizes trollish pranks against the establishment, such as raining dollar bills down on the New York Stock Exchange to demonstrate the petty greed of the brokers who scramble after literal change, or his demonstration against the Pentagon in which a group of warlocks and witches attempts to levitate the massive structure. He’s idealistic, utopian, and committed to freedom, equality, and justice. Snyder is also occasionally cruel, narcissistic, self-indulgent, and unequivocally a terrible father. Revolutionaries neither condemns nor celebrates Snyder, taking him with all of his complexities while asking how any radical is able to be committed equally to both family and movement.

Recent political history was also the theme of Jennifer duBois’s The Spectators, and as with Furst she excavates the previous decades to give intimations of what the genealogy of our current age might be. The Spectators isn’t interested in hippie hagiography and its discontents, however, preferring rather to toggle between the gritty, dystopian world of New York City in the ’70s, when the Bronx was burning and Gerald Ford proverbially told the five boroughs that they could drop dead, and the belle époque of the mid-’90s, when Americans took their first hit of mass-marketed infotainment. DuBois’s central, mysterious, almost Gatsby-esque character is Matthew Miller (born Mathias Milgrom), who in the 1993 present of the novel is the host of a daytime talk show with shades of Jerry Springer. Before his current iteration of peddling shock television—all baby-daddy reveals and Satan-worshiping teens encouraged to brawl in front of a live studio audience—Miller was an idealistic city councilman in New York between the Stonewall uprising and the AIDS pandemic. His ex-lover Semi recollects that Miller “radiated a subtle electricity—something slight and untraceable that kinectified the air around him—and it was easy to mistake this, then, for the particular dynamism of compassion.” Like the actual Springer, Miller was an idealistic, progressive, crusading politician; unlike the actual Springer he was also a closeted gay man. The Spectators’ attention shifts between Semi in the ’70s and ’80s and Miller’s publicist Cel in the ’90s, their two stories converging in the novel’s present as Miller faces a reckoning after it has been revealed that a midwestern school shooter was a fan of his show. DuBois writes with a tremendous humanity, a novelistic consciousness whereby she almost magically occupies with equal aplomb both the experience of young gay men on the Lower East Side in the early ’70s and that of an anxious career woman who grew up dirt-poor in New England. Within The Spectators something else emerges, a portrait of a nation obsessed with violence, spectacle, and ratings, but where sometimes there may still be something noble, since “compassion took work, he always said, and anyone who told you otherwise wasn’t really trying to be good at it.”

Furst and duBois have written historical fiction of a kind, but they’re just two examples of what’s been a growing crescendo of excellent work in that often-forlorn genre. Like all of the genres that are too often condescended to or ghettoized, historical fiction has been critically disparaged, passed over as the purview of petticoats and carriages. Yet the last few years have seen an explosion of the form, from Francis Spufford’s Golden Hill: A Novel of Old New York to Colson Whitehead’s The Underground Railroad. What these titles share is a sense of playfulness within the dungeon that is history, as well as a reverential imitation of the often-labyrinthine prose of the 18th and 19th centuries. Such historical fiction isn’t written as a palliative for the contemporary moment, but rather as an excavation of our fallen, modern age.

Edward Carey’s achingly melancholic Little takes as its subject Marie Grosholtz, an 18th-century Alsatian peasant girl adopted by an esteemed physician who mentors her in the art and science of making realistic wax sculptures of humans. Marie’s autobiography, exemplary and talented as she is, is still told from the perspective of one of us commoners, even as she, Zelig-like, intersects with the great personages and events of her age. Brief appearances of Enlightenment luminaries punctuate Little (as do Carey’s own delightful line drawings), including cameos by Jean-Jacques Rousseau, Voltaire, Benjamin Franklin, Robespierre, Diderot, Marat, and Napoleon and Josephine (and the latter’s pug), and by the very end, as if to demonstrate the sheer scope of her life, a young writer named Charles Dickens. So begins her account: “In the same year that the five-year-old Wolfgang Amadeus Mozart wrote his Minuet for Harpsichord, in the precise year when the British captured Pondicherry in India from the French, in the exact year in which the melody for ‘Twinkle, Twinkle, Little Star’ was first published, in that very year, which is to say 1761…was born a certain undersized baby.” By the conclusion of Little, Marie is known by her married name of Madame Tussaud, and while her children encourage her to embrace a new technology invented by Louis-Jacques-Mandé Daguerre, she believes that nothing as ephemeral as photography can replace the warm fleshiness of molded wax.

Across the English Channel from France, and Imogen Hermes Gowar describes a fantastic 18th-century world marked by exploration, trade, and mystery, but also by exploitation and cruelty, in her humane and beautiful The Mermaid and Mrs. Hancock. Gowar’s maximalist door-stopper of a book tells the tale of Jonah Hancock, comfortable merchant and member of London’s rising bourgeoisie, who finds himself in possession of a “mermaid” brought back by one of his sailors from the sundry regions of the globe. Hancock’s London is no less enraptured by spectacle than Matthew Miller’s New York, and so the “mermaid” becomes the linchpin of various schemes, even while the bumbling, good-natured, and fundamentally conservative financier finds himself falling in love with Angelica Neal, a courtesan and adept student of the School of Venus, as if a character right out of Daniel Defoe’s Moll Flanders. London in The Mermaid and Mrs. Hancock is described by Gowar with almost supernatural precision: “The white-sailed ships strain upon it, and the watermen have gathered their bravado to steer their little crafts away from the bank and race across the current… the winking glass of the Southwark melon farms; the customs house, the tiered spire of St Bride’s, the milling square of Seven Dials, and eventually… Soho.” A mermaid of sorts does eventually arrive in Jonah and Angelica’s life, but she is neither symbol nor synecdoche, metaphor nor metonymy, but something else, with the whiff of ineffability about her.

Across the Atlantic Ocean from Great Britain, and Esi Edugyan imagines a different world of the early 19th century, though perhaps no less wondrous, even if similarly marked by exploitation and cruelty, in her equally humane and beautiful Washington Black. Since her stunning Half-Blood Blues, which imagines the fate of a biracial jazz musician living through the Nazi regime and the Holocaust, the Canadian novelist has become one of the most lyrical interpreters of race, identity, and the troubled legacies of history. Washington Black arrives as one of the greatest fictional accounts of slavery’s too-oft ignored role in the establishment of the “New World,” recalling both Ishmael Reed’s Flight to Canada and Charles Johnson’s Middle Passage, though it turns away from those books’ parodic sentiments toward a more baroque, quasi-magical realism.

Edugyan’s titular George Washington Black is born enslaved on the Caribbean island of Barbados, witness to the unspeakable cruelties of a sugar plantation overseen by a British master. When Washington is indentured to the master’s brother, an aspiring scientist with an interest in hot-air balloon transportation who is also a secret abolitionist, he is provided with a means of acquiring his freedom, which propels the narrative of Edugyan’s ingenious picaresque. Washington, in a manner that made him more deserving of his name than the man whom his master had ironically christened him after, was “of an ancient faith rooted in the high river lands of Africa, and in that faith that the dead were reborn, whole, back in their homelands, to walk again free.” Washington Black, never content to obscure the evils which marked the emergence of the modern world, also revels in the wide-roaming nature of freedom itself. Edugyan takes her characters from Barbados to Virginia, the Maritime Provinces of Canada, West Africa, the Sahara, and even an aquarium which Washington constructs in London (perhaps Jonah’s mermaid could live there). Throughout Washington Black a tension is brilliantly held: ours is a fallen world which sometimes can still produce such wonders.

Taking place during the same time period as Washington Black, but a few thousand miles north of sweltering Barbados, is Carys Davies’s minimalist novella West. Pennsylvania farmer Cy Bellman reads an account of giant fossilized bones discovered on the Kentucky frontier, and though the recent accounts of Lewis and Clark returning from the west tell no tale of massive monsters roaming the American plains and mountains, the gentle widower assumes some remnant of the megafauna must still live beyond the horizon, and so, compelled by an obsessive sense of wonder, he journeys to find them.

“He paced about every half hour, he took the folded paper from his shirt pocket and smoothed it flat on top of the table and read it again: there were no illustrations, but in his mind they resembled a ruined church, or a shipwreck of stone—the monstrous bones, the prodigious tusks, uncovered where they lay, sunk in the salty Kentucky mud,” Davies writes. Bellman’s heart is set on both his dead wife and the giant creatures he imagines foraging in a fantastic American west, but he leaves his daughter behind with a long-suffering sister, the young girl both pining for her father’s affections and struggling to survive her approaching adolescence in a young nation not amenable to any weakness. West alternates between the accounts of young Bess, and of Cy and his teenage Indian guide as they fruitlessly search for the creatures. As a British author, Davies has an ear for American weirdness that can sometimes elude domestic novelists, and West functions as a parable of lost innocence in the era of bunkum, of medicine shows and tent revivals. Davies writes with the clarity of a fairy tale, but West never reduces its visceral characters to the level of mere allegory.

Sharma Shields tells the tale of a different loss of American innocence, not the terra incognita of Manifest Destiny and all that was projected onto an already occupied west, but what the United States did with that land, and by proxy to all of humanity, well into the twentieth century. Set in the same Pacific Northwest country as Red Clocks, Shields’s novel takes us to that most pertinent Year Zero in human history, 1945, when the United States first unleashed the power of matter, when atomic fission possibly set the world towards the inevitable tragedy of nuclear annihilation. The Cassandra is Shields’s retelling of the ancient Greek myth about a woman condemned to prophesy the future, but never to be believed by those in power.

In Shields’s novel, the role of the oracular sibyl is played by Mildred Groves, a secretary at the Hanford Research Center on Washington’s Columbia River, an instrumental laboratory in the Manhattan Project. Mildred is preternaturally odd, prone to strange trances, visions, and fits, and with a heartbreaking ability to charitably misinterpret her family’s abuse in a benevolent light, as a means of preserving her fractured psyche. One of the most engaging narrators I encountered in my past year of reading, Mildred is simultaneously innocent and terrifying; Shields performs a deft alchemy that makes her protagonist seem both unreliable and omniscient. The Cassandra is at its heart a book about violence in all of its myriad forms—the violence of the natural world, the violence of emotional abuse, sexual violence, and the annihilating nuclear violence to end all violence. In prose that recalls Patmos, Shields intersperses the narrative with Mildred’s terrifying visions, of “dark forests, wild dogs, long-clawed hags, cottages with candy-coated exteriors belying menacing contents: cages, skeletal remains, a hot stove reeking of burnt flesh, cutting boards strewed with bloodied fingers.” With language that owes so much to the vocabulary of nightmare, The Cassandra is commensurate with the bottled violence of potential nuclear holocaust. What makes the novel all the more terrifying is the realization that Mildred’s visions are of an event that has yet to happen.

Taylor Jenkins Reid’s titular protagonist in Daisy Jones & the Six is a radically different kind of oracle from Mildred Groves, but an oracle all the same. Reid’s novel is a brilliant and ridiculously entertaining account of a fictional rock band in the ’70s with shades of Fleetwood Mac, with the beautiful, troubled, brilliant Daisy Jones a stand-in for Stevie Nicks, who has “got an incredible voice that she doesn’t cultivate, never takes a lesson.” Written as if it were the transcript of a VH1 Behind the Music-style documentary, the novel switches off among the perspectives of bandmates, roadies, producers, and family, dramatizing the variability of memory with effects both poignant and funny. All of the rock and roll stations of the cross are visited—the combustive bandmates, the groupies, the addictions, and the inevitable rehab—but the result is anything but clichéd, rather reminding us why we don’t change the station when something from Rumours comes on the classic rock dial.

The overall effect of Daisy Jones & the Six recalls classic rock journalism, such as Legs McNeil and Gillian McCain’s Please Kill Me: The Uncensored Oral History of Punk, and Reid’s obvious encyclopedic knowledge of the singer-songwriter tradition of that decade, combined with her love of musicians like Fleetwood Mac, Carly Simon, Carole King, and so on, creates an uncanny familiarity whereby you almost remember the music of Daisy Jones as if it were real. In a gambit that almost seems like bragging about her incredible talent, Reid includes as an appendix the lyrics to every song on Daisy Jones & the Six’s seminal album. “When you look in the mirror / Take stock of your soul / And when you hear my voice, remember / You ruined me whole.” Just like the white-winged dove, you’d swear you had heard that track before. To reduce Daisy Jones & the Six to being a mere roman à clef about Stevie Nicks would be an error, because what Reid provides is nothing less than history from an alternative universe, a collaborative, polyvocal, multitudinous rock epic—it’s an experimental masterpiece.

Ottessa Moshfegh also explores self-destruction, in My Year of Rest and Relaxation, which reads a little as if Fyodor Dostoevsky’s Notes from the Underground were written by a terminally depressed, beautiful, wealthy Gen-X orphan living in New York at the turn of the millennium. Moshfegh’s unnamed narrator lives in an Upper East Side penthouse, and ostensibly works as an assistant for a gallery owner downtown, but her days are spent endlessly watching the same discount VHS tapes over and over and moldering away in her hermetically sealed apartment. My Year of Rest and Relaxation’s protagonist reads like an Aubrey Plaza character scripted by Albert Camus, and part of the novel’s freshness and misanthropic joy comes from encountering a woman who embodies all of the existential ennui of those masculine characters of twentieth-century modernism.

Rather than a French Algerian smoking in a café or a Russian dissident wondering what the meaning of life is, Moshfegh’s narrator is a Columbia graduate with model good looks who is able to be as much of an antisocial anti-hero as Camus’s Meursault in The Stranger. “I watched movies and ate animal crackers and took trazodone and Ambien and Nembutal until I fell asleep again. I lost track of time in this way. Days passed. Weeks.” Her narrator suffers from an almost terminal case of sleep irregularity, between insomnia and somnolence, culminating in a performance art piece that in the hands of a lesser author could read as parody, but in Moshfegh’s novel becomes a metaphysical exploration. My Year of Rest and Relaxation, by giving us a woman who can behave as badly as a man, has its own type of transgressive power. But to reduce it to a Ghostbusters reboot of a J.G. Ballard novel is to miss that My Year of Rest and Relaxation, not in spite of but because of the jaded affect, is a potent novel about depression and grief.

Keith Gessen, cofounder of the magazine n+1 and brother to the LGBTQ activist, political commentator, and Russian dissident Masha Gessen, explores the chimerical Russia of the last decade in A Terrible Country. The novel belongs to the same tradition as fiction by first-generation Russian immigrants to the United States who arrived right before the fall of the Berlin Wall, such as Gary Shteyngart’s The Russian Debutante’s Handbook or Ellen Litman’s The Last Chicken in America. Gessen’s novel is similar to those precursors in that the nation actually under scrutiny in the title is arguably the United States. A Terrible Country focuses on New York comparative literature graduate student Andrei Kaplan, who has absconded to the Moscow of his youth as dissertation funding begins to dry up, ostensibly to assist his shady oligarch-adjacent brother Dima in the care of their grandmother with dementia.

“My parents and my brother and I left the Soviet Union in 1981,” Andrei says, “I was six and Dima was sixteen, and that made all the difference. I became an American, whereas Dima remained essentially Russian.” The difference between those two cultures, as in Shteyngart’s and Litman’s writing, is the tension of A Terrible Country; the novel reads as a sort of fictional companion piece to journalist Peter Pomerantsev’s chilling Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia. Set during the 2008 financial collapse, Gessen’s novel traces the gloaming period between the dawn of the Soviet Union’s collapse and the current midnight of Vladimir Putin. In A Terrible Country Putin’s regime is not yet exactly a “regime,” the authoritarian tendencies of the former KGB officer still tangibly “Western” if you’re drunk and squinting, but one of the things Gessen does so well is dramatize the myopia of the individual before history. “I pictured myself protesting the Putin regime in the morning, playing hockey in the afternoon, and keeping my grandmother company in the evening,” Andrei says, though of course the reality of history is that it rarely keeps to our neat schedules.

No novel from the past few years provides quite so clear a map of the terrain of national divisions, and of what it means to simply try to live life for yourself and your family in light of those divisions, as Lydia Kiesling’s first novel The Golden State. Kiesling, a former editor of The Millions, has written an engaging, empathetic, and honest exploration of the stresses of motherhood, professional life, family, and regional identity. Much to the benefit of this beautiful novel, The Golden State relegates current events to the role that they actually play in our lives, as a distant vibrational hum, even when those events can and do have profound personal effects on us. New mother Daphne is a low-level administrator for an Islamic studies program at a school that appears very much like UC Berkeley, while her Turkish husband has been denied reentry into the United States after harassment by the Department of Homeland Security. While her husband attempts to disentangle his visa situation (and Daphne wonders how hard he is really trying), she absconds with her daughter Honey from San Francisco to her grandparents’ former home of Altavista, located deep within the dusty, brown interior of the state. The Golden State explores a California not often revealed to outsiders; it’s not the brie and merlot set of the Bay Area, nor the quinoa and avocado bowl folks of L.A., but a different place entirely, accessed through “nearly four hundred miles of road, leading up to the high desert.”

Altavista bears more similarity to Idaho or Nevada than Palo Alto or Malibu, a place beyond the “top of Donner Pass and some kind of geological divide, [where] suddenly the forest are gone and the land is brown and stretching out for miles and miles.” Daphne’s interactions with the locals, specifically a woman named Cindy who is a leader in a quixotic secession movement not dissimilar to right-wing survivalist militias, provide a perspective on national splits more potent than the typical “bubble” discourse favored by the aforementioned elite publications. The Golden State is the most accurate portrayal of the red-state/blue-state dichotomy published since the election of you-know-who, and all without mentioning you-know-who. Kiesling’s portrayal of that split never pretends it isn’t real; there is no rapprochement or understanding with Cindy, but there is an awareness that none of us are as sheltered as the New York Times editorial page pretends. A denizen of San Francisco can be totally aware of what lies 400 miles down the road. What’s even more crucial in Kiesling’s novel is the wisdom that politics is always personal, that more than what appears on 24-hour news it’s expressed in the fear of a wife waiting for her husband’s safe return, or in a mother’s tender love for her daughter.

For reasons not even totally clear to me, I’d always thought that successful local restaurants providing accessible food to a large number of people could be material for a great American tragedy. When I lived in small-town eastern Pennsylvania, there was a regional chain of restaurants, only three or four of them, owned by these Greek brothers. The food was basically Applebee’s redux, but I was obsessed with the chain, not least because I thought there must be so much drama between the siblings: who got to manage which restaurants, the vying for the affection of their immigrant parents, even arguments over the composition of the slick, laminated menus—for so much depends on the jalapeño poppers. Lillian Li basically wrote that novel for me, transposed from the Lehigh Valley to suburban Washington, D.C., with the sports bar replaced by a once high-end Chinese restaurant undergoing increasingly hard times.

Complicated family arrangements are at the heart of Li’s engrossing Number One Chinese Restaurant, a novel which peels back the jade-green curtain on the institution that is the mid-century Chinese-American eatery to provide an epic narrated by a chorus. Manager Jimmy Han, prodigal son of the Beijing Duck House, hopes to close the restaurant down in favor of opening an elegant, hipper location on the Potomac waterfront, but he’s caught between the machinations of his perfectionist, professional brother Johnny, his calculating mother, and the underworld figure “Uncle” Pang, whose investments have kept the restaurant afloat since its founding. The Duck House, to Jimmy’s disdain, is a place of “gaudy, overstuffed décor,” defined by a deep, matte red that “colored everything, from the upholstered chairs to the floral carpet to the Chinese knots hanging off the lantern lighting, their tassels low enough to graze the heads of taller customers.” Rockville, Maryland’s Beijing Duck House is the sort of restaurant omnipresent at one time, the affordable, quasi-sophisticated repository of Yankified Mandarin cuisine, all chop suey and egg foo young, moo goo gai pan, and of course the crispy, greasy, delicious duck which gives the establishment its name. Li interrogates questions of ethnic identity and food, class and food, and family drama and food. What elevates Number One Chinese Restaurant to greatness is that Li never forgets the humanity of these characters, from the long-repressed love of the elderly kitchen staff to Johnny’s vices and hubris.

Patrick deWitt knows that family is complicated in French Exit: A Tragedy of Manners, which bears less similarity to Number One Chinese Restaurant than it does to a novelization of Charles Addams’s New Yorker cartoons, or to a Wes Anderson movie produced by Tim Burton. Author of the under-heralded (though filmed!) post-modern western The Sisters Brothers, deWitt is a master minimalist for whom every comma is cutting, every semicolon a scythe. French Exit initially takes place in a seemingly timeless Upper East Side, all crested jackets and loafers, inhabited by the wealthy widow Frances Price, a “moneyed, striking woman of sixty-five years, easing her hands into black calfskin gloves on the steps of a brownstone,” and her adult son Malcolm, “looking his usual broody and unkempt self,” who become Parisian expats after their wealth evaporates. Joining the Prices is Frances’s cat Small Frank, whom she (correctly) maintains is the reincarnation of her despised husband. Frances would seem to be a role made for Jessica Walter, even as Wikipedia dutifully informs me that Michelle Pfeiffer has been cast in the adaptation being developed by deWitt himself. French Exit is a delicious mint-flavored green-pastel macaron of a novel, with just a hint of sweet arsenic.

A benefit to being a nonfiction essayist reading and reviewing novels is that there is a degree of personal distance that you can affect to avoid the pangs of professional jealousy which sometimes accompany reading great writing, and which any honest scribbler would have to cop to. When I read something as tender as The Golden State, as astute as A Terrible Country, as innovative as My Year of Rest and Relaxation, or as wondrous as Washington Black, I can console my envious conscience with the mantra that “Well, I’m not a novelist.” With K. Chess’s mind-blowing, psychedelic Famous Men Who Never Lived I can’t quite do that, because her narrative conceit is so brilliant, so good, that I can’t help feeling jealous at not having conceived of the story first.

Famous Men Who Never Lived gives an account of Hel and Vikram, two refugees from a parallel universe who, alongside thousands of others, are in exile in our own reality (or at least a version that seems nearly identical) after their world was destroyed, living in a New York City that diverged in the earliest years of the twentieth century. These refugees between universes remembered their “world history… the rumors about forced labor at America Unida’s hidden education camps, about what the Power Brothers in Ceylon had done in the jungles to city-dwelling elites. And she’d remembered the KomSos clearing the shtetls of the Pale from east to west.” As with those dislocated by history in her world, Hel and Vikram are dislocated from the very idea of history itself, where you must “Leave what you own behind.” The result is a novel with not just a clever science fiction conceit, but also one which is a moving meditation on loss and dislocation. Hel comes to believe that the point of divergence involved Ezra Sleight, who died in childhood in our universe but grew to be a popular science fiction author in her and Vikram’s reality, with the latter an expert on his novel The Pyronauts. Chess’s ingenious nesting stories recall Emily St. John Mandel’s similar speculative fiction masterpiece Station Eleven, with Famous Men Who Never Lived giving voice to the dislocations of exile, whether in our world or between our worlds. What Chess accomplishes is nothing less than a demonstration of how literature creates new universes, while expressing that which is consistent for humans regardless of which reality we may be living in.

Ten Ways to Live Forever

1. Before ISIS toppled the minaret of Mosul’s Great Mosque of al-Nuri, or threaded the city’s tombs of Daniel and Jonah with incendiaries, Utnapishtim was somewhere in the desert. He was there before the Americans with their hubristic occupation, in some cave while soldiers in Kevlar patrolled the banks of the Tigris, M1 tanks of the Third Infantry rolling toward Baghdad and helicopters of the 101st Airborne cutting across the skies of Karbala. Utnapishtim survived Saddam’s reign, with his mustard gas, torture chambers, and the invasion of Kuwait; he’d seen men burnt alive on Highway 80 by the Americans; he’d endured the brutal war with Iran, when tanks got stuck in the mud of Dezful and Khorramshahr was turned into a city of blood; he was there when the Free Officers overthrew the Hashemite monarchy.

Utnapishtim lived through the British Mandate of Mesopotamia, when Sir Percy Cox drank G&Ts at the officers’ club; he’d lived when Iraq was a backwater of the Ottomans, and he saw Mamluks, Jalayirids, and Mongols steer their horses across the desert; he witnessed Genghis Khan in Khwarezm, more centaur than man. He snuck unseen into the Baghdad of the Abbasids, city of gardens and astrolabes, where he discussed Hadith with the humane Mu’tazila and parsed Aristotle with Ibn Sina. Prior to the Islamic Golden Age, Utnapishtim was in Ctesiphon when Yazdegerd III fled as the Arabs marched into the Sasanian Empire, the Zoroastrian mages unable to prevent the course of history (true of all of us). He was there when Trajan marched columns of iron-armored Roman legionaries into Parthia, and when Alexander the Great’s successor Seleucus established Seleucia.

He was witness when Cyrus the Great freed the Jews of Babylon, and when Hammurabi’s scribes chiseled the law into stone. Utnapishtim had endured Chaldeans, Babylonians, Assyrians, Akkadians. Our progenitor, the oldest of men, born in Sumer; as old as cuneiform pressed into wet clay, as old as sunbaked cities and the farming of wheat on the Euphrates’s banks, as old as the words themselves. Enki of the stars and An of the sky, Enlil of the wind and Ninhursag of the mountains molded Sumer, and by the banks of Eden birthed humans like Utnapishtim. Our only refugee of that before-time, the only person to survive when the fickle gods conspired to destroy the world by flood shortly after having created it.

He dwelled when Iraq was Uruk, before civilization’s keystone was set, when the firmament was new. Utnapishtim survived leaders and conquerors, presidents and dictators. Breathing before Abu Bakr al-Baghdadi, Muqtada al-Sadr, George W. Bush, Saddam Hussein, and the Ayatollah Khomeini; talking before King George V and Kaiser Wilhelm II; walking before Mehmed II, the Abbasid caliphs, and Genghis Khan; older than the Prophet Muhammad; older even than Yazdegerd III, Alexander the Great, Cyrus the Great, Darius II, Sargon of Akkad, and Ashurbanipal. He witnessed the inundation of ziggurats, the collapse of towers, the immolation of temples. For the thousands of reigns he lived through, the kings innumerable and emperors forgotten, only one had ever sought his counsel. Despite being two-thirds divine, a king who would ultimately die like the rest of us; a fearsome ruler named Gilgamesh.

2. The tale of a righteous man visited by a god who warned him of rising waters—who in response builds an ark, venturing forth when a dove that he’s released confirms that dry land has reemerged—may strike you as a story that you’ve heard before. Norman Cohn parses the purpose of these stories, writing in Noah’s Flood: The Genesis Story in Western Thought that “large areas of what used to be Mesopotamia…were frequently devastated by flood. When torrential rain combined with the melting of the snows… the Tigris and Euphrates could burst their banks.” Cohn explains that in “ancient times this phenomenon gave rise to a powerful tradition: it was believed that there had once been a flood so overwhelming that nothing was ever the same again.” But if Genesis focuses on sin and punishment, the anonymously written Epic of Gilgamesh has no moral, save for a brief on why we must die at all (or at least why most of us must).

Unlike Noah, who even at the antediluvian extreme of 950 years did ultimately die, Utnapishtim was gifted (or cursed) by the gods with immortality. There are other differences from the Bible’s account, for Genesis records nothing of having to fight scorpion-monsters to reach Utnapishtim. Gilgamesh was stricken over the death of his best (and arguably only) friend, Enkidu, the wild man domesticated by the priestess Shamhat with sex and beer (as so many people are). To the ruler of Uruk, Utnapishtim represented something that can’t be purchased with gold, the possibility of Enkidu’s resurrection and Gilgamesh’s immortality. In Stephen Mitchell’s reimagining Gilgamesh: A New English Version, Utnapishtim queries the ruler: “who will assemble / the gods for your sake? Who will convince them / to grant you the eternal life that you seek?”

Utnapishtim tells Gilgamesh, the man who defeated the mighty ogre Humbaba, that immortality is his if he can simply stay awake indefinitely. Despite quasi-divinity, powers temporal and physical, authority and prestige, Gilgamesh can’t defeat slumber. Utnapishtim mocks the king, “Look at this fellow! He wanted to live / forever, but the very moment he sat down, / sleep swirled over him, like a fog.” All of human weakness and desires—our need to eat, our need to shit and piss, our need to fuck—signal that our lot is not that of Utnapishtim or of the gods who created him. Even if you can defeat Humbaba, the Epic of Gilgamesh reminds us, sooner or later you’ll nod off.

Finally, Gilgamesh is informed that the only means of living forever is to acquire a magic plant growing at the bottom of all rivers’ sources, which the ruler promptly finds, only to have the wily serpent (at the start of an auspicious career) snatch the plant away from him. Enkidu’s death has left the king raw and lonely, but Utnapishtim’s example is illusory and dangerous, for it “postpones Gilgamesh’s necessary acceptance until a time when he is more ready for it,” as Mitchell writes. Gilgamesh realizes that immortality is not literal; one does not live forever at the world’s eastern edge, but rather in deeds, memories, and in words. We’re told by less mature voices to rage against the dying of the light, but the earliest story has Gilgamesh confront immortality’s mirage, understanding how “now that I stand / Before you, now that I see who you are, / I can’t fight.”

Ironically, Utnapishtim’s story was forgotten for millennia (even if filtered through other myths). “Though it is one of the earliest explorations of these perennial themes,” writes David Damrosch in The Buried Book: The Loss and Rediscovery of the Great Epic of Gilgamesh, “this haunting poem isn’t a timeless classic.” Hidden just as surely as Utnapishtim in his orchard, the influence of The Epic of Gilgamesh is subliminal in our cultural memory. Preserved on a few broken kiln-burnt tablets strewn about the floor of the Assyrian king Ashurbanipal’s library, The Epic of Gilgamesh wasn’t rediscovered until the 19th century, by British archeologists. Of that epic, Utnapishtim’s discourse on eternity occupies only a few lines of the 11th tablet.

Literary historian Michael Schmidt in Gilgamesh: The Life of a Poem describes the epic as constituting “the first road novel, the first trip to hell, the first Deluge.” So much has come after what that nameless scribe wrote; it predates Homer and Virgil, Dante and John Milton, William Shakespeare and Jane Austen, Anne Bradstreet and Emily Dickinson. Even though ignorant of Gilgamesh, they worked and aspired to the same timelessness, for as Schmidt writes, it “prefigures almost every literary tone and trope and suggests all genres, from dramatic to epic, from lament to lyrics and chronicle, that have followed it.”

The Epic of Gilgamesh reminds us that there have been many floods, many apocalypses, many deaths, and virtually nobody has ever come out the other side alive. To live forever may be a myth, yet for our lack of eternity, even after all these millennia, we are still “Wandering, always eastward, in search / of Utnapishtim, whom the gods made immortal.” 

3. Sex evolved before death. Arguably the former was a prerequisite for the latter. Sexual reproduction, the exchange of genetic material resulting in a new individual, was first practiced among simple prokaryotes—unicellular organisms lacking membrane-bound nuclei—about two billion years ago. Birds do it, bees do it, educated fleas do it, and apparently even prokaryotes do it. Such hobbies introduce beneficial genetic variations that the date-night loneliness of asexual reproduction simply doesn’t allow for. When celibate organisms reproduce asexually, they’re cloning themselves—the individual is the species. If you squish an asexual prokaryote, there are millions more just like it—death is meaningless; it is fundamentally immortal. But once sex exists, the loss of any one thing can be considered the irretrievable death of something completely unique, no matter how simple it may be. As the old joke at my alma mater has it, “Sex kills. If you want to live forever, go to Carnegie Mellon.”

Immunologist William R. Clark explains in Sex and the Origins of Death that “Obligatory death as a result of senescence—natural aging—may not have come into existence for more than a billion years after life…programmed death seems to have arisen at about the same time that cells began experimenting [with] sex…It may be the ultimate loss of innocence.” From the first orgasm came the first death gasp—at least proverbially. Human culture has subsequently been one long reaction to that reality, debating whether it was a fall or a Felix culpa.

That sex precedes death isn’t just a biological fact, but it has the gloss of theological truth about it as well. Such is the chronology as implied by Genesis; though there was debate as to whether Adam and Eve did have sex in Eden, there seemed little doubt that they could have (though St. Augustine said it could only be facilitated through pure rationality, and not fallen passion). That all changes once Utnapishtim’s wily serpent makes a reappearance, and God tells Eve that He will “greatly multiply thy sorrow and thy conception; in sorrow thou shalt bring forth children.” Only three verses later, and God tells Adam that “dust thou art, and unto dust shalt thou return.” Sent beyond Eden’s walls to live out their finite days in the desert, their only consolations are sex and death.

Eros and Thanatos endure in the human psyche. Since the 16th century, the French have referred to orgasm as la petite mort, the “little death.” When that phrase first appeared, Europe was in the midst of syphilitic panic. A disease of replacement: silver noses pressed into the viscous putty of a rancid face, and the tics and mutterings of those who’ve gone insane. Anthropologist Jared Diamond writes in Guns, Germs, and Steel: The Fates of Human Societies that when syphilis “was first definitely recorded…in 1495, its pustules often covered the body from the head to the knees, caused flesh to fall from people’s faces, and led to death within a few months.” If scolds were looking for the connection between sex and death, syphilis was a ready-made villain. Always a moralizing faith, Christianity was made a bit more so with the arrival of syphilis; in 15th-century Florence the fanatical Dominican Girolamo Savonarola taught that it was God’s punishment for decadent humanism; a generation later and the Protestant Martin Luther would concur with his Catholic forebear.

In Naples they called it the “French disease,” and in Paris it was an Italian one, but epidemiologists have traced it to the Americas, noting its arrival shortly after Christopher Columbus’s return from the Caribbean. Syphilis was an export alongside potatoes and tomatoes; an unwitting revenge for the smallpox introduced into the Western Hemisphere. If modernity signals its own fall, then syphilis was perhaps an indiscriminate punishment, for as historian Roy Porter writes in The Greatest Benefit to Mankind: A Medical History of Humanity, syphilis “should be regarded as typical of the new plagues of an age of conquest and turbulence, one spread by international warfare, rising population density…[and] the migrations of soldiers and traders.” Sacrificed immortality was the price that living creatures paid for the possibility of connection, for if eternity was once a biological process, then its opposite was as well.

4. Ponce de Leon looked for immortality in Florida, it’s true. Somewhere near where tourists stroll eroding Miami Beach, soccer moms pick their children up from Broward County strip malls, or hardy adventurers visit Pensacola gator-parks, the conquistador obsessively searched for the Fountain of Youth. According to (apocryphal) legend, de Leon was fixated on stories told by Native Americans of a mythic spring whose curative waters would restore men to youth—indefinitely. Immortality by water fountain, if you will. “Ponce de Leon went down in history as a wishful graybeard seeking eternal youth, like so many Floridians today,” quips Tony Horwitz in A Voyage Long and Strange: On the Trail of Vikings, Conquistadors, Lost Colonists, and Other Adventures in Early America.

Arawak and Taino spoke of a spring named “Bimini,” located everywhere from the Bahamas to the Yucatan. De Leon looked for it in the future Sunshine State, though you’ll note that he did not find it. If you’ve ever visited the Old Town of San Juan, Puerto Rico with its charming, crooked stone streets that meander by colonial buildings painted in pinks and blues, you’ll find that far from achieving immortality, de Leon is buried inside of the white-walled Cathedral of San Juan Bautista. In 1521, somewhere between Florida’s hidden Fountain of Youth and immortality, de Leon found himself in the way of a manchineel poisoned arrow wielded by a Calusa warrior.  

When de Leon envisioned a bubbling creek that could restore him to lost youth (probably better to have just enjoyed the original more), by what mechanism did he see such a thing as working? Chemical or alchemical, natural or supernatural? Horwitz writes that a “Spanish historian later claimed that Ponce de Leon [searched]…as a cure for his impotence,” which gives us an emotional register for the conquistador’s obsession, if not the pharmaceutical specifics. Since no such fountain actually exists, the question of whether it’s magic or science is easier to answer—it’s neither. Or, perhaps, better to think of it as a magical belief about science; the idea that the fountain’s waters have mineralogical or medical properties is fundamentally just a mask for our supernatural inclinations, the fountain not so different from Gilgamesh’s restorative plant.

As early as the fifth century before the Common Era, Herodotus declaimed in The Histories that the mythic Macrobians living on the Horn of Africa could live as long as 120 years with the assistance of a particularly pure spring. He writes of a “fountain, wherein when they had washed, they found their flesh all glossy and sleek, as if they had bathed in oil—and a scent came from the spring like that of violets…their constant use of the water from it [is] what makes them so long-lived” (gerontologists agree that 120 years seems to be the upper-limit natural expiration date for humans, if free of disease and accident).

Alexander the Great and the imaginary Christian king Prester John, whose realm was supposedly somewhere deep in either pagan Asia or Africa, are associated with the myth. The 14th-century faux-exploration narrative The Travels of Sir John Mandeville, a post-modern picaresque or autofiction written before postmodernism and autofiction were things, claims that a similar spring exists in India, “a beautiful well, whose water has a sweet taste and smell, as if of different kinds of spices.” Some of the author’s accounts of Egypt and China conform to what we know about those places during the Middle Ages, and yet Mandeville’s claims about whole tribes that have Cynocephaly (they’re dog-headed) or of groups of Epiphagi (people with heads in their chests) strain credulity. How are we to trust such a source about the Fountain of Youth?

What these examples should demonstrate is that de Leon had a ready-made script. Not a dissimilar process to how Spanish colonists saw the ancient Greek legend of the Amazons in South America, or the Portuguese fable of the Seven Cities of Cibola in the red-baked American southwest. Theirs was never a process of discovery, but of the endless use and reuse of their own dusty legends. Who knows what Bimini really was? The conquistadors had their imaginings from Herodotus and Mandeville, and they were going to impose such a concept onto America. Immortality, regardless of whether it’s to be found in India, the Horn of Africa, or south Florida, is an obsession. Yet what the obsessive obsesses over tells us more about the obsessed than the obsessee.

5. Virginia Woolf didn’t wish to acquire immortality, far from it. It would be a disservice to Woolf, not to mention the millions of people who suffer from depression, to reduce her 1941 suicide to allegory or symbol. When, weighted down with rocks, she walked into the Ouse near her East Sussex home, Woolf was not providing us with a gloss on life and death. “I am doing what seems the best thing to do,” Woolf writes in the note left for her husband, “I don’t think two people could have been happier till this terrible disease came.” That, in its straightforwardness, says the most important thing about Woolf’s suicide—that it was the result of her disease. Depression is not allegory, it is a disease, and oftentimes it kills people. Woolf did, however, supply her thoughts on immortality some 13 years earlier, in one of the greatest meditations ever written on the subject, her novel Orlando: A Biography.

Woolf depicts an English courtier born during the reign of the Tudors, who, despite the limitations of having a finite body, is somehow able to will himself (and then herself) into immortality. Orlando may be a subject of Queen Elizabeth I, but by the end of the novel she’s able to fly over her manor house in an airplane. In the interim, the character experiences the Frost Faire of 1608 (when the Thames froze over and merchants peddled candied apples and roasted walnuts on its surface), an affair with a beautiful Russian noblewoman, placement in an English embassy in Constantinople, adoption by a group of Roma, and marriage to an eccentric gender nonconforming sea captain with the unimaginably great name of Marmaduke Bonthrop Shelmerdine. And then, around the age of 30, Orlando transforms from a man into a woman as she sleeps one night.

No magic plants or springs, rather a sense that Orlando’s personality is so overabundant that it can’t be constrained through either sexuality or time. “Orlando had become a woman,” Woolf writes simply, “there’s no denying it.” A masterpiece of both temporal and gender ambiguity, in Orlando the author doesn’t desire immortality for herself, but she imagines it for her character, what her biographer Hermione Lee describes in Virginia Woolf as its quality of being a “masterpiece of playful subterfuge.” Unlike Gilgamesh with his overweening bloodlust, or de Leon with his immature obsession, Woolf envisions immortality as a radical, subversive, creative state; as Hill puts it, a “magnificent, surrealist erection.” For Woolf, immortality is better understood as an aesthetic act, living one’s life so fully, with such pure, unmitigated, wondrous agency that the contours of normal years and decades simply must expand to fit our enormity. With relativistic insight Woolf observes that “An hour, once it lodges in the queer element of the human spirit, may be stretched to fifty or a hundred times its clock length; on the other hand, an hour may be accurately represented on the timepiece of the mind by one second,” a phenomenon that she cheekily claims “is less known than it should be and deserves fuller investigation.”

Orlando’s great negative capability is that Woolf describes depression without the novel losing sight of a certain wondrous enchantment. She writes that “At the age of thirty…this young nobleman had not only had every experience that life has to offer, but had seen the worthlessness of them all. Love and ambition, women and poets were all equally vain.” Such emptiness and lack of interest—the sheer fact of being so tired—is the medium of depression. But the narrative itself is what demonstrates the illusoriness of such emotions, even if in the moment they feel to us to be inviolate. Orlando thinks that they have experienced everything, but have they been to Constantinople? Have they flown over Sussex in a plane? Not yet—and therein lies the rub. Life has a charged abundance, even if brain chemistry and circumstance sometimes deny us that.

Orlando was a roman à clef about the great love of Woolf’s life—Vita Sackville-West. The character shares Sackville-West’s androgynous beauty, her poetic brilliance, and her aristocratic bearing. Most of all, Orlando and Sackville-West are united in having lived their lives with such jouissance, with such unbridled life, that death itself seems to pause indefinitely before taking them. An existence where we can observe in the Thames “frozen to a depth of some twenty fathoms, a wrecked wherry boat… lying on the bed of the river where it had sunk last autumn, overladen with apples,” the frozen corpse of the saleswoman visible in her blue-lipped magnificence at the bottom. What a strange, terrible, and beautiful thing this life is, that if we were to fully inhabit every single blessed second of it, we’d be as eternal as it is ever possible to be, within the very universe of a moment. How fortunate we are.

6. On the road from Santiago de Compostela in 1378, the French alchemist Nicholas Flamel and his wife Perenelle met eternity. According to almost certainly fabricated accounts written in the 17th century, the wealthy Parisian bookseller’s study of magic helped him to (among other things) derive the philosopher’s stone, and thus generate the so-called “Elixir of Life,” which granted him immortality. Flamel had journeyed to Spain, that liminal place between east and west, Europe and Africa, Christian, Jewish, and Islamic, under the assumption that some mage could interpret a manuscript he had purchased in Paris. Flamel wasn’t so fortunate in finding assistance while actually in Spain, but on the road home a Jewish converso recognized the occult text for what it was—an original copy of the powerful grimoire The Book of Abramelin.

As with all such guides, The Book of Abramelin makes big promises. Within there are “the actual rules to acquire this Divine and Sacred Magic…therein find certain examples and other matters which be none the less useful and profitable unto thee.” It parsed the intricacies of summoning your guardian angel, how to bind demons, and how to walk underwater (which as impressive as it is, doesn’t really match the first two). There are a lot of magic squares scattered throughout. And, of course, there is the recipe for the philosopher’s stone. As with the stories about Flamel himself, The Book of Abramelin seems to be another 17th-century invention. The author is supposedly the German Kabbalist Abraham of Worms, who traveled to Egypt to confer with the reputed mystical master Abramelin himself. “And having written this with mine own hand,” writes the author (since we’re unsure of whose hand penned that line), “I have placed it within this casket, and locked it up, as a most precious treasure; in order that when thou hast arrived at a proper age thou mayest be able to admire, to consider, and to enjoy the marvels.”

No version of the text has been found that predates 1608, and the earliest copies are in German rather than Hebrew; all of which seems to indicate that just as with Flamel, the reputation of The Book of Abramelin is a fantasy borrowing authority from medieval exoticism. A fashionable Hebraism developed during the Italian Renaissance, and quickly spread throughout Europe, so that references to the Kabbalah could impart a degree of authenticity for Christian occultists. Gershom Scholem writes in Alchemy and Kabbalah that the “name of this arcane discipline became a popular catchword in Renaissance and Baroque theosophical and occult circles, having been declared and revered as the guardian of the oldest and highest mystical wisdom of mankind by its first Christian mediators.” All the greatest stories about Kabbalah may be set in the Middle Ages, but for the Christian occultists who appropriated it, the subject was very much a Renaissance affair.

For alchemists and occultists like Paracelsus, Johann Reuchlin, or John Dee, Flamel was an instructive example. Knowledge was supposed to be the road to eternity, as surely as a Parisian scribe could return from Compostela, and perhaps the bookseller was somewhere wandering like the converso who gave him the secret to never dying. Could Flamel be glimpsed, in the court of the occult emperor Rudolf II in red-roofed Prague, discoursing on astronomy with Johannes Kepler and Kabbalah with Rabbi Judah Loew? Would he be found on those curving stones of dark Cambridge with Thomas Vaughan, or among the sun-dappled courtyards of Florence with Giordano Bruno?

In reality, Flamel was moldering underneath the nave of the Church of Saint-Jacques-de-la-Boucherie in the fourth arrondissement. His tombstone now sits in the Musée de Cluny; the year of Flamel’s expiration was 1418. By all actual accounts, he was a loving husband and a successful merchant. Flamel’s fate, in all of its beauty, was the same as everybody’s. The psychoanalyst Marie-Louise von Franz gives a charitable reading of the Renaissance theorists of immortality, explaining in Alchemy: An Introduction to the Symbolism and Psychology that the desire for this knowledge “was actually the search for an incorruptible essence in man which would survive death, an essential part of the human being which could be preserved.” An accurate definition of poetry.

7. Enoch’s story is recounted across only four lines in Genesis. The King James Version of the Bible sets the cryptic tale of a man who ascended bodily to heaven, presumably having never died and still living immortally somewhere in the astral realm, in just 53 words. Father of Methuselah, who was himself so remarkably long-lived that his name has long been conflated with extreme seniority, Enoch simply never died. We’re told at Genesis 5:24 that “Enoch walked with God: and he was not; for God took him.” Such is the entire explanation of what happened to this man of the seventh generation. What an odd bit of poetry this is. For. God. Took. Him.

Even if the Bible is tight-lipped about Enoch, the copious fan-fic about him (which scholars call “apocrypha”) lacked a similar reticence. From the mystics of antiquity to the occultists of today, Enoch achieved not just immortality but actual apotheosis, seated next to a God who liked him so much that he transformed the mortal into a “lesser Yahweh.” Such is the description of his career change from a pseudepigraphical rabbinic text called 3 Enoch, which is dated to the fifth century of the Common Era. Scholem provides gloss on this unusual book in his Major Trends in Jewish Mysticism, writing that “[his] flesh was turned to flame, his veins to fire, his eye-lashes to flashes of lightning, his eye-balls to flaming torches, and who God has placed on a throne next to the throne of glory, received after this heavenly transformation the name Metatron.” There is a venerable occult tradition that holds that Enoch became immortal, was elevated above even the archangels, became the very voice of the Lord, and was given a name that sounds like that of a Transformer.

Enochian literature can be traced back to three apocryphal texts from the first centuries of the Common Era that all elaborated on the terse passage from Genesis. 3 Enoch (also amazingly called The Revelation of Metatron) is joined by the Book of Enoch, written in Ge’ez and still held canonical by the Orthodox Tewahedo Church in Ethiopia, and the Second Book of Enoch, which only survives in Old Bulgarian (I’m not making that up). The last book’s alternate title is actually even better than The Revelation of Metatron: it is often referred to as The Book of Secrets. In translator Willis Barnstone’s version of The Book of Secrets, included in his incredible anthology The Other Bible, Enoch speaks in the first person, telling us that “I know all things and have written them into books concerning the heavens and their end, their plentitude, their armies, and their marching. I have measured and described the stars, their great and countless multitude. What man has seen their revolutions and entrances?”

Metatron was the amanuensis of God’s thoughts, the librarian of reality who noted all that had, would, or could be done. A scribe as surely as Flamel was—Metatron was a writer. Enoch was the first figure in scripture to ascend to heaven, though he was not the last. Midrash actually records eight people as having achieved immortality this way, including the prophet Elijah, who is taken up entirely in a whirlwind; Serach, who is blessed by her grandfather Jacob with “May you live forever and never die;” and Ebed-Melech the Ethiopian, who saved the prophet Jeremiah’s life during the Siege of Jerusalem. Catholics believe that the Virgin Mary ascended bodily, though after her death on Earth (a minority claim she was taken while still alive). Christianity more generally teaches that Christ rose to heaven, though he also died first, of course; while Muslims teach that both Muhammad and Jesus ascended.

Immortality, these accounts remind us, is a miracle. Perhaps no more so than with Enoch, for those other examples concern the ascension of prophets and the messiah, but the lowly man of the seventh generation was just some guy. A quiet beauty to the account, for why did Enoch walk with God? What about Enoch was so agreeable to the Lord that He would take him? What cracked beauty is there in a human gifted the ability to see the universe in its resplendence, so that as concerns the stars, Enoch speaks in a voice of awe from the Book of Secrets that “Not even the angels see their number, yet I have recorded all their names.”

8. While writing theater reviews for the Dublin Evening Mail, a 28-year-old graduate of Trinity College Dublin named Abraham Stoker would become the unlikely author of a gushing fan letter sent on Valentine’s Day 1876 to an American poet with an address in the distinctly unglamorous locale of Camden, New Jersey. That wasn’t Stoker’s first attempt at writing to Walt Whitman; he’d penned an effusive, borderline-erotic missive some four years earlier but kept the epistle in his desk out of embarrassment, before finally sending the original with a new note of introduction.

“Do not think me cheeky for writing this,” Stoker, who now went by “Bram,” wrote in the new letter, but “I believe you will like it,” he said regarding his original message. Whitman is a “man who can write, as you have written, the most candid words that ever fell from the lips of a mortal man.” For Stoker, only 9 when Leaves of Grass was first printed (and as of then completely unknown in Britain or Ireland), Whitman had “shaken off the shackles and your wings are free.” With a tragic pathos still clear more than a century later, Stoker confesses (which has afforded no shortage of literary gossip) that “I have the shackles on my shoulders still—but I have no wings.”

As obsequious as Renfield, Stoker tells Whitman that “You are a true man, and I would like to be one myself, and so I would be towards you as a brother and as a pupil to his master.” Perhaps he was, as there is something almost vampiric in Whitman’s 1891 revision of his poem “Trickle Drops,” done a year before his death and six before his protégé would publish his most famous novel. “Trickle drops! my blue veins leaving!…drip bleeding drops, / From wounds made to free you when you were prison’d / From my face, from my forehead and lips, / my breast…Stain every page, stain every song I sing, every word I say, bloody drops,” Whitman enthuses. Stoker’s titular character in Dracula concurs with the American bard: “The blood is the life!”

Strange to think of the consummate rugged individualist with his broad shoulders and his Old Testament beard as influencing Stoker, but as an unconventional bohemian, Whitman may have shared more with Dracula than has been supposed. Biographer Barbara Belford notes in Bram Stoker and the Man Who Was Dracula that “the vampire at times resembles Whitman. Each has long white hair, a heavy moustache, great height and strength, and a leonine bearing.” Perhaps less superficially, “Whitman’s poetry celebrates the voluptuousness of death and the deathlike quality of love.” Whitman, with the gleam of the vampire in his eyes, promises in his preface to Leaves of Grass that the “greatest poet…drags the dead out of their coffins and stands them again on their feet.”

Leaves of Grass is a work that enthuses about immortality, albeit more in the transcendentalist sense than in the vampiric one. “The smallest sprout shows there is really no death,” Whitman writes, “And if there was it led forward life, and does not wait at the end to arrest it…All goes onward and outward, nothing collapses, / And to die is different from what anyone supposed.” Whitman fully expected a metaphysical immortality whereby his very atoms mingle into the streams and stones, the rocks and the rambles. Admittedly a different type of immortality than that surmised by Stoker, yet he borrowed from Whitman the poet’s charged, fully realized, erotic, bohemian persona. The Irishman noted that Whitman was the “quintessential male,” and it’s hard not to see some of that projection onto Dracula.

The immediate historical influence for Dracula was the 15th-century Wallachian prince Vlad Tepes, more popularly known as the “Impaler” after his favored pastime. Eros and Thanatos again, a bit of sex and death in that nickname. Radu Florescu and Raymond T. McNally note in their classic In Search of Dracula that the “ruler notorious for mass impalements of his enemies…was in fact called Dracula in the fifteenth century, and we found that he even signed his name that way.” From Whitman, Stoker took the transcendent nature of immortality, and from Vlad the blessed violence, bound together in the transgressive, bohemian personality of the aesthete. Literary scholars Joanna Levin and Edward Whitley write in Whitman Among the Bohemians that from the “bohemians to contemporary hipsters, Whitman still commands center stage, providing an ever-magnetic focal point for countercultural self-fashionings,” something that any goth can tell you is true of Dracula as well. As a reader, Stoker is able to comprehend that Whitman’s celebration of immortality must by necessity also have its drawbacks, that the vampiric can’t help but pulse through any conception of life beyond the grave.

With the smallest sprout in mind, Stoker writes that it’s a “strange world, a sad world, a world full of miseries, and woes, and troubles.” Yet we can “all dance to the tune…Bleeding hearts, and dry bones of the churchyard, and tears that burns as they fall—all dance together to the music.” Immortality kindled in the space of human connection, our lives able to exist indefinitely through others. Dracula does this literally, sucking upon the blood of innocents, but we ideally all do it when we ingest the words of others, and respond in kind. Whitman wrote back to Stoker. “I am up and dress’d, and get out every day a little, live here quite lonesome, but hearty, and good spirits.” He concluded the letter with, “Write to me again.”

9. Many lines are on the CV of the biomedical gerontologist Aubrey de Grey: graduate of Trinity College Cambridge with a Ph.D. awarded for his dissertation The Mitochondrial Free Radical Theory of Aging; Fellow of the Gerontological Association of America, Institute for Ethics and Emerging Technology, and the American Aging Institute; adjunct professor at the Moscow Institute of Physics and Technology, and most famously the chief science officer at the California-based Strategies for Engineered Negligible Senescence. Added to that, under “Skills,” could be “Still Alive.” Don’t knock it as an entry; the vast majority of people who have ever lived can no longer claim the same. De Grey, whose name is almost ridiculously on the nose, would argue that “Still Alive” could be easily translated into “(Effectively) Immortal,” for the researcher claims that death is a terminal illness that will one day be preventable, and that any dismissiveness toward that claim is a case of sublimated religious nostalgia.

He looks the part of an immortal, more prophet than scientist. With a long, tangled, greying auburn beard that is positively druidic, de Grey appears as if he were Merlin or Galahad, some Arthurian immortal. If anything, that epic beard calls to mind those who’ve joined us already—the good, grey-bearded, ruddy-complexioned poet Whitman; de Leon with his face burnt from the Florida sun and unshorn hair poking out from his metal cap; Enoch’s cascading white mane (or so one imagines); and Utnapishtim’s curled black beard hanging in plaits from his gaunt, severe face.

De Grey has an advantage over all of these men, and that’s that he is still alive (or even exists in the first place). That may, however, be his ultimate disadvantage, for unreality has a type of immortality that biology can’t approach. Such unreality is of no concern to de Grey; writing alongside Michael Rae in Ending Aging: The Rejuvenation Breakthroughs that Could Reverse Human Aging in Our Lifetime, he argues that, “inhibited by the deeply ingrained belief that aging was ‘natural’ and ‘inevitable,’ biogerontologists had set themselves apart from the rest of the biomedical community by allowing themselves to be overawed by the complexity of the phenomenon that they were observing.” Not without some justification, de Grey argues that aging and death are biological problems and thus have biological solutions.

Utnapishtim had his magic plant and de Leon his spring of rejuvenation, but for de Grey immortality is an issue of his “own idea for eliminating intracellular garbage like lipofuscin [combined with]…making mitochondrial mutations harmless…for addressing glycation, amyloid accumulation, cell loss, senescent cells and cancer.” When it comes to the hodgepodge of techno-utopians who fall under the broad term of “transhumanism,” de Grey is positively a traditionalist in that he’s still focused on these meat bags filled with blood, piss, shit, and phlegm. More radical transhumanists have gone digital, arguing that consciousness could be downloaded to computers, the eternal soul an issue of making sure that your files are backed up.

Engineer Ray Kurzweil is one such evangelist for the coming of robot-Jesus, when Artificial Intelligence will be able to assist in the downloading of your mind, and the resurrection of those who’ve already passed before us (through purely material, scientific, technological means of course). He writes in The Singularity Is Near: When Humans Transcend Biology that when that eschaton arrives (always in just a few decades), it will “allow us to transcend these limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.” Apparently, such a project is easier than halting climate change, or at least the hyper-libertarian funders of such transhumanist schemes, from Elon Musk to Peter Thiel, would have you believe so. The desire for immortality is a deeply human one, but with the irony that its achievement would serve to eliminate the human entirely. Ask not for whom the computer chimes; simply upload your soul to the cloud.

10. About two weeks before this writing, Speaker of the House Nancy Pelosi announced that the legislature was “moving forward with an official impeachment inquiry” of Donald J. Trump. Pelosi’s announcement was broadcast on all major networks, on PBS and MSNBC, CNN and (even) FOX. In a vacuum, electromagnetic radiation travels at 186,000 miles per second; Albert Einstein’s theory of special relativity tells us that nothing may go faster. That means that this happy bit of news can be heard as far as 225,327,187,473.16 miles away, and counting, though unfortunately what’s in that space is mostly dust and rock. The closest star system to us is Alpha Centauri, which is a positively minuscule 4.37 light years away, meaning that for any lucky extraterrestrials there Barack Obama is still president.
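For the curious, that enormous figure is just the speed of light multiplied by the elapsed time. A minimal back-of-the-envelope sketch in Python, assuming a flat fourteen-day window and the same rounded 186,000-miles-per-second figure used above, lands in the same neighborhood:

    # Back-of-the-envelope check: distance = speed of light x elapsed time.
    # Assumes a flat 14-day window and the rounded figure of 186,000 miles per second.
    speed_of_light_miles_per_second = 186_000
    seconds_per_day = 86_400
    days_elapsed = 14  # "about two weeks"

    distance_miles = speed_of_light_miles_per_second * seconds_per_day * days_elapsed
    print(f"{distance_miles:,} miles")  # roughly 225,000,000,000 miles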

In EZ Aquarii, they just heard President Obama’s acceptance address in Grant Park; any planets near Procyon will have just been informed of the 2008 financial collapse, and at LP 944-020 they’re learning of the invasion of Iraq. At MU Arae they’ve discovered that humans made the puny jump to the Moon (as well as listening to Abbey Road for the first time), HR 4864 just heard Walter Cronkite deliver the sad news about President John F. Kennedy’s assassination, and Zeta Virginis is now aware that the Second World War is over. In just a little less than a decade, assuming that such weak electromagnetic waves haven’t been absorbed by the dust and rock that reigns supreme in interstellar space, Guglielmo Marconi’s first transatlantic radio broadcast of the letter “s” repeatedly tapped out in Morse code will be arriving at K2-18b, a massive “super-earth” exoplanet some 120 light years away.
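The same arithmetic runs in reverse: subtract a star’s distance in light years from the present year and you get the vintage of the news only now arriving there. A rough sketch, treating the distances as the approximations they are:

    # Rough sketch: which year's broadcasts are just now arriving at a star
    # sitting `light_years` away, as measured from a given present year.
    # The distances are approximate illustrations, not precise catalog values.
    def news_year(light_years: float, present_year: int = 2019) -> int:
        return round(present_year - light_years)

    print(news_year(4.37))  # Alpha Centauri: mid-2010s broadcasts
    print(news_year(50))    # Mu Arae: roughly 1969, the Moon landing and Abbey Road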

Earth is surrounded by an electromagnetic halo, our missives in radio and light that grow ever weaker with distance, but which send our thoughts ever further into interstellar space with every passing year. Music, entertainment, news, communication, all of it sent out like so many dandelion spores into the reaches of the black cosmos. The continual thrum of that pulsating meaning—what Whitman could have described as “Out of the cradle endlessly rocking, / Out of the mocking-bird’s throat, the musical shuttle”—a record of our having been here that can never be erased, though it’s unclear whether there is anyone out there to listen. “O rising stars!” Whitman wrote, “Perhaps… [I] will rise with some of you,” an offering made in frequencies between 88 and 108 MHz. There is an immortality, disembodied and ethereal, that does turn out to be in the heavens—just in a form that may have been difficult for Enoch to imagine.

A harder chunk of that immortality exists out there as well: the Golden Record included on both the Voyager 1 and Voyager 2 spacecraft launched by NASA, which, having passed into interstellar space beyond our solar system in 2012 and 2018 respectively, are the furthest things that have ever been touched by human hands. Conceived of by the astrophysicist Carl Sagan, the Golden Record is a phonographic LP encoded with both images and a little under six and a half hours of sounds, meant to express our sheer enormity. For any extraterrestrials that should happen to find the record—Voyager 1 is about 40,000 years out from Gliese 445—Sagan and his committee’s record may serve as the only tangible example of our eternity, our only vehicle for immortality. In being able to select the contents of such a canon, Sagan is arguably the most influential human to ever live.

Any future listeners will be able to hear pianist Glenn Gould’s transcendent interpretation of Johann Sebastian Bach’s mathematically perfect The Well-Tempered Clavier, the mournful cry of blues singer Blind Willie Johnson’s “Dark Was the Night, Cold Was the Ground,” the Bavarian State Orchestra playing Wolfgang Amadeus Mozart’s The Magic Flute, and Chuck Berry absolutely shredding it on “Johnny B. Goode.” Sagan reminisces in Pale Blue Dot: A Vision of the Human Future in Space that “any alien ship that finds it will have another standard by which to judge us.” Astronomer Jim Bell writes in The Interstellar Age: The Story of the NASA Men and Women Who Flew the Forty-Year Voyager Mission that “If the messages aboard the Voyagers ended up being the last surviving artifacts of our world, they would signify the brighter side of human nature…[we] wanted to send out a sign of our hopes, not our regrets.” Unless Voyager 1 or 2 should slam into some random star or fall into a hidden black hole, unless some bit of flotsam should smash it up or some wayward creature should use it for target practice, both probes will continue unimpeded for a very long time. Space being mostly empty, their lifespans will be such that they’re effectively immortal.

Fifty thousand years from now, after climate change renders us extinct, the interglacial period will end and a new ice age will descend on Earth. In two million years, the coral reefs of the world will have had time to recover from ocean acidification. Sixty million years from now, and the Canadian Rockies will have eroded away. Geologists predict that all of the Earth’s continents will coalesce into a supercontinent 250 million years from now. Five hundred million years in the future they’ll have separated again. A little more than a billion years from now, and stellar fluctuations will increase temperatures so that the oceans will be boiled away. In 1.6 billion years the last of our friends the prokaryotes will most likely be extinct. In 7.59 billion years, the sun will reach its red giant phase, and what remains of the Earth will most likely fall into our star. Through all of that, slowly moving along in blackness, will be the Golden Record. “Better was it to go unknown and leave behind you an arch, a potting shed, a wall where peaches ripen, than to burn like a meteor and leave no dust,” Woolf wrote. Voyager, however, leaves dust, no matter how scant.

Our solar system will be dead, but somewhere you’ll still be able to hear Ludwig van Beethoven’s Symphony No. 5 in C minor. Sagan’s spacecraft is as if Ashurbanipal’s library had been buried in the desert of space. As these things have moved us, perhaps somehow, someway they will move minds that have yet to exist. In a lonely universe, the only immortality is in each other, whoever we may be. Included within the Golden Record is a series of sentences in various languages, including Akkadian. Only 19 seconds into the record, and for billions of years listeners may hear a human speaking in the language of Utnapishtim, delivering the benediction “May all be very well.”


The Greeks Aren’t Done with Us: Simon Critchley on Tragedy

We know that ghosts cannot speak until they have drunk blood; and the spirits which we evoke demand the blood of our hearts.—Ulrich von Wilamowitz-Moellendorff, Greek Historical Writing, and Apollo (1908)

Thirteen years ago, when I lived briefly in Glasgow, I made it a habit to regularly attend the theater. An unheralded cultural mecca in its own right, overshadowed by charming, medieval Edinburgh to the east, the post-industrial Scottish metropolis never left me lacking in good drama. Also, they let you drink beer during performances. Chief among those plays was a production of Sophocles’s Antigone, the final part of his tragic Theban Cycle, and one of the most theorized and staged of dramas from that Athenian golden age four centuries before the Common Era, now presented in the repurposed 16th-century Tron Church. Director David Levin took the Attic Greek of Sophocles and translated it into the guttural brogue of Lowland Scots, and in a strategy now deployed almost universally for any production of a play older than a century, the chitons of the ancient world were replaced with business suits, and the decrees of Creon were presented on television screens, as the action was reimagined not in 441 BCE but in 2007.

Enough to remind me of that headline from The Onion which snarked: “Unconventional Director Sets Shakespeare Play in Time, Place that Shakespeare Intended.” The satirical newspaper implicitly mocks adaptations like Richard Loncraine’s Richard III which imagined the titular character (devilishly performed by Ian McKellen) as a sort of Oswald Mosley-like fascist, and Derek Jarman’s masterful version of Christopher Marlowe’s Edward II, which makes a play about the Plantagenet line of succession into a parable about gay rights and the Act Up movement. By contrast, The Onion quips that its imagined “unconventional” staging of The Merchant of Venice is one in which “Swords will replace guns, ducats will be used instead of the American dollar or Japanese yen, and costumes, such as…[the] customary pinstripe suit, general’s uniform, or nudity, will be replaced by garb of the kind worn” in the Renaissance. The dramaturgical perspective behind Levin’s Antigone was definitely what the article parodied; there was nary a contorted dramatic mask to be found, no Greek chorus chanting in dithyrambs, and, as I recall, lots of video projection. The Onion aside, British philosopher Simon Critchley would see no problem with Levin’s artistic decisions, writing in his new book Tragedy, the Greeks, and Us that “each generation has an obligation to reinvent the classics. The ancients need our blood to revise and live among us. By definition, such an act of donation constructs the ancients in our image.”

Antigone, coming from as foreign a culture as it does, still holds our attention for some reason. The story of the titular character—punished by her uncle Creon for daring to defy his command that her brother Polynices’s corpse be left to fester as carrion for the buzzards and worms in the field where he died because he had raised arms against Thebes—would seem to have little to do with Tony Blair’s United Kingdom. When a Glaswegian audience hears Sophocles’s words, however, that “I have nothing but contempt for the kind of governor who is afraid, for whatever reason, to follow the course that he knows is best for the State; and as for the man who sets private friendship above the public welfare—I have no use for him either,” a bit more resonance may be heard. Critchley argues that at the core of Greek tragedy is a sublime ambivalence, an engagement with contradiction that classical philosophy can’t abide; as distant as Antigone’s origins may be, its exploration of the conflict between the individual and the state, terrorism and liberation, surveillance and freedom seemed very of the millennium’s first decade. Creon’s countenancing of the unthinkable punishment of his niece, to be bricked up behind a wall, was delivered in front of a camera as if George W. Bush were announcing the bombing of Iraq from the Oval Office on primetime television. “Evil sometimes seems good / To a man whose mind / A god leads to destruction,” Sophocles wrote. This was a staging for the era of the Iraq War and FOX News, of the Patriot Act and NSA surveillance, and of the coming financial collapse. Less than a year later, and I’d be back in my apartment stateside watching Barack Obama deliver his Grant Park acceptance speech. It was enough to make one think of Antigone’s line: “Our ship of fate, which recent storms have threatened to destroy, has come to harbor at last.” I’m a bad student of the Greeks; I should have known better than to embrace that narcotic hope that pretends tragedy is not the omnipresent condition of humanity.

What could Sophocles, Euripides, and Aeschylus possibly have to say in our current, troubled moment? Tragedy, the Greeks, and Us is Critchley’s attempt to grapple with those disquieting 32 extant plays that whisper to us from an often-fantasized collective past. What survives of Greek tragedy is four fewer plays than all of those written by Shakespeare; an entire genre of performance for which we have titles referenced by philosophers like Plato and Aristotle, with only those three playwrights’ words enduring, and where often the most we can hope for are a few fragments preserved on some surviving papyri. Critchley emphasizes how little we know about plays like Antigone, or Aeschylus’s Oresteia, or Euripides’s Medea; that classicists often hypothesized that they were born from the Dionysian rituals, or that they focused on satyr psalms, the “song of the goats,” giving tragedy the whiff of the demonic, of the demon Azazel to whom sacrifices of the scapegoat must be made in the Levantine desert.

Beyond even tragedy’s origin, which ancient Greek writers themselves disagreed about, we’re unsure exactly how productions were staged or who attended. What we do have are those surviving 32 plays themselves and the horrific narratives they recount—Oedipus blinded in grief over the patricide and incest that he unknowingly committed but prophetically ensured because of his hubris; Medea slaughtering her children as revenge for the unfaithfulness of her husband; Pentheus ripped apart by his mother and her frenzied Maenads in ecstatic thrall to Dionysus because the Theban ruler couldn’t countenance the power of irrationality. “There are at least thirteen nouns in Attic Greek for words describing grief, lamentation, and mourning,” Critchley writes about the ancients; our “lack of vocabulary when it comes to the phenomenon of death speaks volumes about who we are.” Tragedy, the Greeks, and Us is Critchley’s attempt to give us a bit of their vocabulary of excessive lamentation so as to better approach our predicament.

Readers shouldn’t mistake Tragedy, the Greeks, and Us for a conservative defense of the canon; this is no paean to the superior understanding of the ancients, nor is it highfalutin self-help. Critchley’s book isn’t Better Living Through Euripides. It would be easy to misread the (admittedly not great) title as an advertisement for a book selling the snake-oil of traditionalist cultural literacy, that exercise in habitus that mistakes familiarity with the “Great Books” for a type of wisdom. Rather, Critchley explores the Greek tragedies in all of their strange glory, as an exercise in aesthetic rupture, where the works of Sophocles, Aeschylus, and Euripides configure a different type of space that renders a potent critique against oppressive logic. His task is thus the “very opposite of any and all kinds of cultural conservatism.” Critchley sees the plays not as museum pieces, or as simple means of demonstrating that you went to a college with diplomas written in Latin, but rather as a “subversive traditionalism” that helps us to critique “ever more egregious forms of cultural stupefaction that arise from being blinded by the myopia of the present.” This is all much larger than either celebrating or denouncing the syllabi of St. John’s College; Critchley has no concern for boring questions about “Western Civilization” or “Defending the Canon”; rather, he rightly sees the tragedies as an occasion to deconstruct those idols of our current age—of the market, of society, of law, of religion, of state. He convincingly argues that any honest radical can’t afford to ignore the past, and something primal and chthonic calls to us from those 32 extant plays, for “We might think we are through with the past, but the past isn’t through with us.”

Critchley explains that the contemporary world, perhaps even more so than when I watched Antigone in Glasgow, is a “confusing, noisy place, defined by endless war, rage, grief, ever-growing inequality. We undergo a gnawing moral and political uncertainty in a world of ambiguity.” Our moment, the philosopher claims, is a “tragicomedy defined by war, corruption, vanity, and greed,” for if my Antigone was of its moment, then Tragedy, the Greeks, and Us could only have been written after 2016. That year, and the characters it ushered into our national consciousness, can seem a particular type of American tragedy, but Critchley’s view (even while haunted by a certain hubristic figure with a predilection for the misspelled tweet) is more expansive than that. In his capable analysis, Critchley argues that tragedy exists as a mode of representing this chaos: a type of thinking at home with inconsistency, ambiguity, contradiction, and complexity. It’s those qualities that have made the form suspect to philosophers.

Plato considered literature in several of his dialogues, concluding in Gorgias that the “effect of speech upon the structure of the soul / Is as the structure of drugs over the nature of bodies” (he wasn’t wrong), and famously having his puppet Socrates argue in The Republic that the just city-state would ban poets and poetry from its affairs for the aforementioned reason. Plato’s disgruntled student Aristotle was more generous to tragedy, content rather to categorize and explain its effects in Poetics, explaining that performance is the “imitation of an action that is serious, and also, as having magnitude, complete in itself…with incidents arousing pity and fear, wherewith to accomplish its catharsis of such emotions.” Aristotle’s view has historically been interpreted as a defense of literature in opposition to Plato, whereby that which the latter found so dangerous—the passions and emotions roiled by drama—was now justified as a sort of emotional pressure gauge that helped audiences purge their otherwise potentially destructive emotions. By the 19th century a philosopher like Friedrich Nietzsche would anticipate Critchley (though the latter might chafe at that claim) when he exonerated tragedy as more than mere moral instruction, coming closer to Plato’s claim about literature’s dangers while ecstatically embracing that reality. According to Nietzsche, tragedy existed in the tension between “Apollonian” and “Dionysian” poles; the first implies rationality, order, beauty, logic, and truth; the second signifies the realm of chaos, irrationality, ecstasy, and intoxication. Nietzsche writes in The Birth of Tragedy that the form “sits in sublime rapture amidst this abundance of life, suffering and delight, listening to a far-off, melancholy song…whose names are Delusion, Will, Woe.” For the German philologist that’s a recommendation, to “join me in my faith in this Dionysiac life and the rebirth of tragedy.”

As a thinker, Critchley Agonistes is well equipped to join these predecessors in systematizing what he argues is the unsystematizable. A faculty member at the New School for Social Research and coeditor of The New York Times philosophy column “The Stone” (to which I have contributed), Critchley has proven himself an apt scholar who engages the wider conversation. He is not a popularizer per se, for Critchley’s goal isn’t the composition of listicles enumerating wacky facts about Hegel, but a philosopher in the truest sense of being one who goes into the Agora and grapples with the circumstances of meaning as they manifest in the punk rock venue, at the soccer stadium, and in the movie theater. Unlike most of his countrymen who recline in the discipline, Critchley is a British scholar who embraces what’s called “continental philosophy,” rejecting the arid, logical formulations of analytical thought in favor of the Parisian profundities of thinkers like Jacques Derrida, Emmanuel Levinas, and Martin Heidegger. Critchley has written tomes with titles like The Ethics of Deconstruction: Derrida and Levinas and Ethics-Politics-Subjectivity: Essays on Derrida, Levinas, & Contemporary French Thought, but he’s also examined soccer in What We Think About When We Think About Football (he’s a Liverpool fan) and in Bowie he analyzed, well, Bowie. Add to that his provocative take on religion in Faith of the Faithless: Experiments in Political Theology and on death in The Book of Dead Philosophers (which consists of short entries enumerating the sometimes bizarre ways in which philosophers died, from jumping into a volcano to love potion poisoning), and Critchley has announced himself as one of the most psychedelically mind-expanding of people to earn their lucre by explaining Schopenhauer and Wittgenstein to undergraduates.

What makes Critchley such an engaging thinker about the subjects he examines is both his grounding in continental philosophy (which asks questions about being, love, death, and eternity, as opposed to its analytical cousin, content to enumerate all the definitions of the word “is”) and his unpretentious roots in working-class Hertfordshire, studying at the glass-and-concrete University of Essex as opposed to tony Oxbridge. Thus, when Critchley writes that “there is an ancient quarrel between philosophy and poetry,” it seems pretty clear that he’s a secret agent working for the latter against the former. He rejects syllogism for stanza and embraces poetics in all of its multitudinous and glorious contradictions. The central argument of Tragedy, the Greeks, and Us is that the form “invites its audience to look at such disjunctions between two or more claims to truth, justice, or whatever without immediately seeking a unifying ground or reconciling the phenomena into a higher unity.” What makes Antigone so devastating is that the title character’s familial obligation justifies the burial of her brother, but the interests of the state validate Creon’s prohibition of that same burial. The tragedy arises in the irreconcilable conflict of two right things, with Critchley explaining that Greek drama “presents a conflictually constituted world defined by ambiguity, duplicity, uncertainty, and unknowability, a world that cannot be rendered rationally fully intelligible through some metaphysical first principles or set of principles, axioms, tables of categories, or whatever.”

This is the central argument: that the “experience of tragedy poses a most serious objection to that invention we call philosophy.” More accurately, Critchley argues that tragedy’s comfort with discomfort, its consistent embrace of inconsistency, its ordered representation of disorder, positions the genre as a type of radical critique of philosophy, a genre that expresses the anarchic rhetoric of the sophists rather than that of their killjoy critic Socrates and his dour student Plato. As a refresher, the sophists were the itinerant and sometimes fantastically successful rhetoricians who taught Greek politicians a type of disorganized philosophy that, according to Socrates, had no concern with the truth, but only with what was convincing. Socrates supposedly placed “Truth” at the core of his dialectical method, and, ever since, the discipline has taken up the mantle of “a psychic and political existence at one with itself, which can be linked to ideas of self-mastery, self-legislation, autonomy, and autarchy, and which inform the modern jargon of authenticity.” Tragedy is defined by none of those things; where philosophy strives for order and harmony, tragedy dwells in chaos and division; where syllogism strives to eliminate all contradiction as irrational, poetry understands that it’s in the complexity of inconsistency, confusion, and even hypocrisy that we all dwell. Sophistry and tragedy, to the recommendation of both, are intimately connected, both being methods commensurate with the dark realities of what it means to be alive. Critchley claims that “tragedy articulates a philosophical view that challenges the authority of philosophy by giving voice to what is contradictory about us, what is constricted about us, what is precarious about us, and what is limited about us.”

Philosophy is all arid formulations, dry syllogisms, contrived Gedankenexperiments; tragedy is the knowledge that nothing of the enormity of what it means to be alive can be circumscribed by mere seminar argument. “Tragedy slows things down by confronting us with what we do not know about ourselves,” Critchley writes. If metaphysics is contained by the formulations of the classroom, then the bloody stage provides a more accurate intimation of death and life. By being in opposition to philosophy, tragedy is against systems. It becomes both opposite and antidote to the narcotic fantasy that everything will be alright. Perhaps coming to terms with his own discipline, Critchley argues that “it is necessary to try and think theatrically and not just philosophically.” Tragedy, he argues, provides an opportunity to transcend myths of progress and comforts of order, to rather ecstatically enter a different space, an often dark, brutal, and subterranean place, but one which demonstrates the artifice of our self-regard.

A word conspicuous in its absence from Tragedy, the Greeks, and Us is the “sacred.” If there is any critical drawback to Critchley’s argument, it is his hesitancy, or outright denial, that what he claims in his book has anything to do with something quite so woolly as the noumenal. Critchley gives ample space to arguing that “Tragedy is not some Dionysian celebration of the power of ritual and the triumph of myth over reason,” yet a full grappling with his argument seems to imply the opposite. The argument that tragedy stages contradiction is convincing, but those sublime contradictions are very much under the Empire of Irrationality’s jurisdiction. Critchley is critical of those who look at ancient tragedy and “imagine that the spectators…were in some sort of prerational, ritualistic stupor, some intoxicated, drunken dumbfounded state,” but I suppose much of our interpretation depends on how we understand ritual, religion, stupor, and intoxication.

He is invested in an understanding of the Greeks as not being fundamentally that different from us, writing that “there is a lamentable tendency to exoticize Attic tragedy,” but maybe what’s actually called for is a defamiliarization of our own culture, an embrace of the irrational weirdness at the core of what it means to be alive in 2019, where everything that is solid melts into air (to paraphrase Marx). Aeschylus knew the score well; “Hades, ruler of the nether sphere, / Exactest auditor of human kind, / Graved on the tablet of his mind,” as he describes the prince of this world in Eumenides. Critchley, I’d venture, is of Dionysus’s party but doesn’t know it. All that is argued in Tragedy, the Greeks, and Us points towards an awareness, however sublimated, of the dark beating heart within the undead cadaver’s chest. “To resist Dionysus is to repress the elemental in one’s own nature,” writes the classicist E.R. Dodds in his seminal The Greeks and the Irrational, “the punishment is the sudden complete collapse of the inward dykes when the elemental breaks through…and civilization vanishes.”

Critchley is absolutely correct that tragedy is in opposition to philosophy; where the latter offers assurances that reason can see us through, the former knows that it’s never that simple. The abyss is patient and deep, and no amount of analysis, of interpretation, of calculation, of polling can totally account for the hateful tragic pulse of our fellow humans. Nietzsche writes, “what changes come upon the weary desert of our culture, so darkly described, when it is touched by…Dionysus! A storm seizes everything decrepit, rotten, broken, stunted; shrouds it in a whirling red cloud of dust and carries it into the air like a vulture.” If any place best exemplifies that experience, and this moment, it’s Euripides’s The Bacchae, to which Critchley devotes precious little attention. That play depicts the arrival of the ambiguous god Dionysus at Thebes, as his followers thrill to the divine and irrational ecstasies that he promises. It ends with a crowd of those followers, the Maenads, mistaking the ruler Pentheus for a sacrificial goat and pulling him apart, his bones from their sockets, his organs from their cavities. Until his murder, Pentheus simultaneously manifested a repressed thrill towards the Dionysian fervor and a deficiency in taking the threat of such uncontained emotion seriously. “Cleverness is not wisdom,” Euripides writes, “And not to think mortal thoughts is to see few days.” If any didactic import comes from The Bacchae, it’s to give the devil as an adversary his due, for irrationality has more power than the clever among us might think.

Circling around the claims of Critchley’s book is our current political situation, alluded to but never engaged outright. In one sense, that’s for the best; those demons’ names are uttered endlessly all day anyhow. It’s desirable to at least have one place where you need not read about them. But in another manner, fully intuiting the Dionysian import of tragedy becomes all the more crucial when we think about what that dark god portends in our season of rising authoritarianism. “Tragedy is democracy turning itself into a spectacle,” and anyone with Twitter will concur with that observation of Critchley’s. Even more important is Critchley’s argument about those mystic chords of memory connecting us to a past that we continually reinvent; the brilliance of his claim about why the Greeks matter to us now, removing the stuffiness of anything as prosaic as canonicity, is that tragedy encapsulates the way in which bloody trauma can vibrate through the millennia and control us as surely as the ancients believed fate controlled humans. Critchley writes that “Tragedy is full of ghosts, ancient and modern, and the line separating the living from the dead is continually blurred. This means that in tragedy the dead don’t stay dead and the living are not fully alive.” We can’t ignore the Greeks, because the Greeks aren’t done with us. If there is anything that hampers us as we attempt to extricate ourselves from the Dionysian revelers in our midst, it’s that many don’t acknowledge the base, chthonic power of such irrationality, and they refuse to see how violence, hate, and blood define our history in the most horrific of ways. To believe that progress, justice, and rationality are guaranteed, that they don’t require a fight commensurate with their worthiness, is to let a hubris fester in our souls and to court further tragedy among our citizens.

What Medea or The Persians do is allow us to safely access the Luciferian powers of irrationality. They present a more accurate portrayal of humanity, based as we are in bloodiness and barbarism, than the palliatives offered by Plato in The Republic with his philosopher kings. Critchley claims that the space of the theater at its best “somehow allows us to become ecstatically stretched out into another time and space, another way of experiencing things and the world.” Far from the anemic moralizing of Aristotelian catharsis—and Critchley emphasizes just how ambiguous that word actually is—which is too often interpreted as referring to a regurgitative didacticism, tragedy actually makes a new world by demolishing and replacing our world, if only briefly. “If one allows oneself to be completely involved in what is happening onstage,” Critchley writes, “one enters a unique space that provides an unparalleled experience of sensory and cognitive intensity that is impossible to express purely in concepts.” I recall seeing a production of Shakespeare’s Othello at London’s National Theatre in 2013, directed by Nicholas Hytner and starring Adrian Lester as the cursed Moor and Rory Kinnear as a reptilian Iago. Dr. Johnson wrote that Othello’s murder of Desdemona was the single most horrifying scene in drama, and I concur; the play remains the equal of anything by Aeschylus or Euripides in its tragic import.

When I watched Lester play the role, lingering over the dying body of his faithful wife, whispering “What noise is this? Not dead—not yet quite dead?” I thought of many things. I thought about how Shakespeare’s play reflects the hideous things that men do to women, and the hideous things that the majority do to the marginalized. I thought about how jealousy noxiously fills every corner, no matter how small, like some sort of poison gas. And I thought about how unchecked malignancy can shatter our souls. But mostly what I thought wasn’t in any words, but was better expressed by Lester’s anguished cry as he confronted the evil he’d done. If tragedy allows for an audience to occasionally leave our normal space and time, then certainly I felt like I was joined with those thousand other spectators on that summer night at South Bank’s Olivier Theatre. The audience’s silence after Othello’s keening subsided was as still as the space between atoms, as empty as the gap between people.

King, God, Smart-Ass Author: Reconsidering Metafiction

“Whoever is in charge here?” —Daffy Duck, Merrie Melodies (1953)

Like all of us, Daffy Duck was perennially put upon by his Creator. The sputtering, stuttering, rageful waterfowl’s life was a morass of indignity, embarrassment, anxiety, and existential horror. Despite all of the humiliation Daffy had to contend with, the aquatic bird was perfectly willing to shake his wings at the unfair universe. As expertly delivered by voice artist Mel Blanc, Daffy could honk “Who is responsible for this? I demand that you show yourself!” In “Duck Amuck,” his brilliant and classic 1953 episode of Merrie Melodies, animator Chuck Jones presents Daffy as a veritable Everyduck, a sinner in the hands of a smart-assed illustrator. “Duck Amuck” has remained a canonical episode in the Warner Brothers cartoon catalog, its postmodern, metafictional experimentation heralded for its daring and cheekiness. Any account of what critics very loosely term “postmodern literature”—with its playfulness, its self-referentiality, and its breaking of the fourth wall—that considers Italo Calvino, Jorge Luis Borges, Vladimir Nabokov, and Paul Auster but not Jones is only telling part of the metafictional story. Not for nothing, but two decades ago “Duck Amuck” was added to the National Film Registry by the Library of Congress as an enduring piece of American culture.

Throughout the episode, Jones depicts increasingly absurd metafictional scenarios involving Daffy’s sublime suffering. Jones first imagines Daffy as a swordsman in a Three Musketeers parody, only to have him wander into a shining, white abyss as the French Renaissance background fades away. “Look Mac,” Daffy asks, never one to let ontological terror impinge on his sense of personal justice, “what’s going on here?” Jones wrenches the poor bird from the musketeer scenery to the blinding whiteness of the nothing-place, then to a bucolic pastoral, and finally to a paradisiacal Hawaiian beach. Daffy’s admirable sense of his own integrity remains intact, even throughout his torture. Pushed through multiple parallel universes, wrenched, torn, and jostled through several different realities, Daffy shouts “All right wise guy, where am I?”  

But eventually not even his own sense of identity is allowed to continue unaffected, as the God-animator turns him into a country-western singer who can only produce jarring sound effects from his guitar, or as a transcendent paintbrush recolors Daffy blue. At one point the animator’s pencil intrudes into Daffy’s world, erasing him, negating him, making him nothing. Daffy’s very being, his continued existence, depends on the whims of a cruel and capricious God; his is the world of Shakespeare’s King Lear, where the Duke of Gloucester utters his plaintive cry, “As flies to wanton boys are we to th’ gods; / They kill us for their sport.” Or at least they erase us. Finally, like Job before the whirlwind, Daffy implores, “Who is responsible for this? I demand that you show yourself!” As the view pans upward, into that transcendent realm of paper and ink where the animator-God dwells, it’s revealed to be none other than the trickster par excellence, Bugs Bunny. “Ain’t I a stinker?” the Lord saith.

Creation, it should be said, is not accomplished without a certain amount of violence. According to one perspective, we can think of Daffy’s tussling with Bugs as a variation on that venerable old Aristotelian narrative conflict of “Man against God.” If older literature was focused on the agon (as the Greeks put it) between a human and a deity, and modernist literature concerned itself with the conflict that resulted as people had to confront the reality of no God, then the wisdom goes that our postmodern moment is fascinated with the idea of a fictional character searching out his or her creator. According to narrative theory, that branch of literary study that concerns itself with the structure and organization of story and plot (not synonyms, incidentally), such metafictional affectations are technically called metalepsis. H. Porter Abbott in his invaluable The Cambridge Introduction to Narrative explains that such tales involve a “violation of narrative levels,” when a “storyworld is invaded by an entity or entities from another narrative level.”

Metalepsis can be radical in its execution, as when an “extradiegetic narrator” (that means somebody from outside the story entirely) enters into the narrative, as in those narratives where an “author appears and starts quarreling with one of the characters,” Abbott writes. We’ll see that there are precedents for that sort of thing, but whether interpreted as gimmick or deep reflection on the idea of literature, the conceit that has a narrator enter into the narrative as if by theophany is most often associated with something called, not always helpfully, “postmodernism.” Whatever that much-maligned term might mean, in popular parlance it has an association with self-referentiality, recursiveness, and metafictional playfulness (even if readers might find cleverness such as that exhausting). The term might as well be thought of as referring to our historical preponderance of literature that knows that it is literature.

With just a bit of British disdain in his critique, The New Yorker literary critic James Wood writes in his pithy and helpful How Fiction Works that “postmodern novelists… like to remind us of the metafictionality of all things.” Think of the crop of experimental novelists and short story writers from the ’60s, such as John Barth in his Lost in the Funhouse, where one story is to be cut out and turned into an actual Moebius strip; Robert Coover in the classic and disturbing short story “The Babysitter,” in which a variety of potential realities and parallel histories exist simultaneously in the most mundane of suburban contexts; and John Fowles in The French Lieutenant’s Woman, in which the author also supplies multiple “forking paths” to the story and where the omniscient narrator occasionally appears as a character in the book. Added to this could be works where the actual first-person author themselves becomes a character, such as Auster’s New York Trilogy, or Philip Roth’s Operation Shylock (among other works where he appears as a character). Not always just as a character, but as the Creator, for if the French philosopher Roland Barthes killed off the idea of such a figure in his seminal 1967 essay “The Death of the Author,” then much of the period’s literature resurrected Him. Wood notes, perhaps in response to Barthes, that “A certain kind of postmodern novelist…is always lecturing us: ‘Remember, this character is just a character. I invented him.’” Metafiction is when fiction thinks about itself.

Confirming Wood’s observation, Fowles’s narrator writes in The French Lieutenant’s Woman, “This story I am telling is all imagination. These characters I create never existed outside my own mind…the novelist stands next to God. He may not know all, yet he tries to pretend that he does.” Metafictional literature like this is supposed to interrogate the idea of the author, the idea of the reader, the very idea of narrative. When the first line of Calvino’s If on a Winter’s Night a Traveler is “You are about to begin reading Italo Calvino’s new novel, If on a Winter’s Night a Traveler,” we are being signaled that the narrative we’re entering is supposed to be different from those weighty tomes of realism that supposedly dominated in previous centuries. If metalepsis is a favored gambit of our experimental novelists, then it’s certainly omnipresent in our pop culture as well, beyond just “Duck Amuck.”

A list of sitcoms that indulge the conceit includes 30 Rock, Community, Scrubs, and The Fresh Prince of Bel-Air. The final example, which after all was already an experimental narrative about a wholesome kid from West Philly named Will played by a wholesome rapper from West Philly named Will Smith, was a font of avant-garde fourth-wall breaking deserving of Luigi Pirandello or Bertolt Brecht. Prime instances would include the season five episodes “Will’s Misery,” which depicts Carlton running through the live studio audience, and “Same Game, Next Season,” in which Will asks “If we so rich, why we can’t afford no ceiling,” with the camera panning up to show the rafters and lights of the soundstage. Abbott writes that metafiction asks “to what extent do narrative conventions come between us and the world?” which in its playfulness is exactly what The Fresh Prince of Bel-Air is doing, forcing its audience to consider how “they act as invisible constructors of what we think is true, shaping the world to match our assumptions.”

Sitcoms like these are doing what Barth, Fowles, and Coover are doing—they’re asking us to examine the strange artificiality of fiction, this illusion in which we’re asked by a hidden author to hallucinate and enter a reality that isn’t really there. Both audience and narrator are strange, abstracted constructs; their literal corollaries of reader and writer aren’t much more comprehensible. When we read a third-person omniscient narrator, it would be natural to ask “Who exactly is supposed to be recounting this story?” Metafiction is that which does ask that question. It’s the same question that the writers of The Office confront us with when we wonder, “Who exactly is collecting all of that documentary footage over those nine seasons?”

Far from being simply a postmodern trick, metalepsis as a conceit and the metafiction that results have centuries’ worth of examples. Interactions between creator and created, and certainly author and audience, have a far more extensive history than both a handful of tony novelists from the middle of the 20th century and the back catalog of Nick at Nite. For those whose definition of the novel doesn’t consider anything written before 1945, it might come as a shock that all of the tricks we associate with metafiction thread so deep into history that realist literature can seem the exception rather than the rule. This is obvious in drama; the aforementioned theater term “breaking the fourth wall” attests to the endurance of metalepsis in literature. As a phrase, it goes back to Molière in the 17th century, referring to when characters in a drama acknowledge their audience, when they “break” the invisible wall that separates the action of the stage from that of the observers in their seats. If Molière coined the term, it’s certainly older than even him. In all of those asides in Shakespeare—such as that opening monologue of Richard III when the title villain informs all of us who are joining him on his descent into perdition that “Now is the winter of our discontent”—we’re, in some sense, to understand ourselves as being characters in the action of the play itself.  

As unnatural as Shakespearean asides may seem, they don’t have the same sheer metaleptic import of metafictional drama from the avant-garde theater of the 20th century. Pirandello’s classic experimental play Six Characters in Search of an Author is illustrative here, a high-concept work in which unfinished and unnamed characters arrive at a Pirandello production asking their creator to more fully flesh them out. As a character named the Father explains, the “author who created us alive no longer wished…materially to put us into a work of art. And this was a real crime.” A real crime because to be a fictional character means that you cannot die, even though “The man, the writer, the instrument of the creation will die, but his creation does not die.” An immaculate creation outliving its creator, more blessed than the world that is forever cursed to be ruled over by its God. But first Pirandello’s unfortunates must compel their God to grant them existence; they need a “fecundating matrix, a fantasy which could rise and nourish them: make them live forever!” If this seems abstract, you should know that such metaleptic tricks were staged long before Pirandello, and Shakespeare for that matter. Henry Medwall’s 1497 Fulgens and Lucrece, the first secular play in the entire English canon, has two characters initially named “A” and “B” who argue about a play only to have it revealed that the work in question is actually Medwall’s, which the audience is currently watching. More than a century later, metafictional poses were still being explored by dramatists, a prime and delightful example being Shakespeare’s younger contemporary and sometimes-collaborator Francis Beaumont’s The Knight of the Burning Pestle. In that Jacobean play of 1607, deploying a conceit worthy of Thomas Pynchon’s The Crying of Lot 49, Beaumont imagines the production of a play-within-a-play entitled The London Merchant. In the first act, two characters climb the stage from the audience, one simply called “Citizen” and the other “Wife,” and begin to heckle and critique The London Merchant and its perceived unfairness to the rapidly ascending commercial class. The Knight of the Burning Pestle allows the audience to strike back, the Citizen cheekily telling the actor reading the prologue, “Boy, let my wife and I have a couple of stools / and then begin; and let the grocer do rare / things.”

Historical metalepsis can also be seen in what are called “frame tales,” that is, stories-within-stories that nestle narratives together like Russian dolls. Think of the overarching narrative of Geoffrey Chaucer’s 14th-century The Canterbury Tales with its pilgrims telling each other their stories as they make their way to the shrine of Thomas Becket, or of Scheherazade recounting her life-saving anthology to her murderous husband in One Thousand and One Nights, as compiled from folktales during the Islamic Golden Age from the eighth to 14th centuries. Abbott describes frame tales by explaining that “As you move to the outer edges of a narrative, you may find that it is embedded in another narrative.” Popular in medieval Europe, and finding their structure from Arabic and Indian sources that go back much further, frame tales are basically unified anthologies where an overarching narrative supplies its own meta-story. Think of Giovanni Boccaccio’s 14th-century Decameron, in which seven women and three men each tell 10 stories to pass the time while they’re holed up in a villa outside of Florence to await the passage of the Black Death through the city. The 100 resulting stories are ribald, earthy, and sexy, but present through all of their telling is an awareness of the tellers, this narrative about a group of young Florentines in claustrophobic, if elegant, quarantine. “The power of the pen,” one of Boccaccio’s characters says on their eighth day in exile, “is far greater than those people suppose who have not proved it by experience.” Great enough, it would seem, to create a massive sprawling world with so many stories in it. “In my father’s book,” the character would seem to be saying of his creator Boccaccio, “there are many mansions.”

As metaleptic as frame tales might be, a reader will note that Chaucer doesn’t hitch up for that long slog into Canterbury himself, nor does Boccaccio find himself eating melon and prosciutto while quaffing chianti with his aristocrats in The Decameron. But it would be a mistake to assume that older literature lacks examples of the “harder” forms of metalepsis, that writing before the 20th century is devoid of the Author-God appearing to her characters as if God on Sinai. So-called “pre-modern” literature is replete with whimsical experimentation that would seem at home in Nabokov or Calvino: audiences directly addressed on stage, books speaking as themselves to their readers, authors appearing in narratives as creators, and fictions announcing their fictionality.

Miguel de Cervantes’s 17th-century Don Quixote plays with issues of representation and artificiality when the titular character and his trusty squire, Sancho Panza, visit a print shop that is producing copies of the very book you are reading, the errant knight and his sidekick then endeavoring to prove that it is an inferior plagiarism of the real thing. Cervantes’s narrator reflects at an earlier point in the novel about the novel itself, enthusing that “we now enjoy in this age of ours, so poor in light entertainment, not only the charm of his veracious history, but also of the tales and episodes contained in it which are, in a measure, no less pleasing, ingenious, and truthful, than the history itself.” Thus Cervantes, in what is often considered the first novel, can lay claim to being the primogenitor of both realism and metafictionality.

Following Don Quixote’s example could be added other metafictional works that long precede “postmodernism,” including Laurence Sterne’s 18th-century The Life and Opinions of Tristram Shandy, Gentleman, where the physical book takes time to mourn the death of a central character (when an all-black page is printed); the Polish count Jan Potocki’s underread late-18th-century The Manuscript Found in Saragossa, with not just its fantastic cast of Iberian necromancers, kabbalists, and occultists, but its intricate frame structure and forking paths (not least of which include reference to the book that you’re reading); James Hogg’s Satanic masterpiece The Private Memoirs and Confessions of a Justified Sinner, in which the author himself makes an appearance; and Jane Austen’s Northanger Abbey, in which the characters remark on how it feels as if they’re in a gothic novel (or perhaps a parody of one). Long before Barthes killed the Author, writers were conflating themselves as creator with the Creator. As Sterne notes, “The thing is this. That of all the several ways of beginning a book which are now in practice throughout the known world, I am confident my own way of doing it is the best—I’m sure it is the most religious—for I begin with writing the first sentence—and trusting to Almighty God for the second.”

Sterne’s sentiment provides evidence as to why metafiction is so alluring and enduring, despite its minimization by some critics who dismiss it as mere trick while obscuring its long history. What makes metalepsis such an intellectually attractive conceit is not simply that it makes us question how literature and reality interact, but rather what it implies about the Author whom Sterne gestures toward—“Almighty God.” The author of Tristram Shandy understood, as all adept priests of metafiction do (whether explicitly or implicitly), that at its core, metalepsis is theological. In questioning and confusing issues of characters and writers, narrators and readers, actors and audience, metafiction experiments with the very idea of creation. Some metafiction privileges the author as the supreme God of the fiction, as in The French Lieutenant’s Woman, and some casts its lot with the characters, as in The Knight of the Burning Pestle. Some metafiction is “softer” in its deployment, allowing the characters within a narrative to give us stories-within-stories; other metafiction is “harder” in how emphatic it is about the artifice and illusion of fiction, as in Jones’s sublime cartoon. What all of them share, however, is an understanding that fiction is a strange thing, an illusion whereby, whether we’re gods or penitents, we’re all privy to a world spun from something as ephemeral as letters and breath. Wood asks, “Is there a way in which all of us are fictional characters, parented by life and written by ourselves?” And the metaphysicians of metafiction answer in the affirmative.

As a final axiom, to join my claim that metafiction is when literature thinks about itself and that metalepsis has a far longer history than is often surmised, I’d argue that because all fiction—all literature—is artifice, all of it is in some sense metafiction. What defines fiction, what makes it different from other forms of language, is that quality of metalepsis. Even if not explicitly stated, the differing realms of reality implied by the very existence of fiction imply something of the meta. Abbott writes that “World-making is so much a part of most narratives that some narrative scholars have begun to include it as a defining feature of narrative,” and with that I heartily concur. Even our scripture is metafictional, for what else are we to call the Bible, in which Moses is both author and character, and where his death itself is depicted? In metafiction perspective is confused, writer turns to reader, narrator to character, creator to creation. There is no more apt a description of metafiction, of fiction, of life than that offered by Prospero at the conclusion of The Tempest: “Our revels now are ended. These our actors, / As I foretold you, were all spirits and / Are melted into air, into thin air.” For Prospero, the “great globe itself…all which it inherit, shall dissolve / And, like this insubstantial pageant faded…We are such stuff / As dreams are made on, and our little life / Is rounded with a sleep.” Nothingness before and nothingness after, but with everything in between, just like the universe circumscribed by the cover of a book. Metafiction has always defined literature; we’ve always been characters in a novel that somebody else is writing.

Judging God: Rereading Job for the First Time

“And can you then impute a sinful deed/To babes who on their mothers’ bosoms bleed?” —Voltaire, “Poem on the Lisbon Disaster” (1755)

“God is a concept/By which we measure our pain.” —John Lennon, “God” (1970)

The monks of the Convent of Our Lady of Mount Carmel would have only shortly finished their Terce morning prayers of the Canonical hours when an earthquake struck Lisbon, Portugal, on All Saints Day in 1755. A fantastic library was housed at the convent, more than 5,000 volumes of St. Augustine and St. Thomas Aquinas, Bonaventure and Albertus Magnus, all destroyed as the red-tiled roof of the building collapsed. From those authors there had been endless words produced addressing theodicy, the question of evil, and specifically how and why an omniscient and omnibenevolent God would allow such malevolent things to flourish. Both Augustine and Aquinas affirmed that evil had no positive existence in its own right, that it was merely the absence of good in the way that darkness is the absence of light. The ancient Church Father Irenaeus posited that evil is the result of human free will, and so even natural disaster was due to the affairs of women and men. By the 18th century, a philosopher like Gottfried Leibniz (too Lutheran and too secular for the monks of Carmo) would optimistically claim that evil is an illusion, for everything that happens is in furtherance of a divine plan whereby ours is the “best of all possible worlds,” even in Lisbon on November 1, 1755. On that autumn day in the Portuguese capital, the supposedly pious of the Carmo Convent were faced with visceral manifestations of that question of theodicy in a city destroyed by tremor, water, and flame.

No more was this an issue of scholastic abstraction, of syllogistic aridness, for in Lisbon perhaps 100,000 of the monks’ fellow subjects would die in what was one of the most violent earthquakes ever recorded. Death came in the initial collapse of the houses, palaces, basilicas, and cathedrals; in the tsunami that pushed in from the Atlantic and up the Tagus River; in the fires that ironically resulted from the preponderance of votive candles lit to honor the holy day; and in the pestilence that broke out among the debris and filth of the once proud capital. Lisbon was the seat of a mighty Catholic empire, which had spread the faith as far as Goa, India, and Rio de Janeiro, Brazil; its inhabitants were adherents of stern Iberian Catholicism, and the clergy brooked no heresy in the kingdom. Yet all of that faith and piety appeared to make no difference to the Lord; for the monks of the Carmo Convent who survived their home’s destruction, the plaintive cry might as well have been Christ’s final words upon the cross in the Book of Matthew: “My God, my God, why hast thou forsaken me?”

Christ may have been the Son of God, but with his dying words he was also a master of intertextual allusion, and his lament echoes that of another man, the righteous gentile from the land of Uz named Job. If theodicy is the one insoluble problem of monotheism, the viscerally felt and empirically verifiable reality of pain and suffering in a universe supposedly divine, then Job remains the great brief for those of us who feel like God has some explaining to do. Along with other biblical wisdom literature like Ecclesiastes or Song of Songs, Job is one of those scriptural books that can sometimes appear as if some divine renegade snuck it into the canon. What Job takes as its central concern is the reality described by journalist Peter Trachtenberg in his devastating The Book of Calamities: Five Questions about Suffering, when he writes that “Everybody suffers: War, sickness, poverty, hunger, oppression, prison, exile, bigotry, loss, madness, rape, addiction, age, loneliness.” Job is what happens when we ask why those things are our lot with an honest but broken heart.

I’ve taught Job to undergraduates before, and I’ve sometimes been surprised by their lack of shock when it comes to what’s disturbing about the narrative. By way of synopsis, the book tells the story of a man who, the poem makes clear, is unassailably righteous, and how Satan, in his first biblical appearance (and counted as a son of God to boot), challenges the Creator, maintaining that Job’s piety is only a result of his material and familial well-being. The deity answers the devil’s charge by letting the latter physically and psychologically torture blameless Job, so as to demonstrate that the Lord’s human servant would never abjure Him. In Bar-Ilan University Bible professor Edward Greenstein’s masterful Job: A New Translation, the central character is soberly informed that “Your sons and your daughters were eating and drinking wine in/the house of their brother, the firstborn,/When here: a great wind came from across the desert;/it affected the four corners of the house;/it fell on the young people and they died.”—and the final words of the last line convey in their simplicity the defeated and personal nature of the tragedy. Despite the decimation of Job’s livestock, the death of his children, the rejection of his wife, and finally the contraction of an unsightly and painful skin ailment (perhaps boils), “Job did not commit-a-sin – /he did not speak insult” against God.

Job didn’t hesitate to speak against his own life, however. He bemoans his own birth, wishing that the very circumstances of his existence could be erased, declaring “Let the day disappear, the day I was born, /And the night that announced: A man’s been conceived!” Sublimely rendered in both their hypocrisy and idiocy are three “friends” (a later interpolation in the canonical book adds a fourth) who come to console Job, but in the process they demonstrate the inadequacy of any traditional theodicy in confronting the question of why awful things frequently happen to good people. Eliphaz informs Job that everything, even the seemingly evil, takes part in God’s greater and fully good plan, that the Lord “performs great things too deep to probe, /wondrous things, beyond number.” The sufferer’s interlocutor argues, as Job picks at his itchy boils with a bit of pottery, perhaps remembering the faces of his dead children when they were still infants, that God places “the lowly up high, /So the stooped attain relief.” Eliphaz, of whom we know nothing other than that he speaks like a man who has never experienced loss, is the friend who counsels us that everything works out in the end even when we’re at our child’s funeral.

Bildad, on the other hand, takes a different tack, arguing that if Job’s sons “committed a sin against him, /He has dispatched them for their offense,” the victim-blaming logic that from time immemorial has preferred to ask what the raped was wearing rather than why the rapist rapes. Zophar angrily supports the other two, and the latecomer Elihu emphasizes God’s mystery and Job’s impudence in questioning it. To all of these defenses of the Lord, Job responds that even “were I in the right, his mouth would condemn me. /(Even) were I innocent, he would wrong me… It is all the same. /And so, I declare:/The innocent and the guilty he brings to (the same) end.” In an ancient world where it’s taken as a matter of simple retributive ethics that God will punish the wicked and reward the just, Job’s realism is both more world-weary and more humane than the clichés offered by his fair-weather friends. “Why do the wicked live on and live well,/Grow old and gain in power and wealth?” asks Job, and from 500 BCE unto 2019 that remains the central question of ethical theology. As concerns any legitimate, helpful, or moving answer from his supposed comforters, Greenstein informs us that “They have nothing to say.”

Finally, in the most dramatic and figuratively adept portion of the book, God Himself arrives from a whirlwind to answer the charges of Job’s “lawsuit” (as Greenstein renders the disputation). The Lord never answers Job’s demand to know why he has been forced to suffer, responding instead with non sequiturs about His own awesomeness, preferring to answer the pain of a father who has buried his children with rhetorical questions: “Where were you when I laid earth’s foundations?/Tell me – if you truly know wisdom!/Who set its dimensions? Do you know? /Who stretched the measuring line?… Have you ever reached the sources of Sea, /And walked on the bottom of Ocean?” God communicates in the rhetoric of sarcastic grandeur, mocking Job by saying “You must know… Your number of days is so many!” Of course, Job had never done any of those things, but an unempathetic God also can’t imagine what it would be like to have a Son die—at least not yet. But that is getting ahead of ourselves, and a reader can’t help but notice that for all of His poetry from the whirlwind, and all of His frustration at the intransigent questioning of Job, the Lord never actually answers why such misfortune has befallen this man. Rather, God continually emphasizes His greatness to Job’s insignificance, His power to Job’s feebleness, His eternity to Job’s transience.

The anonymous author’s brilliance is deployed in the book’s dramatic irony, for even if Job doesn’t know why he suffers, we know. Greenstein explains that readers “know from the prologue that Job’s afflictions derive from the deity’s pride, not from some moral calculus.” Eliphaz, Bildad, Zophar, and Elihu can pontificate about the unknowability of God’s reasons while Job is uncertain as to whether he’s done anything wrong that merits such treatment, but the two omniscient figures in the tale—God and the reader—know why the former did what He did. Because the Devil told him to. Finally, as if acknowledging His own culpability, God “added double to what Job had had,” which means double the livestock, and double the children. There is a cruelty in that: the grieving father is expected to have simply replaced his dead family with a newer, shinier, fresher, and most of all alive brood. And so, both Job and God remain silent for the rest of the book. In the ordering of the Hebrew scriptures, God remains silent for the rest of the Bible, so that when “Job died old and sated with days” we might wonder if it isn’t the deity Himself who has expired, perhaps from shame. The wisdom offered in the book of Job is the knowledge that justice is sacred, but not divine; justice must be ours, meaning that our responsibility to each other is all the more important.

This is admittedly an idiosyncratic take on the work, and one that I don’t wish to project onto Greenstein. But the scholar does argue that a careful philological engagement with the original Hebrew renders the story far more radical than has normally been presented. Readers of Job have normally been on the side of his sanctimonious friends, and especially the Defendant who arrives out of His gassy whirlwind, but the author of Job is firmly on the side of the plaintiff. If everyone from medieval monks to the authors of Midrash, from Sunday school teachers to televangelists, have interpreted Job as being about the inscrutability of God’s plans and the requirement that we passively take our undeserved pain as part of providence, then Greenstein writes that “there is a very weak foundation in biblical parlance for the common rendering.” He argues that “much of what has passed as translation of Job is facile and fudged,” having been built upon the accumulated exegetical detritus of centuries rather than returning to the Aleph, Bet, and Gimmel of Job itself.

Readers of a more independent bent have perhaps detected sarcasm in Job’s response, or a dark irony in God’s restitution of new and better replacement children for the ones that He let Satan murder. For my fellow jaundiced and cynical heretics who have long maintained that Job still has some fight in him even after God emerges from the whirlwind to chastise him for daring to question the divine plan, Greenstein has good news. Greenstein writes that the “work is not mainly about what you thought it was; it is more subversive than you imagined; and it ends in a manner that glorifies the best in human values.” He quotes a modern translation of Job 42:6, where Job speaks as a penitent before the Lord, exclaiming “Therefore I despise myself/and repent in dust and ashes.” Such a message seems straightforward: because he questioned the divine plan, Job hates himself and is appropriately humbled. Yet Greenstein contrasts the modern English with a more accurate version based in an Aramaic text found in the Dead Sea Scrolls, where the man of Uz more ambiguously says “Therefore I am poured out and boiled up, and I will become dust.” This isn’t a declaration of surrender; this is an acknowledgement of loss. Greenstein explains that while both the rabbis and the Church wished to see Job repentant and in surrender, that’s not what the original presupposes. Rather, “Job is parodying God, not showing him respect. If God is all about power and not morality and justice, Job will not condone it through acceptance.” Thus, when we write that the book’s subject is judging God, we should read the noun as the object and not the subject of that sentence.

As a work, the moral universe of Job is complex; when compared to older ancient Near Eastern works, even exemplary ones like Gilgamesh, its ambiguities mark it as a work of poetic sophistication. Traditionally dated to the period about a half-millennium before the Common Era, Job is identified with the literature of the Babylonian Exile, when the Jews had been conquered and forcibly exiled to the land of the Euphrates. Such historical context is crucial in understanding Job’s significance, for the story of a righteous man who suffered for no understandable reason mirrored the experience of the Jews themselves, while Job’s status as a gentile underscored a dawning understanding that God was not merely a deity for the Israelites, but that indeed His status was singular and universal. When the other gods are completely banished from heaven, however, the problem of evil rears its horned head, for when the Lord is One, who then must be responsible for our unearned pain?

Either the most subversive or the most truthful of scriptural books (maybe both), Job has had the power to move atheist and faithful alike, evidence for those who hate God and anxious warning for those who love Him. Former Jesuit Jack Miles enthuses in God: A Biography that “exegetes have often seen the Book of Job as the self-refutation of the entire Judeo-Christian tradition.” Nothing in the canon of Western literature, be it Sophocles’s Oedipus Rex or William Shakespeare’s Hamlet, quite achieves the dramatic pathos of Job. Consider its terrors, its ambiguities, its sense of injustice, and its impartation that “our days on earth are a shadow.” Nothing written has ever achieved such a sense of universal tragedy. After all, the radicalism of the narrative announces itself, for Job concerns itself with the time that God proverbially sold his soul to Satan in the service of torturing not just an innocent man, but a righteous one. And when questioned on His justification for doing such a thing, the Lord was only able to respond by reiterating His own power in admittedly sublime but ultimately empty poetry. For God’s answer to Job’s theodicy is an explanation of how, but not an explanation of why—and when you’re left to scrape boils off yourself with a pottery shard after your 10 children have died in a collapsing house, that’s no explanation at all.

With Greenstein’s translation, we’re able to hear Job’s cry not in his native Babylonian, or the Hebrew of the anonymous poet who wrote his tale, or the Aramaic of Christ crucified on Golgotha, or even the stately if antiquated early modern English of the King James Version, but rather in a fresh, contemporary, immediate vernacular that makes the title character’s tribulations our own. Our Job is one who can declare “I am fed up,” and something about the immediacy of that declaration makes him our contemporary in ways that underscore the fact that billions are his proverbial sisters and brothers. Greenstein’s accomplishment makes clear a contention that is literary, philosophical, and religious: that the Book of Job is the most sublime masterpiece of monotheistic faith, because what its author says is so exquisitely rendered and so painfully true. Central to Greenstein’s mission is a sense of restoration, for Job is often taught and preached as simply being about humanity’s required humility before the divine, and the need to prostrate ourselves before a magnificent God whose reasons are inscrutable.

By restoring to Job its status as a subversive classic, Greenstein does service to the worshiper and not the worshiped, to humanity and not our oppressors. Any work of translation exists in an uneasy stasis between the original and the adaptation, a one-sided negotiation across millennia where the original author has no say. My knowledge of biblical Hebrew is middling at best, so I’m not suited to speak to the transgressive power of whoever the anonymous poet of Job was, but regardless of the words she or he chose, I can speak to Greenstein’s exemplary poetic sense. At its core, part of what makes this version of Job so powerful is how it exists in contrast to those English versions we’ve encountered before, from the sublime plain style of King James to the bubblegum of the Good News Bible. Unlike in those traditional translations, the words “God” and “Lord,” with their associations of goodness, appear nowhere here. Rather, Greenstein keeps the original “Elohim” (which I should point out is plural), or the unpronounceable, vowelless tetragrammaton, rendered as “YHWH.” Job is made new through the deft use of the literary technique known as defamiliarization, making that which is familiar once again strange (and thus all the more radical and powerful).

Resurrecting the lightning bolt that is Job’s poetry does due service to the original. Job’s subject is not just theodicy, but the failures of poetry itself. When Job defends himself against the castigation of Eliphaz, Bildad, Zophar, and Elihu, it’s not just a theological issue, but a literary-critical one as well. The suffering man condemns their comforts and callousness, but also their clichés. That his supposed comforters draw so much of what they say from biblical books like Psalms and Proverbs testifies to Job’s transgressive knack for scriptural self-reflection. As a poet, the author is able to carefully display exactly what she or he wishes to depict, an interplay between the presented and the hidden that has an uncanny magic. When the poet writes that despite all of Job’s tribulations, “Job did not commit-a-sin with his lips,” it forces us to ask if he committed sin in his heart and his head, if his soul understandably screamed out at God’s injustice while his countenance remained pious. This conflict between inner and outer appearance is the logic of the novelist, a type of interiority that we associate more with Gustave Flaubert or Henry James than we do with the Bible.

When it comes to physical detail, the book is characteristically minimalist. Like most biblical writing, the author doesn’t present much of a description of either God or Satan, though that makes their presentation all the more eerie and disturbing. When God asks Satan where he has been, the arch-adversary responds “From roving the earth and going all about it.” This is Satan’s first introduction in all of Western literature, for never before had that word been used for the singular character, and it thus brings with it those connotations of ranging, stalking, and creeping that have so often accrued about the reptilian creature who is always barely visible out of the corner of our eyes. Had we been given more serpentine exposition on the character, cloven hooves and forked tails, it would lack the unsettling nature of what’s actually presented. But when the author wants to be visceral, she or he certainly can be. Few images are as enduring in their immediacy as that of Job, who “took a potsherd with which to scratch himself as he sits in/the ash-heap.” His degradation, his tribulation, his shame still resonate 2,500 years later.

The trio (and later the quartet) of Job’s judgmental colleagues render theodicy as a poetry of triteness, while Job’s poetics of misery is commensurate with the enormity of his fate. “So why did you take me out of the womb?” he demands of God, “Would I had died, with no eye seeing me!/Would I had been as though I had been;/Would I had been carried from womb to tomb”—here Greenstein borrowing a favored rhyming congruence from the arsenal of English’s Renaissance metaphysical poets. Eliphaz and Elihu offer maudlin bromides, but Job can describe those final things with a proficiency that shows how superficial his friends are. Job fantasizes about death as a place “I go and do not return, /To a land of darkness and deep-shade;/A land whose brightness is like pitch-black, /Deep-shade and disorder;/That shines like pitch-black.” That contradictory image, of something shining in pitch-black, is an apt definition of God Himself, who may be master of an ordered, fair, and just universe in most of the Bible, but who in Job is creator of a “fundamentally amoral world,” as Greenstein writes.

If God from the whirlwind provides better poetry than His defenders could, His theology is just as empty and callous. Greenstein writes that “God barely touches on anything connected to justice or the providential care of humanity,” and it’s precisely the case that for all of His literary power, the Lord dodges the main question. God offers descriptions of the universe’s creation and of the maintenance of all things that order reality; He conjures the enormities of size and time, and even provides strangely loving descriptions of His personal pets, the fearsome hippopotamus Behemoth and the terrifying whale/alligator Leviathan. Yet for all of that detail, exquisitely rendered, God never actually answers Job. Bildad or Elihu would say that God has no duty to explain Himself to a mere mortal like Job, that the man of Uz deserves no justification for his punishment in a life that none of us ever chose to begin. That, however, obscures the reality that even if Job doesn’t know the reasons behind what happened, we certainly know.

Greenstein’s greatest contribution is making clear that not only does God not answer Job’s pained query, but that the victim knows that fact. And he rejects it. Job answers God with “I have spoken once and I will not repeat…I will (speak) no more.” If God answers with hot air from the whirlwind, the soon-to-be-restored Job understands that silent witness is a more capable defense against injustice, that quiet is the answer to the whims of a capricious and cruel Creator. God justifies Himself by bragging about His powers, but Job tells Him “I have known you are able to do all;/That you cannot be blocked from any scheme.” Job has never doubted that God has it within His power to do the things that have been done; rather he wishes to understand why they’ve been done, and all of the beautiful poetry marshaled from that whirlwind can’t do that. The Creator spins lines and stanzas as completely as He once separated the night from the day, but Job covered in his boils couldn’t care less. “As a hearing by ear I have heard you,” the angry penitent says, “And now my eye has seen you. That is why I am fed up.”

Traditional translations render the last bit so as to imply that a repentant Job has taken up dust and ashes in a pose of contrition, but the language isn’t true to the original. It is more correct, Greenstein writes, to see Job as saying “I take pity on ‘dust and ashes,’” that Job’s sympathy lay with humanity, that Job stands not with God but with us. Job hated not God, but rather His injustice; such a position is rooted in the love of everybody else. Miles places great significance on the fact that the last time the deity speaks (at least according to the ordering of books in the Hebrew Scriptures) is from the whirlwind. According to Miles, when the reality of Job’s suffering is confronted by the empty beauty of the Lord’s poetry, a conclusion can be drawn: “Job has won. The Lord has lost.” That God never speaks to man again is a type of embarrassed silence, and for Miles the entirety of the Bible is the story of how humanity was able to overcome God, to overcome our need of God. Job is, in part, about the failure of beautiful poetry when confronted with loss, with pain, with horror, with terror, with evil. After Job there can be no poetry. But if Job implies that there can be no poetry, it paradoxically still allows for prayer. Job in his defiance of God teaches us a potent form of power, that dissent is the highest form of prayer, for what could possibly take the deity more seriously? In challenging, castigating, and criticizing the Creator, Job fulfills the highest form of reverence that there possibly can be.

Image credit: Unsplash/Aaron Burden.

The Universe in a Sentence: On Aphorisms

“A fragment ought to be entirely isolated from the surrounding world like a little work of art and complete in itself like a hedgehog.” —Friedrich Schlegel, Athenaeum Fragments (1798)

“I dream of immense cosmologies, sagas, and epics all reduced to the dimensions of an epigram.” —Italo Calvino, Six Memos for the Next Millennium (1988)

From its first capital letter to the final period, an aphorism is not a string of words but rather a manifesto, a treatise, a monograph, a jeremiad, a sermon, a disputation, a symposium. An aphorism is not a sentence, but rather a microcosm unto itself; an entrance through which a reader may walk into a room the dimensions of which even the author may not know. Our most economical and poetic of prose forms, the aphorism does not feign argumentative completism like the philosophical tome, nor does it compel certainty as does the commandment—the form is cagey, playful, and mysterious. To either find an aphorism in the wild, or to peruse examples in a collection that mounts them like butterflies nimbly held in place with push-pins on Styrofoam, is to have a literary-naturalist’s eye for the remarkable, for the marvelous, for the wondrous. And yet there has been, at least until recently, a strange critical lacuna as concerns aphoristic significance. Scholar Gary Morson writes in The Long and Short of It: From Aphorism to Novel that though they “constitute the shortest [of] literary genres, they rarely attract serious study. Universities give courses on the novel, epic, and lyric…But I know of no course on…proverbs, wise sayings, witticisms and maxims.”

Such neglect is an example of literary malpractice, for to consider an aphorism is to imbibe the purest distillation of a mind contemplating itself. In an aphorism every letter and word counts; every comma and semicolon is an invitation for the reader to discover the sacred contours of her own thought. Perhaps answering Morson’s observation, critic Andrew Hui writes in his new study A Theory of the Aphorism: From Confucius to Twitter that the form is “Opposed to the babble of the foolish, the redundancy of bureaucrats, the silence of mystics, in the aphorism nothing is superfluous, every word bears weight.” An aphorism isn’t a sentence—it’s an earthquake captured in a bottle. It isn’t merely a proverb, a quotation, an epigraph, or an epitaph; it’s fire and lightning circumscribed by the rules of syntax and grammar, where rhetoric itself becomes the very stuff of thought. “An aphorism,” Friedrich Nietzsche aphoristically wrote, “is an audacity.”

If brevity and surprising disjunction are the elements of the aphorism, then in some ways we’re living in a veritable renaissance of the form, as all of the flotsam of our fragmented digital existence, from texting to Twitter, compels us toward the ambiguous and laconic (even while obviously much of what’s produced is sheer detritus). Hui notes the strangely under-theorized nature of the aphorism, observing that at a “time when a presidency can be won and social revolutions ignited by 140-character posts…an analysis of the short saying seems to be crucial as ever” (not that we’d put “covfefe” in the same category as Blaise Pascal).

Despite the subtitle to Hui’s book, A Theory of the Aphorism thankfully offers neither plodding history nor compendium of famed maxims. Rather Hui presents a critical schema to understand what exactly the form does, with its author positing that the genre is defined by “a dialectical play between fragments and systems.” Such a perspective on aphorism sees it not as the abandoned step-child of literature, but rather the very thing itself, a discursive and transgressive form of critique that calls into question all received knowledge. Aphorism is thus both the substance of philosophy and the joker that calls the very idea of literature and metaphysics into question.  The adage, the maxim, and the aphorism mock completism. Jesus’s sayings denounce theology; Parmenides and Heraclitus deconstruct philosophy. Aphoristic thought is balm and succor against the rigors of systemization, the tyranny of logic. Hui writes that “aphorisms are before, against, and after philosophy.” Before, against, and after scripture too, I’ll add. And maybe everything else as well. An aphorism may feign certainty, but the very brevity of the form ensures it’s an introduction, even if its rhetoric wears the costume of conclusion.

Such is why the genre is so associated with gnomic philosophy, for any accounting of aphoristic origins must waylay itself in the fragments of the pre-Socratic metaphysicians, the wisdom literature of the ancient Near East, and the Confucian and Taoist scholars of China. All of these disparate phenomena can loosely be placed during what the German philosopher Karl Jaspers called the “Axial Age,” a half-millennium before the Common Era when human thought began to move towards the universal and abstract. From that era (as very broadly constituted) we have Heraclitus’s “Nature loves to hide,” Lao-Tzu’s “The spoken Tao is not the real Tao,” and Jesus Christ’s “The kingdom of God is within you.” Perhaps the aphorism was an engine for that intellectual transformation, the sublimities of paradox breaking women and men out of the parochialism that marked earlier ages. Regardless of why the aphorism’s birth coincides with that pivotal moment in history, that era was the incubator for some of our earliest (and greatest) examples.

For the Greek philosophers who pre-date Socrates and, more importantly, his systematic advocate Plato, metaphysics was best done in the form of cryptic utterance. It shouldn’t be discounted that the gnomic quality of thinkers like Parmenides, Heraclitus, Democritus, and Zeno might be due to the disappearance of their full corpus over the past 2,500 years. Perhaps erosion of stone and fraying of papyrus have generated such aphorisms. Entropy is, after all, our final and most important teacher. Nevertheless, aphorism is rife in the pre-Socratic philosophy that remains, from Heraclitus’s celebrated observation that “You can’t step into the same river twice” to Parmenides’s exactly opposite contention that “It is indifferent to me where I am to begin, for there shall I return again.” Thus is identified one of the most difficult qualities of the form—that it’s possible to say conflicting things and that by virtue of how you say them you’ll still sound wise. A dangerous form, the aphorism, for it can confuse rhetoric for knowledge. Yet perhaps that’s too limiting a perspective, and maybe it’s better to think of the chain of aphorisms as a great and confusing conversation; a game in which both truth and its opposite can still be true.

A similar phenomenon is found in wisdom literature, a mainstay of Jewish and Christian writing from the turn of the Common Era, as embodied in canonical scripture such as Job, Ecclesiastes, Proverbs, and Song of Songs, as well as in more exotic apocryphal books known for their sayings like The Gospel of Thomas. Wisdom literature was often manifested in the form of a listing of sayings or maxims, and since the 19th century, biblical scholars who advocated the so-called “two-source hypothesis” have argued that the earliest form of the synoptic New Testament gospels was an as-yet-undiscovered document known simply as “Q,” which consisted of nothing but aphorisms spoken by Christ. This conjectured collection of aphorisms theoretically became the basis for Matthew and Luke, who (borrowing from Mark) constructed a narrative around the bare-bones sentences. Similarly, the eccentric, hermetic, and surreal Gospel of Thomas, which the book of John was possibly written in riposte to, is an example of wisdom literature at its purest, an assemblage of aphorisms somehow both opaque and enlightening. “Split a piece of wood; I am there,” cryptically says Christ in the hidden gospel only rediscovered at Nag Hammadi, Egypt, in 1945, “Lift up the stone, and you will find me there.”

From its Axial-Age origins, the aphorism has a venerable history. Examples were transcribed in the self-compiled commonplace books of the Middle Ages and the Renaissance, in which students were encouraged to record their favorite fortifying maxims for later consultation. The logic behind such exercises was, as anthologizer John Gross writes in The Oxford Book of Aphorisms, that aphorisms “tease and prod the lazy assumptions lodged in the reader’s mind; they warn us how insidiously our vices can pass themselves off as virtues; they harp shamelessly on the imperfections and contradictions which we would rather ignore.” Though past generations were instructed on proverbs and maxims, today “Aphorisms are often derided as trivial,” as Aaron Haspel writes in the introduction to Everything: A Book of Aphorisms, despite the fact that “most people rule their lives with four or five of them.”

The last five centuries have seen no shortage of aphorists who are the originators of those four or five sayings that you might live your life by, gnomic authors who speak in the prophetic utterance of the form like Desiderius Erasmus, François de La Rochefoucauld, Pascal, Benjamin Franklin, Voltaire, William Blake, Nietzsche, Ambrose Bierce, Oscar Wilde, Gustave Flaubert, Franz Kafka, Ludwig Wittgenstein, Dorothy Parker, Theodor Adorno, and Walter Benjamin. Hui explains that “Aphorisms are transhistorical and transcultural, a resistant strain of thinking that has evolved and adapted to its environment for millennia. Across deep time, they are vessels that travel everywhere, laden with fraught yet buoyant [cargo].” In many ways, modernity has proven an even more prodigious environment for the digressive and incomplete form, standing in opposition to the systemization of knowledge that has defined the last half-millennium while also embodying the aesthetic of fragmented bricolage that sometimes seems as if it were our birthright. Contemporary master of the form Yahia Lababidi writes in Where Epics Fail: Meditations to Live By that “An aphorist is not one who writes in aphorisms but one who thinks in aphorisms.” In our fractured, fragmented, disjointed time, where we take wisdom where we find it, we’re perhaps all aphorists, because we can’t think in any other way.

Anthologizer James Lough writes in his introduction to Short Flights: Thirty-Two Modern Writers Share Aphorisms of Insight, Inspiration, and Wit that “Our time is imperiled by tidal waves of trivia, hectored by a hundred emails a day, colored by this ad slogan or that list of things to do, each one sidetracking our steady, focused awareness.” What might seem to be one of the most worrying detriments of living in post-modernity—our digitally slackening attention spans and inattention to detail—is the exact same quality that allows contemporary aphorists the opportunity to dispense arguments, insight, enlightenment, and wisdom in a succinct package, what Lough describes as a “quickly-digested little word morsel, delightful and instructive, that condenses thought, insight, and wordplay.”

Our century has produced brilliant aphorists who have updated the form while making use of its enduring and universal qualities of brevity, metaphor, arresting detail, and the mystery that can be implied by a few short words that seem to gesture towards something slightly beyond our field of sight, and who embody Gross’s description of the genre as one which exemplifies a “concentrated perfection of phrasing which can sometimes approach poetry in its intensity.” Authors like Haspel, Lababidi, Don Paterson in The Fall at Home: New and Selected Aphorisms, and Sarah Manguso in 300 Arguments may take as their subject matter issues of current concern, from the Internet to climate change, but they do it in a form that wouldn’t seem out of place on a bit of frayed pre-Socratic papyrus.

Consider the power and poignancy of Manguso’s maxim “Inner beauty can fade, too.” In only five words, and one strategically placed comma that sounds almost like a reserved sigh, Manguso demonstrates one of the uncanniest powers of the form: that it can remind you of something that you’re already innately aware of, something that relies on the nature of the aphorism to illuminate that which we’d rather obscure. Reading 300 Arguments is like this. Manguso bottles epiphany, the strange acknowledgment of encountering that which you always knew but could never quite put into words yourself, like discovering that the gods share your face. Lababidi does something similar in Where Epics Fail, giving succinct voice to the genre’s self-definition, writing that “Aphorisms respect the wisdom of silence by disturbing it, but briefly.” Indeed, self-referentiality is partially at the core of modern aphorisms; many of Paterson’s attempts are maxims considering themselves, like an ouroboros biting its tail. In the poet’s not unfair estimation, an aphorism is “Hindsight with murderous purpose.”

Lest I be accused of uncomplicated enthusiasms in offering an encomium for the aphorism, let it be said that the form can be dangerous, that it can confuse brevity and wit with wisdom and knowledge, that rhetoric (as has been its nature for millennia) can pantomime understanding as much as express it. Philosopher Julian Baggini makes the point in Should You Judge This Book by Its Cover: 100 Takes on Familiar Sayings and Quotations, writing that aphorisms “can be too beguiling. They trick us into thinking we’ve grasped a deep thought by their wit and brevity. Poke them, however, and you find they ride roughshod over all sorts of complexities and subtleties.” The only literary form where rhetoric and content are as fully unified as the aphorism is arguably poetry proper, but ever fighting Plato’s battle against the versifiers, Baggini is correct that there’s no reason why an expression in anaphora or chiasmus is more correct simply because the prosody of its rhetoric pleases our ears.

Statistician and essayist Nassim Nicholas Taleb provides ample representative examples in The Bed of Procrustes: Philosophical and Practical Aphorisms of how a pleasing adage need not be an accurate statement. There’s an aesthetically harmonious tricolon in his contention that the “three most harmful addictions are heroin, carbohydrates, and a monthly salary,” and though of those I’ve only ever been addicted to bread, pizza, and pasta, I’ll say from previous addictions that I could add a few more harmful vices to the list made by Taleb. Or, when he opines that “Writers are remembered for their best work, politicians for their worst mistakes, and businessmen are almost never remembered,” I have to object. I’d counter Taleb’s pithy aphorism by pointing out that both John Updike and Philip Roth are remembered for their most average writing, that an absurd preponderance of things in our country are named for Ronald Reagan, whose entire political career was nothing but mistakes, and that, contra the claim that businessmen are never remembered, I’ll venture my entire Pittsburgh upbringing, where everything is named after a Carnegie, Frick, Mellon, or Heinz, demonstrates otherwise. The Bed of Procrustes is like this, lots of sentences that read as if they came from Delphi, but when you spend a second to contemplate Taleb’s claim that “If you find any reason why you and someone are friends, you are not friends” you’ll come to the conclusion that, far from being an experience-tested maxim, it’s simply inaccurate.

Critic Susan Sontag made one of the best arguments against aphoristic thinking in her private writings, as published in As Consciousness Is Harnessed to Flesh: Journals and Notebooks, 1964-1980, claiming that “An aphorism is not an argument; it is too well-bred for that.” She makes an important point—that the declarative confidence of the aphorism can serve to announce any number of inanities and inaccuracies as if they were true by simple fiat. Sontag writes that “Aphorism is aristocratic thinking: this is all the aristocrat is willing to tell you; he thinks you should get it fast, without spelling out all the details.” Providing a crucial counter-position to the uncomplicated celebration of all things aphoristic, Sontag rightly observes that “To write aphorisms is to assume a mask—a mask of scorn, of superiority,” which would certainly seem apropos when encountering the claim that if you can enumerate why you enjoy spending time with a friend they’re not really your friend. “We know at the end,” Sontag writes, “the aphorist’s amoral, light point-of-view self-destructs.”

There is an irony in Sontag’s critique of aphorisms, for embedded within her prose, like any number of shining stones found in a muddy creek, are sentences that would themselves make great prophetic adages. Aphorisms are like that though; even with Sontag’s and Baggini’s legitimate criticisms of the form’s excesses, we can’t help but approach, consider, think, and understand in that genre for which brevity is the essence of contemplation. In a pose of self-castigation, Paterson may have said that the “aphorism is already a shadow of itself,” but I can’t reject that microcosm, that inscribed reality within a few words, that small universe made cunningly. Even with all that I know about the risks of rhetoric, I cannot pass sentence on the sentence. Because an aphorism is open-ended, it is disruptive; as such, it doesn’t preclude, but rather opens; the adage both establishes and abolishes its subject, simultaneously. Hui writes that the true subject of the best examples of the form is “The infinite,” for “either the aphorism’s meaning is inexhaustible or its subject of inquiry—be it God or nature of the self—is boundless.”

In The Aphorism and Other Short Forms, author Ben Grant writes that in the genre “our short human life and eternity come together, for the timelessness of the truth which the aphorism encapsulates can only be measured against our own ephemerality, of which the brevity of the aphorism serves as an apt expression.” I agree with Grant’s contention, but I would amend one thing—the word “truth.” Perhaps that’s what’s problematic about Baggini’s and Sontag’s criticism, for we commit a category mistake when we assume that aphorisms exist only to convey some timeless verity. Rather, I wonder if what Hui describes as the “most elemental of literary forms,” those “scattered lines of intuition…[moving by] arrhythmic leaps and bounds” underscored by “an atomic quality—compact yet explosive” aren’t defined by the truth, but rather by play.

Writing and reading aphorisms is the play of contemplation, the joy of improvisation; it’s the very nature of aphorism. We read such sayings not for the finality of truth, but for the possibilities of maybe. For a short investment of time and an economy of words we can explore the potential of ideas that even if inaccurate, would sink far longer and more self-serious works. That is the form’s contribution. An aphorism is all wheat and no chaff, all sweet and no gaffe.

Image credit: Unsplash/Prateek Katyal.

The Lion and the Eagle: On Being Fluent in “American”

“How tame will his language sound, who would describe Niagara in language fitted for the falls at London bridge, or attempt the majesty of the Mississippi in that which was made for the Thames?” —North American Review (1815)

“In the four quarters of the globe, who reads an American book?” —Edinburgh Review (1820)

Turning from an eastern dusk and towards a western dawn, Benjamin Franklin miraculously saw a rising sun on the horizon after having done some sober demographic calculations in 1751. Not quite yet the aged, paunchy, gouty, balding raconteur of the American Revolution, but rather only the slightly paunchy, slightly gouty, slightly balding raconteur of middle age, Franklin examined the data concerning the births, immigration, and quality of life in the English colonies by contrast to Great Britain. In his Observations Concerning the Increase of Mankind, Franklin noted that a century hence “the greatest Number of Englishmen will be on this Side of the Water” (while also taking time to make a number of patently racist observations about a minority group in Pennsylvania—the Germans). For the scientist and statesman, such magnitude implied inevitable conclusions about empire.


Whereas London and Manchester were fetid, crowded, stinking, chaotic, and over-populated, Philadelphia and Boston were expansive, fresh, and had room to breathe. In Britain land was at a premium, but in America there was the seemingly limitless expanse stretching towards an unimaginable West (which was, of course, already populated by people). In the verdant fecundity of the New World, Franklin imagined (as many other colonists did) that a certain “Merry Old England” that had been supplanted in Europe could once again be resurrected, a land defined by leisure and plenty for the largest number of people. Such thoughts occurred, explains Henry Nash Smith in his classic study Virgin Land: The American West as Symbol and Myth, because “the American West was nevertheless there, a physical fact of great if unknown magnitude.”

As Britain expanded into that West, Franklin argued that the empire’s ambitions should shift from the nautical to the agricultural, from the oceanic to the continental, from sea to land. Smith describes Franklin as a “far-seeing theorist who understood what a portentous role North America” might play in a future British Empire. Not yet an American, but still an Englishman—and the gun smoke of Lexington and Concord still 24 years away—Franklin enthusiastically prophesies in his pamphlet: “What an Accession of Power to the British Empire by Sea as well as Land!”

A decade later, and he’d write in a 1760 missive to one Lord Kames that “I have long been of opinion, that the foundations of the future grandeur and stability of the British empire lie in America.” And so, with some patriotism as a trueborn Englishman, Benjamin Franklin could perhaps imagine the Court of St. James transplanted to the environs of the Boston Common, the Houses of Parliament south of Manhattan’s old Dutch Wall Street, the residence of George II of Great Britain moved from Westminster to Philadelphia’s Southwest Square. After all, it was the Anglo-Irish philosopher and poet George Berkeley, writing his lyric “Verses on the Prospect of Planting Arts and Learning in America” in his Rhode Island manse near Newport, who could gush that “Westward the course of empire takes its way.”

But even if aristocrats could perhaps share Franklin’s ambitions for a British Empire that stretched from the white cliffs of Dover to San Francisco Bay (after all, christened “New Albion” by Francis Drake), the idea of moving the capital to Boston or Philadelphia seemed anathema. Smith explained that it was “asking too much of an Englishman to look forward with pleasure to the time when London might become a provincial capital taking orders from an imperial metropolis somewhere in the interior of North America.” Besides, it was a moot point, since history would intervene. Franklin’s pamphlet may have been written a quarter century before, but the gun smoke of Lexington and Concord was soon wafting, and the idea of a British capital on American shores came to seem the stuff of alternate history, a historical absurdity.

There is something both evocative and informative, however, in this counterfactual: imagining a retinue of the Queen’s Guard processing down the neon skyscraper canyon of Broadway, or the dusk splintering off of the gold dome of the House of Lords at the crest of Beacon Hill overlooking Boston Common. Don’t mistake my enthusiasms for this line of speculation as evidence of a reactionary monarchism; I’m very happy that such a divorce happened, even if less than amicably. What does fascinate me is the way in which the cleaving of America from Britain affected how we understand each other, the ways in which we became “two nations divided by a common language,” as variously Mark Twain or George Bernard Shaw have been reported to have waggishly uttered.

Even more than the relatively uninteresting issue that the British spell their words with too many u’s, or say weird things like “lorry” when they mean truck, or “biscuit” when they mean cookie, is the more theoretical but crucial question of how we define national literatures. Why are American and British literature two different things if they’re mostly written in English, and how exactly do we delineate those differences? It can seem arbitrary that the supreme Anglophiles Henry James and T.S. Eliot are (technically) Americans, and that their British counterparts W.H. Auden and Dylan Thomas can seem so fervently Yankee. Then there is what we do with the early folks: is the “tenth muse,” colonial poet Anne Bradstreet, British because she was born in Northampton, England, or was she American because she died in Andover, Mass.? Was Thomas More “American” because Utopia is in the Western Hemisphere? Was Shakespeare a native because he dreamt of The Tempest? Such divisions make us question how language relates to literature, how literature interacts with nationality, and what that says vis-à-vis the differences between Britain and America, the lion and the eagle.

A difficulty emerges in separating two national literatures that share a common tongue, and that’s because traditionally literary historians equated “nation with a tongue,” as critic William C. Spengemann writes in A New World of Words: Redefining Early American Literature, explaining how what gave French literature or German literature a semblance of “identity, coherence, and historical continuity” was that they were defined by language and not by nationality. By such logic, if Dante was an Italian writer, Jean-Jacques Rousseau a French one, and Franz Kafka a German author, then it was because those were the languages in which they wrote, despite their being, respectively, Florentine, Swiss, and Czech.

Americans, however, largely speak in the tongue of the country that once held dominion over the thin sliver of land that stretched from Maine to Georgia, and as far west as the Alleghenies. Thus, an unsettling conclusion has to be drawn: perhaps the nationality of Nathaniel Hawthorne, Herman Melville, and Emily Dickinson was American, but the literature they produced would be English, so that with horror we realize that Walt Whitman’s transcendent “barbaric yawp” would be growled in the Queen’s tongue. Before he brilliantly complicates that logic, Spengemann sheepishly concludes, “writings in English by Americans belong, by definition, to English literature.”

Sometimes misattributed to linguist Noam Chomsky, the quip that a “language is a dialect with an army and a navy” actually belongs to the scholar of Yiddish Max Weinreich. If that’s the case, then a national literature is the bit of territory that that army and navy police. But what happens when that language is shared by two separate armies and navies? To what nation does the resultant literature then belong? No doubt there are other countries where English is the lingua franca; yet all of this anxiety over the difference between British and American literature doesn’t seem quite so involved when it comes to British literature’s relationship to poetry and prose from Canada, Australia, or New Zealand, for that matter.

Sometimes those national literatures, and especially English writing from former colonies like India and South Africa, are still folded into “British Literature.” So Alice Munro, Les Murray, Clive James, J.M. Coetzee, and Salman Rushdie could conceivably be included in anthologies or survey courses focused on “British Literature”—though they’re Canadian, Australian, South African, and Indian—where it would seem absurd to similarly include William Faulkner and Toni Morrison in the same course. Some anthologizers who are seemingly unaware that Ireland isn’t Great Britain will even include James Joyce and W.B. Yeats alongside Gerard Manley Hopkins and D.H. Lawrence as somehow transcendentally “English,” but the similar (and honestly less offensive) audacity of including Robert Lowell or Sylvia Plath as “English” writers is unthinkable.

Perhaps it’s simply a matter of shared pronunciation, the superficial similarity of accent that makes us lump Commonwealth countries (and Ireland) together as “English” literature, but something about that strict division between American and British literature seems more deliberate to me, especially since they ostensibly do share a language. There is an irony in that though, for as a sad and newly independent American said in 1787, “a national language answers the purpose of distinction: but we have the misfortune of speaking the same language with a nation, who, of all people in Europe, have given, and continue to give us the fewest proofs of love.”

Before embracing English, or pretending that “American” was something different from English, both the Federal Congress and several state legislatures considered making variously French, German, Greek, and Hebrew our national tongue, only to wisely reject the idea of one preferred language. So unprecedented and so violent was the breaking of America with Britain (it did, after all, signal the end of the first British Empire), and so conscious was the construction of a new national identity in the following century, that it seems inevitable that these new Federalists would also declare an independence for American literature.

One way this was done, shortly after the independence of the Republic, is exemplified by the fantastically named New York Philological Society, whose 1788 charter proclaimed its founding “for the purpose of ascertaining and improving the American Tongue.” If national literatures were defined by language, and America’s language was already the inheritance of the English, well then, these brave philologists would just have to transform English into American. Historian Jill Lepore writes in A Is for American: Letters and Other Characters in the Newly United States that the society was created in the “aftermath of the bloody War for Independence” in the hope that “peacetime America would embrace language and literature and adopt…a federal, national language.”

Lepore explains the arbitrariness of national borders; their contingency in the time period when the United States was born; and the manner in which, though language is tied to nationality, such an axiom is shot through with ambiguity, for countries are diverse of speech and the majority of a major language’s speakers historically don’t reside in the birthplace of that tongue. For the New York Philological Society, it was imperative to come up with an American tongue, to “spell and speak the same as one another, but differently from people in England.” So variable were (and are) the dialects of the United States, from the non-rhotic dropped “r” of Boston and Charleston, which in the 18th century was just developing in imitation of English merchants, to the guttural, piratical swagger of Western accents in the Appalachians, that one lexicographer hoped to systematize such diversity into a unique American language. As one of those assembled scholars wrote, “Language, as well as government should be national…America should have her own distinct from all the world.” His name was Noah Webster and he intended his American Dictionary to birth an entirely new language.

It didn’t of course, even if “u” fell out of “favour.” Such was what Lepore describes as an orthographic declaration of independence, for “there iz no alternativ” as Webster would write in his reformed spelling. But if Webster’s goal was the creation of a genuine new language, he inevitably fell short, for the fiction that English and “American” are two separate tongues is as political as is the claim that Bosnian and Serbian, or Swedish and Norwegian, are different languages (or that English and Scots are the same, for that matter). British linguist David Crystal writes in The Stories of English that “The English language bird was not freed by the American manoeuvre,” his Anglophilic spelling a direct rebuke to Webster, concluding that the American speech “hopped out of one cage into another.”  

Rather, the intellectual class turned towards literature itself to distinguish America from Britain, seeing in the establishment of writers who surpassed Geoffrey Chaucer or Shakespeare a national salvation. Melville, in his consideration of Hawthorne and American literature more generally, predicted in 1850 that “men not very much inferior to Shakespeare are this day born on the banks of the Ohio.” Three decades before that, and the critic Sydney Smith had a different evaluation of the former colonies and their literary merit, snarking interrogatively in the Edinburgh Review that “In the four quarters of the globe, who reads an American book?” In the estimation of the Englishmen (and Scotsmen) who defined the parameters of the tongue, Joel Barlow, Philip Freneau, Hugh Henry Brackenridge, Washington Irving, Charles Brockden Brown, and James Fenimore Cooper couldn’t compete with the sublimity of British literature.

Yet, by 2018, British critics weren’t quite as smug as Smith had been, as more than 30 authors protested the decision four years earlier to allow Americans to be considered for the prestigious Man Booker Prize. In response to the winning of the prize by George Saunders and Paul Beatty, English author Tessa Hadley told The New York Times “it’s as though we’re perceived…as only a subset of U.S. fiction, lost in its margins and eventually, this dilution of the community of writers plays out in the writing.” Who in the four corners of the globe reads an American novel? Apparently, the Man Booker committee. But Hadley wasn’t, in a manner, wrong in her appraisal. By the 21st century, British literature is just a branch of American literature. The question is how did we get here?

Only a few decades after Smith wrote his dismissal, and writers would start to prove Melville’s contention, including of course the author of Benito Cereno and Billy Budd himself. In the decade before the Civil War there was a Cambrian Explosion of American letters, as suddenly Romanticism had its last and most glorious flowering in the form of Ralph Waldo Emerson, Henry David Thoreau, Hawthorne, Melville, Emily Dickinson, and of course Walt Whitman. Such was the movement whose parameters were defined by the seminal scholar F.O. Matthiessen in his 1941 classic American Renaissance: Art and Expression in the Age of Emerson and Whitman, which included studies of all of those figures save (unfortunately) Dickinson.

The Pasadena-born but Harvard-trained Matthiessen remains one of the greatest professors to ever ask his students to close-read a passage of Emerson. American Renaissance was crucial in discovering American literature as something distinct from British literature and great in its own right. Strange to think now, but for most of the 20th century the teaching of American literature was left to either history classes or the nascent (State Department-funded) discipline of American Studies. English departments, true to the very name of the field, tended to see American poetry and novels as beneath consideration in the study of “serious” literature. The general attitude of American literary scholars about their own national literature can be summed up by Victorian critic Charles F. Richardson, who, in his 1886 American Literature, opined that “If we think of Shakespeare, Bunyan, Milton, the seventeenth-century choir of lyrists, Sir Thomas Browne, Jeremy Taylor, Addison, Swift, Dryden, Gray, Goldsmith, and the eighteenth-century novelists…what shall we say of the intrinsic worth of most of the books written on American soil?” And that was a book by an American about American literature!

On the eve of World War II, Matthiessen approached his subject in a markedly different way, and while scholars had already granted the field a measure of respectability, American Renaissance was chief in radically altering the manner in which we thought about writing in English (or “American”). Matthiessen surveyed the diversity of the 1850s, from the repressed Puritanism of Hawthorne’s The Scarlet Letter to the Pagan-Calvinist cosmopolitan nihilism of Moby-Dick and the pantheistic manual that is Thoreau’s Walden, in light of the exuberant homoerotic mysticism of Whitman’s Leaves of Grass. Despite such diversity, Matthiessen concluded that the “one common denominator of my five writers…was their devotion to the possibilities of democracy.” Matthiessen was thanked for his discovery of American literature by being hounded by the House Un-American Activities Committee over his left-wing politics, until the scholar jumped from the 12th floor of Boston’s Hotel Manger in 1950. The critic’s commitment to democracy was all the more poignant in light of his death, for Matthiessen understood that it’s in negotiation, diversity, and collaboration that American literature truly distinguished itself. Here was an important truth, the one Spengemann identifies when he argues that American literature is defined not by the language in which it is written (as with British literature), but rather that “American literature had to be defined politically.”

When considering the American Renaissance, an observer might be tempted to whisper “Westward the course of empire,” indeed. Franklin’s literal dream of an American capital to the British Empire never happened. Yet by the decade when the United States would wrench itself apart in an apocalyptic civil war, America would ironically become the figurative capital of the English language. From London the energy, vitality, and creativity of the English language would move westward to Massachusetts in the twilight of Romanticism, and after a short sojourn in Harvard and Copley Squares, Concord and Amherst, it would migrate towards New York City with Whitman, which, if not the capital of the English, became the capital of the English language. From the 1850s onward, British literature became a regional variation of American literature, a branch on the latter’s tree, a mere phylum in its kingdom.

English literary critics of the middle part of the 19th century didn’t note this transition; it’s arguable whether British reviewers in the middle part of the 20th century did either, but if they didn’t, it’s an act of critical malpractice. For who would trade the epiphanies in ballad-meter that are the lyrics of Dickinson for the arid flatulence of Alfred, Lord Tennyson? Who would reject the maximalist experimentations of Melville for the reactionary nostalgia of chivalry in Walter Scott? Something ineffable crossed the Atlantic during the American Renaissance, and the old problem of how we could call American literature distinct from British since both are written in English was solved—the latter is just a subset of the former.

Thus, Hadley’s fear is a reality, and has been for a while. Decades before Smith would mock the pretensions of American genius, the English gothic novelist (and son of the first prime minister) Horace Walpole would write, in a 1774 letter to Horace Mann, that the “next Augustan age will dawn on the other side of the Atlantic. There will, perhaps, be a Thucydides at Boston, a Xenophon at New York, and, in time, a Virgil at Mexico, and an Isaac Newton at Peru. At last, some curious traveler from Lima will visit England and give a description of the ruins of St. Paul’s.” Or maybe Walpole’s curious traveler was from California, as our language’s literature has ever moved west, with Spengemann observing that if by the American Renaissance the “English-speaking world had removed from London to the eastern seaboard of the United States, carrying the stylistic capital of the language along with it…[then] Today, that center of linguistic fashion appears to reside in the vicinity of Los Angeles.” Franklin would seem to have gotten his capital, staring out onto a burnt ochre dusk over the Pacific Palisades, as westward the course of empire has deposited history in that final location of Hollywood.

John Leland writes in Hip: The History that “Three generations after Whitman and Thoreau had called for a unique national language, that language communicated through jazz, the Lost Generation and the Harlem Renaissance…American cool was being reproduced, identically, in living rooms from Paducah to Paris.” Leland concludes it was “With this voice, [that] America began to produce the popular culture that would stamp the 20th century as profoundly as the great wars.” What’s been offered by the American vernacular, by American literature as broadly constituted and including not just our letters but popular music and film as well, is a rapacious, energetic, endlessly regenerative tongue whose power derives not from circumscription but from its porous and absorbent ability to draw from the variety of languages that have been spoken in this land. Not just English, but languages from Algonquin to Zuni.

No one less than the great grey poet Whitman himself wrote, “We Americans have yet to really learn our own antecedents, and sort them, to unify them. They will be found ampler than has been supposed, and in widely different sources.” In remarks addressed to an assembly gathered to celebrate the 333rd anniversary of Santa Fe, Whitman surmised that “we tacitly abandon ourselves to the notion that our United States have been fashion’d from the British Islands only, and essentially form a second England only—which is a very great mistake.” He of the multitudinous crowd, the expansive one, the incarnation of America itself, understood better than most that the English tongue alone can never define American literature. Our literature is Huck and Jim heading out into the territories, and James Baldwin drinking coffee by the Seine; Dickinson scribbling on the backs of envelopes and Miguel Piñero at the Nuyorican Poets Café. Do I contradict myself? Very well then. Large?—of course. Contain multitudes?—absolutely.

Spengemann glories in the fact that “if American literature is literature written by Americans, then it can presumably appear in whatever language an American writer happens to use,” be it English or Choctaw, Ibo or Spanish. Rather than debating what the capital of the English language is, I advocate that there shall be no capitals. Rather than making borders between national literatures we must rip down those arbitrary and unnecessary walls. There is a national speech unique to everyone who puts pen to paper; millions of literatures for millions of speakers. How do you speak American? By speaking English. And Dutch. And French. And Yiddish. And Italian. And Hebrew. And Arabic. And Ibo. And Wolof. And Iroquois. And Navajo. And Mandarin. And Japanese. And of course, Spanish. Everything can be American literature because nothing is American literature. Its promise is not just a language, but a covenant.

Image credit: Freestock/Joanna Malinowska.

From Father Divine to Jim Jones: On the Phenomenon of American Messiahs

“And you can call me an egomaniac, megalomaniac, or whatever you wish, with a messianic complex. I don’t have any complex, honey. I happen to know I’m the messiah.” —Rev. Jim Jones, FBI Tape Q1059-1

“There’s a Starman, waiting in the sky/He’d like to come and meet us/But he thinks he’d blow our minds.” —David Bowie

Philadelphia, once the second-largest city in the British Empire, was now the capital of these newly united states when the French diplomat and marquis François Barbé-Marbois attended a curious event held at the Fourth Street Methodist Church in 1782. Freshly arrived in the capital was a person born to middling stock in Cumberland, R.I., and christened as Jemima Wilkinson, but who had, since becoming possessed with the spirit of Jesus Christ in October of that revolutionary year of 1776, been known as “The Comforter,” “Friend of Sinners,” “The All-Friend,” and most commonly as “Public Universal Friend,” subsequently wearing only masculine garments and answering only to male pronouns. Such is the description of the All-Friend as given by Adam Morris in American Messiahs: False Prophets of a Damned Nation, where the “dark beauty of the Comforter’s androgynous countenance” appeared atop a “well-apportioned female body cloaked in black robes along with a white or purple cravat, topped by a wide-brimmed hat made of gray beaver fur.” Reflecting back on the theophany that was responsible for his adoption of the spirit of Christ, the Public Universal Friend wrote about how the “heavens were open’d And She saw too [sic] Archangels descending from the east, with golden crowns upon there heads, clothed in long white Robes, down to their feet.”

Even though the Quakers, with their reliance on revelation imparted from an “Inner Light,” were already a progressive (and suspect) denomination, heresy such as the All-Friend’s earned him rebuke and excommunication from the church of his childhood. But that was of no consequence, as the Public Universal Friend would begin a remarkably successful campaign of evangelization across war-torn New England. Preaching a radical gospel that emphasized gender equality and communalism, the All-Friend was the leader of an emerging religious movement and a citizen of the new Republic when he arrived at the very heart of Quakerism that was Philadelphia, where he hoped to convert multitudes towards this new messianic faith.

Barbé-Marbois was excited to see the new transgendered messiah, writing that “Jemima Wilkinson has just arrived here…Some religious denomination awaited her with apprehension, others with extreme impatience. Her story is so odd, her dogmas so new, that she has not failed to attract general attention.” All-Friend was not impressed by rank or pedigree, however, and having been tipped off to the presence of the marquis among the congregation, he proceeded to castigate Barbé-Marbois, declaring, “Do these strangers believe that their presence in the house of the Lord flatters me? I disdain their honors, I scorn greatness and good fortune.” For all of the All-Friend’s anger at the presence of this sinner in his temple, Barbé-Marbois was still charmed by the messiah, writing of how All-Friend “has chosen a rather beautiful body for its dwelling…beautiful features, a fine mouth, and animated eyes,” adding that his “travels have tanned her a little.” Though he also noted that they “would have burned her in Spain and Portugal.”

Such is the sort of spectacular story that readers will find in Morris’s deeply researched, well-argued, exhaustive, and fascinating new book. A San Francisco-based translator and freelance journalist whose work has appeared in The Believer, The Los Angeles Review of Books, and Salon, Morris provides in his first book a counter-history of a shadow America whose importance and influence are no less for its oddity. His narrative stretches from the All-Friend and his heterodox preaching as the last embers of the First Great Awakening died out, through case studies that include the 19th-century Spiritualist messiahs Thomas Lake Harris and Cyrus Teed (the latter known to his followers as “Koresh,” after the Persian king in the Hebrew Scriptures) and the massive real-estate empire of Harlem-based Father Divine and his racially egalitarian Peace Mission, to, finally, the dashed promise and horror of Jim Jones and the massacre of the Peoples Temple Agricultural Project at Jonestown, Guyana, in 1978, which took more than 900 lives and was the most brutal religiously based murder in American history until 9/11. Morris doesn’t just collect these figures at random; he argues that “a direct lineage connects Anne Lee, the Shaker messiah who arrived to American shores in 1774, to Jim Jones, the holy-rolling Marxist,” making his claim based not just on similar ideological affinities but oftentimes on evidence of direct historical contact as well, a chain of messianic influences starting at the very origin of the nation and functioning as a subversive counter-melody to the twin American idols of the market and evangelicalism.

Often dismissively rejected as “cult leaders,” these disparate figures, Morris argues, are better understood as the founders of new religions, unusual though they may seem to us. In this contention, Morris draws on a generation of religious studies scholars who’ve long chafed at the analytical inexactitude of the claim that some groups are simply “cults” composed of easily brain-washed followers and paranoid charlatan leaders with baroque metaphysical claims; a sentiment that was understandable after the horror of Jonestown, but which neglects the full complexities and diversity of religion as a site for human meaning. America has had no shortage of groups, both malicious and benign, that sought to spiritually transform the world, and denounce the excesses and immoralities of American culture, while sometimes surpassing them and embracing convoluted theological claims.

In new religious movements there is a necessary radical critique of inequity and injustice; as Jonestown survivor Laura Kohl recalls, the utopian impulse that motivated the Peoples Temple was originally “at the very cutting edge of the way we wanted society to evolve. So we wanted to be totally integrated and we wanted to have people of every socio-economic level, every racial background, everything, all included under one roof.” When their experiment began, people like Kohl couldn’t anticipate that it would end with mass suicide instigated by an increasingly paranoid amphetamine addict, because in the beginning “we were trying to be role models of a society, of a culture that was totally inclusive and not discriminatory based on education or race or socio-economic level. We all joined with that in mind.”

Such is the inevitable tragedy of messianism—it requires a messiah. Whatever the idealistic import of their motivations, members of such groups turned their hopes, expectations, and faith towards a leader who inevitably would begin to fashion himself or herself as a savior. Whether you slur them as “cults,” or soberly refer to them as “new religions,” such groups stalk the American story, calling to mind not just industrious Shakers making their immaculate wooden furniture in the 19th century, but also Bhagwan Shree Rajneesh looking positively beatified in his follower-funded Mercedes-Benz, Marshall Applewhite dead in his Nikes while waiting for the Hale-Bopp Comet to shepherd Heaven’s Gate to astral realms, and David Koresh immolated with 75 of his fellow Branch Davidians after a 51-day standoff with the FBI and the ATF.

While it would be a mistake to obscure the latent (and sometimes not-so-latent) darkness that can lay at the core of some of these groups—the exploitation, megalomania, and extremism—Morris convincingly argues that simply rejecting such groups as cults does no service to understanding them. This sentiment, that some religions are legitimate while others are irretrievably “cults,” is often mouthed by so-called “deprogrammers,” frequently representatives of evangelical Christian denominations or sham psychologists whose charlatanry could compete with that of the founders of these new faiths themselves. Morris claims that part of what’s so threatening about the figures he investigates, and not other equally controlling phenomena from Opus Dei to the Fellowship, is that unlike those groups, leaders like Ann Lee, Father Divine, and even Jones provided a “viable alternative to the alienation of secularized industrial urbanism, and a politico-spiritual antidote to the anodyne mainline Protestantism that increasingly served as a handmaiden to big business.”

For Morris, such figures and groups are genuinely countercultural, and for all of their oftentimes tragic failings, they’ve provided the only genuine resistance to the forward march of capitalism in American history. These groups are seen as dangerous in a way that Opus Dei and the Fellowship aren’t because they so fully interrogate the basic structures of our society in a manner that those more accepted cults don’t, in part because those mainstream groups exist precisely to uphold the ruling class. As Teed wrote, “Christianity… is but the dead carcass of a once vital and active structure. It will rest supine till the birds of prey, the vultures and cormorants of what is falsely called liberalism have picked its bones of its fleshly covering leaving them to dry to bleach and decompose.”

“Far more than their heretical beliefs,” Morris writes, it is the “communistic and anti-family leanings of American messianic movements [that] pose a threat to the prevailing socio-economic order.” Lee may have thought that she was the feminine vessel for the indwelling presence of Christ, but she also rejected the speculation-based industrialization wreaking havoc on the American countryside; Teed believed that the Earth was hollow and that we lived in its interior, but he also penned cogent denunciations of Gilded Age capitalism, and modeled ways of organizing cooperative communities that didn’t rely on big business; Father Divine implied that he had descended bodily from heaven, but he also built a communally owned empire that included some of the first integrated businesses and hotels in the United States; and Jones claimed that he was variously the reincarnation of the Pharaoh Akhenaten, the Buddha, Christ, and Vladimir Lenin, but he was also a staunch fighter against segregation, and he and his wife were the first Indiana couple to adopt an African-American child. Such inconsistencies don’t invalidate these figures’ legitimate beliefs, but they do make the ultimate fate of so many of these movements all the more tragic.

Though not expressly stated this way by Morris, there is a sense in American Messiahs that the titular figures are both the most and least American of us. The least because they embrace a utopian communism anathema to the national character, and the most because what could be more boot-strapping in its rugged individualism than declaring yourself to be a god? Such are the paradoxes inherent in a commune run by a king (or queen). These new religions steadfastly reject the cutthroat avarice that defines American business in favor of a subversive collectivism. At the center of the social theory and economics of every group, from the Universal Friends to the Peoples’ Temple, is a model of communal living, with Morris writing that such organization “represents the ultimate repudiation of the values and institutions that Americans historically hold dear: it rejects not only the sacrosanct individualism on which American culture thrives, but also the nuclear family unit that evolved alongside industrial capitalism.”

In this way, whether in a Shaker commune or one of Father Divine’s Peace Missions, the intentional community offers “an escape from the degrading alienation of capitalist society by returning—once more—to cooperative and associative modes of living modeled by the apostolic church.” But at the same time, there is the inegalitarian contradiction of allowing a woman or man to be preeminent among the rest, to fashion themselves as a messiah, as a type of absolute CEO of the soul. Such is the marriage of the least American of collectivisms with a rugged individualism pushed to the breaking point. And so, America is the land of divine contradictions, for as Harris wrote in his manifesto Brotherhood of the New Life, “I have sought to fold the genius of Christianity, to fathom its divine import, and to embody its principles in the spirit and body of our own America.”

Despite the antinomian excesses of these messiahs, Morris is correct in his argument that the social and economic collectivism promoted and explored by such groups is among the most successful instances of socialism in the United States. A vibrant radical left in America has always been under attack by both capital and the government, so that enduring instances of socialism more often find themselves in a religious context than in a secular one, as failed communes from Brook Farm to Fruitlands can attest. Additionally, because it was the reform-minded French sociologist “Charles Fourier, rather than Karl Marx, who set the agenda of socialist reform in antebellum America,” as Morris writes, there was never the opportunity for a working-class, secular communist party to develop in the United States as it did in Europe. Into that void came sects and denominations that drew freely from America’s religious diversity, marrying Great Awakening evangelism to early modern occultism and 19th-century spiritualism so as to develop heterodox new faiths that often cannily and successfully questioned the status quo; what Morris describes as a “nonsectarian, ecumenical Christian church rebuilt around Christian principles of social justice [that] could be an instrument of radical social reform.”

And though their struggles for racial, gender, and class egalitarianism are often forgotten, Morris does the important job of resuscitating an understanding of how groups from the Shakers to Teed’s Koreshan Unity functioned as vanguards for progressive ideals that altered the country for the better. In ways that are both surprising and crucial to remember, such groups were often decades, if not centuries, ahead of their fellow Americans in embracing the axiom that all people are created equal, and in goading the nation to live up to its highest ideals, while demonstrating the efficacy of religious movements in effecting social change and showing that there are alternative ways to structure our communities. After all, it was in the 18th century that Shakers taught a form of nascent feminism and LGBTQ equality, with their founder Mother Ann Lee publicly praying to “God, Our Mother and Father.”

In groups like the Shakers and the Society of Universal Friends there was an “incipient feminism, which continued to develop” over the centuries, one that defined American messianism in the attraction of a “predominantly female following often through promises of equal rights among the faithful.” Women were able to find leadership positions that existed nowhere else in American society at the time, and they would also be emancipated from mandated domestic drudgery and abuse, allowing them an almost unparalleled freedom. As Morris notes, “For the tens of thousands of Americans who attended [these revivals], it was likely the first time any had ever seen a woman permitted to stand at a public lectern.” To read the radicalism of such a spectacle as mere performance, or to dismiss such subversive language as simple rhetoric, is to deny that which is transgressive, and perhaps even redemptive, in figures otherwise marginalized in our official histories. An 18th-century church acknowledged the evils of misogyny and made their rectification one of its chief goals. Nineteenth-century new religions denounced institutional racism from the pulpit long before emancipation. There is admittedly something odd in Spiritualist churches holding seances where the spirits of Thomas Jefferson and George Washington would possess mediums and speak to the assembled, but the fact that those founders then “tended to express regret for their participation in the slave economy, and advanced that a more temperate and egalitarian society would ease the heavenward path of American Christians” isn’t just an attempt at rationalization or absolution, but an acknowledgement of deep historical malignancies in our society that people still have trouble understanding today.

Regarding America’s shameful history of racial inequality, many of these so-called messiahs were often far ahead of conventional politics as well. Father Divine’s Peace Missions are an illustrative example. Possibly born to former slaves in Rockville, Md., and christened George Baker Jr., the man who would come to be called Father Divine studied the emerging loose movement known as “New Thought” and developed his own metaphysics grounded in egalitarianism and the same adoptionist heresy that other pseudo-messiahs had embraced, namely that it was possible to be infused with the spirit of Christ in a way that made you equivalent to Christ. Furthermore, as radical as it was to see a woman preaching, it was just as subversive for Americans to imagine Christ as a black man, and Father Divine’s mission was suffused with the belief that “if God first chose to dwell among a people dispossessed of their land and subject to the diktats of a sinful empire, it was logical for him to return as an African American in the twentieth century.”

Morris explains that it was Father Divine’s contention, as well as that of the other messiahs, that “Such a church might rescue the nation from its spiritual stagnation and the corresponding failure to live up to its democratic ideals.” At the height of the Great Depression, Father Divine built a racially integrated fellowship in a series of communally owned Peace Missions that supplied employment and dignity to thousands of followers, composed of “black and white, rich and poor, illiterate and educated.” His political platform, which he agitated for in circles that included New York City Republican (and sometimes Socialist) mayor Fiorello LaGuardia, as well as President Franklin Delano Roosevelt, included laws requiring insurance, caps on union dues, the abolition of the death penalty, banning of assault weapons, and the repeal of racially discriminatory laws. Even while Morris’s claim that “Father Divine was the most well-known and influential civil rights leader on the national stage between the deportation of Marcus Garvey and the emergence of Martin Luther King Jr.” might need a bit more exposition to be convincing, he does make his case that this enigmatic and iconoclastic figure deserves far more attention and credit than he’s been given.

At times American Messiahs can suffer a bit from its own enthusiasm. It’s always an engaging read, but Morris’s claims can occasionally be a bit too sweeping. After enough quotations from pseudo-messiahs like Teed, who writes that “The universe, is an alchemico-organic dynamo… Its general form is that of the perfect egg or shell, with its central vitellus at or near the center of the sphere,” such exhaustive litanies become exhausting, because strange cosmologies are, well, strange. When you read about the course of study at Teed’s College of Life, where for $50 participants could learn “analogical biology and analogical physiology, disciplines that taught the ‘laws of universal form,'” you can’t help but wish for just a smidgen more cynicism on Morris’s part. There is something admirable in Morris taking such figures on their own terms, but it’s hard not to approach them with some skepticism. When it comes to hucksters claiming to be possessed by the indwelling presence of God, it’s difficult not to declare that sometimes a crank is a crank is a crank, and a nutcase by any other name would still smell of bullshit.

Still, Morris’s argument that we have to understand the messiahs as instances of often stunningly successful countercultural critique of American greed and inequity is crucial, and he’s right that we forget those lessons at our own peril. If we only read Teed for his bizarre anti-Copernican cosmology positing that we live within the Earth itself, but we forget that he also said that the “question for the people of to-day to consider is that of bread and butter. It must henceforth be a battle to the death between organized labor and organized capital,” then we forget the crucial left role that such groups have often played in American history. Even more important is the comprehension that such groups supplied a potent vocabulary, a sacred rhetoric that often spoke to people and that conceived of the political problems we face in a manner more astute and moving than simple secular analysis did. It’s not incidental that from the Shakers to the Amish, when it comes to successful counter-cultures, it’s the religious rather than the secular communes that endure. When Father Divine’s follower who went by the name John Lamb said that “The philosophies of men…were inadequate to cope with humanity’s problems,” he was wisely speaking a truth just as accurate today as during the Great Depression, when “clouds of tyranny” were wafting in from Europe. Say what you will about Morris’s messiahs, they were often women and men who understood the score, and who proposed a solution to those problems regardless of how heterodox we may have found their methods.

Morris writes that “The American messianic impulse is based on a fundamentally irrefutable truth first observed by the Puritans: the injustices of capitalist culture cannot be reformed from within.” You can bracket out your theories of a hollow Earth and your spirit medium seances, but that observation is the one worth remembering, a profound lesson to be learned from the example of the women and men who most steadfastly lived in opposition to those idols of American culture. And yet, we can’t forget where the excesses of such opposition can sometimes lead—it’s hard to wash the taste of the poisoned Kool-Aid of Jonestown from our mouths. Faced with such a tragic impasse, what are those of us who wish there were utopias supposed to do? The great failure of these egalitarian experiments is that they paradoxically ended up enshrining a figure above the rest, that in rejecting American individualism they strangely embraced it in its purest and most noxious forms. If we could all look into those stark, foreboding mirror sunglasses favored by Jim Jones to conceal his drug-addled and bloodshot eyes, I wonder if what we would see isn’t our own reflections staring back at us, for good and ill. Perhaps there is an answer in that, for in that reflection we can see not just one man, but rather a whole multitude. What’s needed, if possible, is a messianism without a messiah.

Marks of Significance: On Punctuation’s Occult Power

“Prosody, and orthography, are not parts of grammar, but diffused like the blood and spirits throughout the whole.” —Ben Jonson, English Grammar (1617)

Erasmus, author of The Praise of Folly and the most erudite, learned, and scholarly humanist of the Renaissance, was enraptured by the experience, by the memory, by the very idea of Venice. For 10 months from 1507 to 1508, Erasmus would be housed in a room of the Aldine Press, not far from the piazzas of St. Mark’s Square with their red tiles burnt copper by the Adriatic sun, the glory and the stench of the Grand Canal wafting into the cell where the scholar would expand his collection of 3,260 proverbs entitled Thousands of Adages, his first major work. For Venice was the home to a “library which has no other limits than the world itself;” a watery metropolis and an empire of dreams that was “building up a sacred and immortal thing.”

Erasmus composed to the astringent smell of black ink rendered from the resin of gall nuts, the rhythmic click-click-click of movable type cast from a lead-tin alloy being set, and the whoosh of paper feeding through the press. From that workshop would come more than 100 titles in Greek and Latin, all published with the indomitable Aldus Manutius’s watermark, an image filched from an ancient Roman coin depicting a strangely skinny Mediterranean dolphin inching down an anchor. Reflecting on that watermark (which has since been filched again, by the modern publisher Doubleday), Erasmus wrote that it symbolized “all kinds of books in both languages, recognized, owned and praised by all to whom liberal studies are holy.” Adept in humanistic philology, Erasmus made an entire career by understanding the importance of a paragraph, a phrase, a word. Of a single mark. As did his publisher.

Erasmus’s printer was visionary. The Aldine Press was the first in Europe to produce books made not by folding the printed sheets once (as in a folio) or twice (as in a quarto), but three times, to produce volumes that could be as small as four to six inches, the so-called octavo. Such volumes could be put in a pocket, constituting the forerunner of the paperback, which Manutius advertised as “portable small books.” Now volumes no longer had to be cumbersome tomes chained to the desk of a library; they could be squirreled away in a satchel, the classics made democratic. When laying the typeface for a 1501 edition of Virgil in the new octavo form, Manutius commissioned a Bolognese punchcutter named Francesco Griffo to design a font that appeared calligraphic. Taking the poet Petrarch’s handwriting as his model, Griffo invented a slanted typeface that printers quickly learned could denote emphasis, which came to be named after the place of its invention: italic.

However, it was an invention seven years earlier that restructured not just how language appears, but indeed the very rhythm of sentences; for, in 1494, Manutius introduced a novel bit of punctuation, a jaunty little man with leg splayed to the left as if he were pausing to hold open a door for the reader before they entered the next room, the odd mark at the caesura of this byzantine sentence that is known to posterity as the semicolon. Punctuation exists not in the wild; it is not a function of how we hear the word, but rather of how we write the Word. It is what the theorist Walter Ong described in his classic Orality and Literacy as marks that are “even farther from the oral world than letters of the alphabet are: though part of a text they are unpronounceable, nonphonemic.” None of our notations are implied by mere speech; they are creatures of the page: comma, and semicolon; (as well as parentheses and what Ben Jonson appropriately referred to as an “admiration,” but what we call an exclamation mark!)—the pregnant pause of a dash and the grim finality of a period. Has anything been left out? Oh, the ellipses…

No doubt the prescriptivist critic of my flights of grammatical fancy in the previous few sentences would note my unorthodox usage, but I do so only to emphasize how contingent and mercurial our system of marking written language was until around four or five centuries ago. Manutius may have been the greatest of European printers, but from Johannes Gutenberg to William Caxton, the era’s publishers oversaw the transition from manuscript to print alongside an equivalent metamorphosis of language from oral to written, from the ear to the eye. Paleographer Malcolm Parkes writes in his invaluable Pause and Effect: An Introduction to the History of Punctuation in the West that such a system is a “phenomenon of written language, and its history is bound up with that of the written medium.” Since the invention of script, there has been a war of attrition between the spoken and the written; battle lines drawn between rhetoricians and grammarians, between sound and meaning. Such is the distinction explained by linguist David Crystal in Making a Point: The Persnickety Story of English Punctuation: “writing and speech are seen as distinct mediums of expression, with different communicative aims and using different processes of composition.”

Obviously, the process of making this distinction has been going on for quite a long time. It began the moment the first wedged stylus pressed into wet Mesopotamian clay, continuing through ancient Greek diacritical and Hebrew pointing systems, up through when medieval scribes first began to separate words from endless scriptio continua, whichbroachednogapsbetweenwordsuntiltheendofthemiddleages. Reading, you see, was normally accomplished out loud, and the written word was less a thing-unto-itself and more a representation of a particular event—that is, the event of speaking. When this is the guiding metaphysic of writing, punctuation serves a simple purpose—to indicate how something is to be read aloud. Like the luftpause of musical notation, the nascent end stops and commas of antiquity didn’t exist to clarify syntactical meaning, but only to let you know when to take a breath. Providing an overview of punctuation’s genealogy, Alberto Manguel writes in A History of Reading how by the seventh century, a “combination of points and dashes indicated a full stop, a raised or high point was equivalent to our comma,” an innovation of Irish monks who “began isolating not only parts of speech but also the grammatical constituents within a sentence, and introduced many of the punctuation marks we use today.”

No doubt many of you, uncertain on the technical rules of comma usage (as many of us are), were told in elementary school that a comma designates when a breath should be taken, only to discover by college that that axiom was incorrect. Certain difficulties, with, that, way of writing, a sentence—for what if the author is Christopher Walken or William Shatner? Enthusiast of the baroque that I am, I’m sympathetic to writers who use commas as Hungarians use paprika. I’ll adhere to the claim of David Steel, who in his 1785 Elements of Punctuation wrote that “punctuation is not, in my opinion, attainable by rules…but it may be procured by a kind of internal conviction.” Steven Roger Fischer correctly notes in his A History of Reading (distinct from the Manguel book of the same title) that “Today, punctuation is linked mainly to meaning, not to sound.” But as late as 1589 the rhetorician George Puttenham could, in his Art of English Poesie, as Crystal explains, define a comma as the “shortest pause,” a colon as “twice as much time,” and an end stop as a “full pause.” Our grade school teachers weren’t wrong in a historical sense, then, for that was the purpose of commas, colons, and semicolons: to indicate pauses of certain lengths when scripture was being read aloud. All of the written word would have been quietly murmured under the breath of monks in the buzz of a monastic scriptorium.

For grammarians, punctuation has long been claimed as a captured soldier in the war of attrition between sound and meaning, these weird little marks enlisted in the cause of language as a primarily written thing. Fischer explains that “universal, standardized punctuation, such as may be used throughout a text in consistent fashion, only became fashionable…after the introduction of printing.” Examine medieval manuscripts and you’ll find that the orthography, that is the spelling and punctuation (insomuch as it exists), is completely variable from author to author—in keeping with an understanding that writing exists mainly as a means to perform speaking. By the Renaissance, print necessitated a degree of standardization, though far from uniform. This can be attested to by the conspiratorially minded who are flummoxed by Shakespeare’s name being spelled several different ways while he was alive, or by the anarchistic rules of 18th-century punctuation, the veritable golden age of the comma and semicolon. When punctuation becomes not just an issue of telling a reader when to breathe, but a syntactical unit that conveys particular meanings that could be altered by the choice or placement of these funny little dots, then a degree of rigor becomes crucial. As Fischer writes, punctuation came to convey “almost exclusively meaning, not sound,” and so the system had to become fixed in some sense.

If I may offer an additional conjecture, it would seem to me that there was a fortuitous confluence of both the technology of printing and the emergence of certain intellectual movements within the Renaissance that may have contributed to the elevation of punctuation. Johanna Drucker writes in The Alphabetic Labyrinth: The Letters in History and Imagination how Renaissance thought was gestated by “strains of Hermetic, Neo-Pythagorean, Neo-Platonic and kabbalistic traditions blended in their own peculiar hybrids of thought.” Figures like the 15th-century Florentine philosophers Marsilio Ficino and Giovanni Pico della Mirandola reintroduced Plato into an intellectual environment that had sustained itself on Aristotle for centuries. Aristotle rejected the otherworldliness of his teacher Plato, preferring rather to muck about in the material world of appearances, and when medieval Christendom embraced the former, it adopted his empirical perspective. Arguably the transcendent nature of words is less important in such a context; what does the placement of a semicolon matter if it’s not conveying something of the eternal realm of the Forms? But the Florentine Platonists like Ficino were concerned with such things, for as he writes in Five Questions Concerning the Mind (printed in 1495—one year after the first semicolon), the “rational soul…possesses the excellence of infinity and eternity…[for we] characteristically incline toward the infinite.” In Renaissance Platonism, the correct ordering of words, and their corralling with punctuation, is a reflection not of speech, but of something larger, greater, higher. Something infinite and eternal; something transcendent. And so, we have the emergence of a dogma of correct punctuation, of standardized spelling—of a certain “orthographic Platonism.”

Drucker explains that Renaissance scholars long searched “for a set of visual signs which would serve to embody the system of human knowledge (conceived of as the apprehension of a divine order).” In its most exotic form this involved the construction of divine languages, the parsing of Kabbalistic symbols, and the embrace of alchemical reasoning. I’d argue, in a more prosaic manner, that such orthographic Platonism is the wellspring of all prescriptivist approaches to language, where the manipulation of the odd symbols that we call letters and punctuation can lend themselves to the discovery of greater truths, an invention that allows us “to converse even with the absent,” as Parkes writes. In the workshops of the Renaissance, at the Aldine Press, immortal things were made of letters and eternity existed between them, with punctuation acting as the guideposts to a type of paradise. And so it can remain for us.

Linguistic prescriptivists will bemoan the loss of certain standards, claiming that text speak signals an irreducible entropy of communication, or that the abandonment of arbitrary grammatical rules is a sign out of Revelation. Yet such reactionaries are not the true guardians of orthographic Platonism, for we must take wisdom where we find it, in the appearance, texture, and flavor of punctuation. Rules may be arbitrary, but the choice of particular punctuation—be it the pregnant pause of the dash or the rapturous shouting of the exclamation mark—matters. Literary agent Noah Lukeman writes in A Dash of Style: The Art and Mastery of Punctuation that punctuation is normally understood as simply “a convenience, a way of facilitating what you want to say.” Such a limited view, implicit whether one treats punctuation as an issue of sound or as one of meaning, ignores the occult power of the question mark, the theurgy in a comma. The orthographic Platonists at the Aldine Press understood that so much depended on a semicolon; that it signified more than a longer-than-average pause or the demarcation of an independent clause. Lukeman argues that punctuation is rarely “pondered as a medium for artistic expression, as a means of impacting content,” yet in the most “profound way…it achieves symbiosis with the narration, style, viewpoint, and even the plot itself.”

Keith Houston in Shady Characters: The Secret Life of Punctuation, Symbols, and Other Typographical Marks claims that “Every character we type or write is a link to the past;” every period takes us back to the dots that Irish monks used to signal the end of a line; every semicolon back to Manutius’s Venetian workshop. Punctuation, like the letters it serves, has a deep genealogy, and its use places us in a chain of connotation and influence that goes back centuries. More than that, each individual mark has a unique terroir, doing things that give a sentence a waft, a wisdom, a rhythm particular to it. Considering the periods of Ernest Hemingway, the semicolons of Edgar Allan Poe and Herman Melville, and Emily Dickinson’s sublime dash, Lukeman writes that “Sentences crash and fall like the waves of the sea, and work unconsciously on the reader. Punctuation is the music of language.”

To get overly hung up on punctuation as either an issue of putting marks in the right place, or letting the reader know when they can gulp some air, is to miss the point—a comma is a poem unto itself, an exclamation point is an epic! Cecelia Watson writes in her new book, Semicolon: The Past, Present, and Future of a Misunderstood Mark, that Manutius’s invention “is a place where our anxieties and our aspirations about language, class, and education are concentrated.” And she is, of course, correct, as evidenced by all of those partisans of aesthetic minimalism from Kurt Vonnegut to Cormac McCarthy who’ve impugned the Aldine mark’s honor. But what a semicolon can do that other marks can’t! How it can connect two complete ideas into a whole; a semicolon is capable of unifications that a comma is too weak to accomplish alone. As Adam O’Fallon Price writes in The Millions, “semicolons…increase the range of tone and inflection at a writer’s disposal.” Or take the exclamation mark, a symbol that I’ve used roughly four times in my published writing, but which I deploy no less than 15 times per average email. A mark maligned for its emotive enthusiasms, the exclamation point gets its due from Nick Ripatrazone, who observes in The Millions that “exclamation marks call attention toward themselves in poems: they stand straight up.” Punctuation, in its own way, is conscious; it’s an algorithm, as much thought itself as a schematic showing the process of thought.

To take two poetic examples, what would Walt Whitman be without his exclamation mark; what would Dickinson be without her dash? They didn’t simply use punctuation for the pause of breath, nor to logically differentiate things with some grammatical-mathematical precision. Rather, they did do those things, but also so much more, for the union of exhalation and thought gestures toward that higher realm the Renaissance originators of punctuation imagined. What would Whitman’s “Pioneers! O pioneers!” from the 1865 Leaves of Grass be without the exclamation point? What argument could be made if that ecstatic mark were abandoned? What of the solemn mysteries in the portal that is Dickinson’s dash when she writes that “’Hope’ is the thing with feathers –”? Orthographic Platonism instills in us a wisdom beyond the arguments of rhetoricians and grammarians; it reminds us that more than simple notation, each mark of punctuation is a personality, a character, a divinity in itself.

My favorite illustration of that principle is in dramatist Margaret Edson’s sublime play W;t, the only theatrical work that I can think of that has New Critical close reading as one of its plot points. In painful detail, W;t depicts the final months of Dr. Vivian Bearing, a professor of 17th-century poetry at an unnamed, elite, eastern university, after she has been diagnosed with Stage IV cancer. While undergoing chemotherapy, Bearing often reminisces on her life of scholarship, frequently returning to memories of her beloved dissertation adviser, E.M. Ashford. In one flashback, Bearing remembers being castigated by Ashford for sloppy work: an interpretation of John Donne’s Holy Sonnet VI based on an incorrectly punctuated edition of the cycle. Ashford asks her student, “Do you think the punctuation of the last line of this sonnet is merely an insignificant detail?” In the version used by Bearing, Donne’s immortal line “Death be not proud” is end stopped with a semicolon, but as Ashford explains, the proper means of punctuation, as based on the earliest manuscripts of Donne, is simply a comma. “And death shall be no more, comma, Death thou shalt die.”

Ashford imparts to Bearing that so much can depend on a comma. The professor tells her student that “Nothing but a breath—a comma—separates life from everlasting…With the original punctuation restored, death is no longer something to act out on a stage, with exclamation points…Not insuperable barriers, not semicolons, just a comma.” Ashford declares that “This way, the uncompromising way, one learns something from this poem, wouldn’t you say?” Such is the mark of significance, an understanding that punctuation is as intimate as breath, as exalted as thought, and as powerful as the union between them—infinite, eternal, divine.

Image credit: Wikimedia Commons/Sam Town.