Marks of Significance: On Punctuation’s Occult Power

“Prosody, and orthography, are not parts of grammar, but diffused like the blood and spirits throughout the whole.” —Ben Jonson, English Grammar (1617)

Erasmus, author of The Praise of Folly and the most erudite, learned, and scholarly humanist of the Renaissance, was enraptured by the experience, by the memory, by the very idea of Venice. For 10 months from 1507 to 1508, Erasmus would be housed in a room of the Aldine Press, not far from the piazzas of St. Mark’s Square with their red tiles burnt copper by the Adriatic sun, the glory and the stench of the Grand Canal wafting into the cell where the scholar would expand his collection of 3,260 proverbs entitled Thousands of Adages, his first major work. For Venice was home to a “library which has no other limits than the world itself”; a watery metropolis and an empire of dreams that was “building up a sacred and immortal thing.”

Erasmus composed to the astringent smell of black ink rendered from the resin of gall nuts, the rhythmic click-click-click of movable lead-tin type being set, and the whoosh of paper feeding through the press. From that workshop would come more than 100 titles in Greek and Latin, all published with the indomitable Aldus Manutius’s watermark, an image filched from an ancient Roman coin depicting a strangely skinny Mediterranean dolphin inching down an anchor. Reflecting on that watermark (which has since been filched again, by the modern publisher Doubleday), Erasmus wrote that it symbolized “all kinds of books in both languages, recognized, owned and praised by all to whom liberal studies are holy.” Adept in humanistic philology, Erasmus made an entire career by understanding the importance of a paragraph, a phrase, a word. Of a single mark. As did his publisher.

Erasmus’s printer was visionary. The Aldine Press was the first in Europe to produce books made not by folding the sheets of paper in a bound book once (as in a folio), or twice to yield four leaves (as in a quarto), but three times to yield eight leaves, producing volumes that could be as small as four to six inches, the so-called octavo. Such volumes could be put in a pocket, constituting the forerunner of the paperback, which Manutius advertised as “portable small books.” Now volumes no longer had to be cumbersome tomes chained to the desk of a library; they could be squirreled away in a satchel, the classics made democratic. When laying the type for a 1501 edition of Virgil in the new octavo form, Manutius charged a Bolognese punchcutter named Francesco Griffo with designing a font that appeared calligraphic. Modeling it on the poet Petrarch’s handwriting, Griffo invented a slanted typeface that printers quickly learned could denote emphasis, which came to be named after the place of its invention: italic.

However, it was an invention seven years earlier that restructured not just how language appears, but indeed the very rhythm of sentences; for, in 1494, Manutius introduced a novel bit of punctuation, a jaunty little man with leg splayed to the left as if he were pausing to hold open a door for the reader before they entered the next room, the odd mark at the caesura of this byzantine sentence that is known to posterity as the semicolon. Punctuation exists not in the wild; it is not a function of how we hear the word, but rather of how we write the Word. These are what the theorist Walter Ong described in his classic Orality and Literacy as marks that are “even farther from the oral world than letters of the alphabet are: though part of a text they are unpronounceable, nonphonemic.” None of our notations are implied by mere speech; they are creatures of the page: comma, and semicolon; (as well as parenthesis and what Ben Jonson appropriately referred to as an “admiration,” but what we call an exclamation mark!)—the pregnant pause of a dash and the grim finality of a period. Has anything been left out? Oh, the ellipsis…

No doubt the prescriptivist critic of my flights of grammatical fancy in the previous few sentences would note my unorthodox usage, but I do so only to emphasize how contingent and mercurial our system of marking written language was until around four or five centuries ago. Manutius may have been the greatest of European printers, but from Johannes Gutenberg to William Caxton, the era’s publishers oversaw the transition from manuscript to print with an equivalent metamorphosis of language from oral to written, from the ear to the eye. Paleographer Malcolm Parkes writes in his invaluable Pause and Effect: An Introduction to the History of Punctuation in the West that such a system is a “phenomenon of written language, and its history is bound up with that of the written medium.” Since the invention of script, there has been a war of attrition between the spoken and the written; battle lines drawn between rhetoricians and grammarians, between sound and meaning. Such a distinction is explained by linguist David Crystal in Making a Point: The Persnickety Story of English Punctuation: “writing and speech are seen as distinct mediums of expression, with different communicative aims and using different processes of composition.”

Obviously, the process of making this distinction has been going on for quite a long time. It began the moment the first wedge-tipped stylus pressed into wet Mesopotamian clay, and continued through ancient Greek diacritical and Hebrew pointing systems, up through when medieval scribes first began to separate words from endless scriptio continua, whichbroachednogapsbetweenwordsuntiltheendofthemiddleages. Reading, you see, was normally accomplished out loud, and the written word was less a thing-unto-itself and more a representation of a particular event—that is the event of speaking. When this is the guiding metaphysic of writing, punctuation serves a simple purpose—to indicate how something is to be read aloud. Like the luftpause of musical notation, the nascent end stops and commas of antiquity didn’t exist to clarify syntactical meaning, but only to let you know when to take a breath. Providing an overview of punctuation’s genealogy, Alberto Manguel writes in A History of Reading how by the seventh century, a “combination of points and dashes indicated a full stop, a raised or high point was equivalent to our comma,” an innovation of Irish monks who “began isolating not only parts of speech but also the grammatical constituents within a sentence, and introduced many of the punctuation marks we use today.”

No doubt many of you, uncertain on the technical rules of comma usage (as many of us are), were told in elementary school that a comma designates when a breath should be taken, only to discover by college that that axiom was incorrect. Certain difficulties, with, that, way of writing, a sentence—for what if the author is Christopher Walken or William Shatner? Enthusiast of the baroque that I am, I’m sympathetic to writers who use commas as Hungarians use paprika. I’ll adhere to the claim of David Steel, who in his 1785 Elements of Punctuation wrote that “punctuation is not, in my opinion, attainable by rules…but it may be procured by a kind of internal conviction.” Steven Roger Fischer correctly notes in his A History of Reading (distinct from the Manguel book of the same title) that “Today, punctuation is linked mainly to meaning, not to sound.” But as late as 1589 the rhetorician George Puttenham could, in his Art of English Poesie, as Crystal explains, define a comma as the “shortest pause,” a colon as “twice as much time,” and an end stop as a “full pause.” Our grade school teachers weren’t wrong in a historical sense, for that was the purpose of commas, colons, and semicolons: to indicate pauses of certain lengths when scripture was being read aloud. All of the written word would have been quietly murmured under the breath of monks in the buzz of a monastic scriptorium.

For grammarians, punctuation has long been claimed as a captured soldier in the war of attrition between sound and meaning, these weird little marks enlisted in the cause of language as a primarily written thing. Fischer explains that “universal, standardized punctuation, such as may be used throughout a text in consistent fashion, only became fashionable…after the introduction of printing.” Examine medieval manuscripts and you’ll find that the orthography, that is the spelling and punctuation (insomuch as it exists), is completely variable from author to author—in keeping with an understanding that writing exists mainly as a means to perform speaking. By the Renaissance, print necessitated a degree of standardization, though far from uniform. This can be attested to by the conspiratorially minded who are flummoxed by Shakespeare’s name being spelled several different ways while he was alive, or by the anarchistic rules of 18th-century punctuation, the veritable golden age of the comma and semicolon. When punctuation becomes not just an issue of telling a reader when to breathe, but a syntactical unit that conveys particular meanings, meanings that could be altered by the choice or placement of these funny little dots, then a degree of rigor becomes crucial. As Fischer writes, punctuation came to convey “almost exclusively meaning, not sound,” and so the system had to become fixed in some sense.

If I may offer an additional conjecture, it would seem to me that there was a fortuitous confluence of both the technology of printing and the emergence of certain intellectual movements within the Renaissance that may have contributed to the elevation of punctuation. Johanna Drucker writes in The Alphabetic Labyrinth: The Letters in History and Imagination how Renaissance thought was gestated by “strains of Hermetic, Neo-Pythagorean, Neo-Platonic and kabbalistic traditions blended in their own peculiar hybrids of thought.” Figures like the 15th-century Florentine philosophers Marsilio Ficino and Giovanni Pico della Mirandola reintroduced Plato into an intellectual environment that had sustained itself on Aristotle for centuries. Aristotle rejected the otherworldliness of his teacher Plato, preferring rather to muck about in the material world of appearances, and when medieval Christendom embraced Aristotle, it adopted his empirical perspective. Arguably the transcendent nature of words is less important in such a context; what does the placement of a semicolon matter if it’s not conveying something of the eternal realm of the Forms? But the Florentine Platonists like Ficino were concerned with such things, for as he writes in Five Questions Concerning the Mind (printed in 1495—one year after the first semicolon), the “rational soul…possesses the excellence of infinity and eternity…[for we] characteristically incline toward the infinite.” In Renaissance Platonism, the correct ordering of words, and their corralling with punctuation, is a reflection not of speech, but of something larger, greater, higher. Something infinite and eternal; something transcendent. And so, we have the emergence of a dogma of correct punctuation, of standardized spelling—of a certain “orthographic Platonism.”

Drucker explains that Renaissance scholars long searched “for a set of visual signs which would serve to embody the system of human knowledge (conceived of as the apprehension of a divine order).” In its most exotic form this involved the construction of divine languages, the parsing of Kabbalistic symbols, and the embrace of alchemical reasoning. I’d argue in a more prosaic manner that such orthographic Platonism is the wellspring of all prescriptivist approaches to language, where the manipulation of the odd symbols that we call letters and punctuation can lend themselves to the discovery of greater truths, an invention that allows us “to converse even with the absent,” as Parkes writes. In the workshops of the Renaissance, at the Aldine Press, immortal things were made of letters and eternity existed between them, with punctuation acting as the guideposts to a type of paradise. And so it can remain for us.

Linguistic prescriptivists will bemoan the loss of certain standards, of how text speak signals an irreducible entropy of communication, or how the abandonment of arbitrary grammatical rules is as if it were a sign from Revelation. Yet such reactionaries are not the true guardians of orthographic Platonism, for we must take wisdom where we find it, in the appearance, texture, and flavor of punctuation. Rules may be arbitrary, but the choice of particular punctuation—be it the pregnant pause of the dash or the rapturous shouting of the exclamation mark—matters. Literary agent Noah Lukeman writes in A Dash of Style: The Art and Mastery of Punctuation that punctuation is normally understood as simply “a convenience, a way of facilitating what you want to say.” Such a limited view, implicit whether one treats punctuation as an issue of sound or as one of meaning, ignores the occult power of the question mark, the theurgy in a comma. The orthographic Platonists at the Aldine Press understood that so much depended on a semicolon; that it signified more than a longer-than-average pause or the demarcation of an independent clause. Lukeman argues that punctuation is rarely “pondered as a medium for artistic expression, as a means of impacting content,” yet in the most “profound way…it achieves symbiosis with the narration, style, viewpoint, and even the plot itself.”

Keith Houston in Shady Characters: The Secret Life of Punctuation, Symbols, and Other Typographical Marks claims that “Every character we type or write is a link to the past;” every period takes us back to the dots that Irish monks used to signal the end of a line; every semicolon back to Manutius’s Venetian workshop. Punctuation, like the letters it serves, has a deep genealogy; its use places us in a chain of connotation and influence that goes back centuries. More than that, each individual mark has a unique terroir; each does things that give the sentence a waft, a wisdom, a rhythm particular to it. Considering the periods of Ernest Hemingway, the semicolons of Edgar Allan Poe and Herman Melville, and Emily Dickinson’s sublime dash, Lukeman writes that “Sentences crash and fall like the waves of the sea, and work unconsciously on the reader. Punctuation is the music of language.”

To get overly hung up on punctuation as either an issue of putting marks in the right place, or letting the reader know when they can gulp some air, is to miss the point—a comma is a poem unto itself, an exclamation point is an epic! Cecelia Watson writes in her new book, Semicolon: The Past, Present, and Future of a Misunderstood Mark, that Manutius’s invention “is a place where our anxieties and our aspirations about language, class, and education are concentrated.” And she is, of course, correct, as evidenced by all of those partisans of aesthetic minimalism from Kurt Vonnegut to Cormac McCarthy who’ve impugned the Aldine mark’s honor. But what a semicolon can do that other marks can’t! How it can connect two complete ideas into a whole; a semicolon is capable of unifications that a comma is too weak to do alone. As Adam O’Fallon Price writes in The Millions, “semicolons…increase the range of tone and inflection at a writer’s disposal.” Or take the exclamation mark, a symbol that I’ve used roughly four times in my published writing, but which I deploy no less than 15 times per average email. A maligned mark due to its emotive enthusiasms, Nick Ripatrazone observes in The Millions that “exclamation marks call attention toward themselves in poems: they stand straight up.” Punctuation, in its own way, is conscious; it’s an algorithm, as much thought itself as a schematic showing the process of thought.

To take two poetic examples, what would Walt Whitman be without his exclamation mark; what would Dickinson be without her dash? They didn’t simply use punctuation for the pause of breath nor to logically differentiate things with some grammatical-mathematical precision. Rather they did do those things, but also so much more, for the union of exhalation and thought gestures to that higher realm the Renaissance originators of punctuation imagined. What would Whitman’s “Pioneers! O pioneers!” from the 1865 Leaves of Grass be without the exclamation point? What argument could be made if that ecstatic mark were abandoned? What of the solemn mysteries in the portal that is Dickinson’s dash when she writes that “’Hope’ is the thing with feathers –”? Orthographic Platonism instills in us a wisdom behind the arguments of rhetoricians and grammarians; it reminds us that more than simple notation, each mark of punctuation is a personality, a character, a divinity in itself.

My favorite illustration of that principle is in dramatist Margaret Edson’s sublime play W;t, the only theatrical work that I can think of that has New Critical close reading as one of its plot points. In painful detail, W;t depicts the final months of Dr. Vivian Bearing, a professor of 17th-century poetry at an unnamed, elite, eastern university, after she has been diagnosed with Stage IV cancer. While undergoing chemotherapy, Bearing often reminisces on her life of scholarship, frequently returning to memories of her beloved dissertation adviser, E.M. Ashford. In one flashback, Bearing remembers being castigated by Ashford for sloppy work she had done, providing an interpretation of John Donne’s Holy Sonnet VI based on an incorrectly punctuated edition of the cycle. Ashford asks her student, “Do you think the punctuation of the last line of this sonnet is merely an insignificant detail?” In the version used by Bearing, Donne’s immortal line “Death be not proud” is end stopped with a semicolon, but as Ashford explains, the proper means of punctuation, as based on the earliest manuscripts of Donne, is simply a comma. “And death shall be no more, comma, Death thou shalt die.”

Ashford imparts to Bearing that so much can depend on a comma. The professor tells her student that “Nothing but a breath—a comma—separates life from everlasting…With the original punctuation restored, death is no longer something to act out on a stage, with exclamation points…Not insuperable barriers, not semicolons, just a comma.” Ashford declares that “This way, the uncompromising way, one learns something from this poem, wouldn’t you say?” Such is the mark of significance, an understanding that punctuation is as intimate as breath, as exalted as thought, and as powerful as the union between them—infinite, eternal, divine.

Image credit: Wikimedia Commons/Sam Town.

Another Person’s Words: Poetry Is Always the Speaker

Blessedly, we are speakers of languages not of our own invention, and as such none of us are cursed in only a private tongue. Words are our common property; it would be a brave iconoclast to write entirely in some Adamic dialect of her own invention, her dictionary locked away (though from the Voynich Manuscript to Luigi Serafini’s Codex Seraphinianus, some have tried). Almost every word you or I speak was first uttered by somebody else—the key is entirely in the rearrangement. Sublime to remember that every possible poem, every potential play, every single novel that could ever be written is hidden within the Oxford English Dictionary. The answer to every single question too, for that matter. The French philosophers Antoine Arnauld and Claude Lancelot enthuse in their 1660 Port-Royal Grammar that language is a “marvelous invention of composing out of 25 or 30 sounds that infinite variety of expressions which, whilst having in themselves no likeness to what is in our mind, allow us to… [make known] all the various stirrings of our soul.” Dictionaries are oracles. It’s simply an issue of putting those words in the correct order. Language is often spoken of in terms of inheritance, where regardless of our own origins speakers of English are the descendants of Walt Whitman’s languid ecstasies, Emily Dickinson’s psalmic utterances, the stately plain style of the King James Bible, the witty innovations of William Shakespeare, and the earthy vulgarities of Geoffrey Chaucer; not to forget the creative infusions of foreign tongues, from Norman French and Latin, to Ibo, Algonquin, Yiddish, Spanish, and Punjabi, among others. Linguist John McWhorter puts it succinctly in Our Magnificent Bastard Tongue: The Untold History of English, writing that “We speak a miscegenated grammar.”

There is a glory to this, our words indicating people and places different from ourselves, our diction an echo of a potter in a Bronze Age East Anglian village, a canting rogue in London during the golden era of Jacobean Theater, or a Five Points Bowery Boy in antebellum New York. Nicholas Ostler, with an eye towards its diversity of influence, its spread, and its seeming omnipresence, writes in Empires of the Word: A Language History of the World that “English deserves a special position among world languages” as it is a “language with a remarkably varied history.” Such history perhaps gives the tongue a universal quality, making it a common inheritance of humanity. True of any language, but when you speak it would be a fallacy to assume that your phrases, your idioms, your sentences, especially your words are your own. They’ve passed down to you. Metaphors of inheritance can either be financial or genetic; the former has it that our lexicon is some treasure collectively willed to us, the latter posits that in the DNA of language, our nouns are adenine, verbs cytosine, adjectives guanine, and adverbs thymine. Either sense of inheritance has its uses as a metaphor, and yet they’re both lacking to me in some fundamental way—too crassly materialist, too eugenic. The proper metaphor isn’t inheritance, but consciousness. I hold that a language is like a living thing, or to be more specific, like a thinking thing. Maybe this isn’t a metaphor at all; perhaps we’re simply conduits for the thoughts of something bigger than ourselves, the contemplations of the language which we speak.

Philosopher George Steiner, forever underrated, writes in his immaculate After Babel: Aspects of Language and Translation that “Language is the highest and everywhere the foremost of those assents which we human beings can never articulate solely out of our own means.” We’re neurons in the mind of language, and our communications are individual synapses in that massive brain that’s spread across the Earth’s eight billion inhabitants, and back generations innumerable. When that mind becomes aware of itself, when language knows that it’s language, we call those particular thoughts poetry. Argentinean critic (and confidant of Jorge Luis Borges) Alberto Manguel writes in A Reader on Reading that poetry is “proof of our innate confidence in the meaningfulness of wordplay;” it is that which demonstrates the eerie significance of language itself. Poetry is when language consciously thinks.

More than rhyme and meter, or any other formal aspect, what defines poetry is its self-awareness. Poetry is the language which knows that it’s language, and that there is something strange about being such. Certainly, part of the purpose of all the rhetorical accoutrement which we associate with verse, from rhythm to rhyme scheme, is to make the artifice of language explicit. Guy Deutscher writes in The Unfolding of Language: An Evolutionary Tour of Mankind’s Greatest Invention that the “wheels of language run so smoothly” that we rarely bother to “stop and think about all the resourcefulness that must have gone into making it tick.” Language is pragmatic; most communication doesn’t need to self-reflect on, well, how weird the very idea of language is. How strange it is that we can construct entire realities from variations in the breath that comes out of our mouths, or the manipulation of ink stains on dead trees (or of liquid crystals on a screen). “Language conceals its art,” Deutscher writes, and he’s correct. When language decides to stop concealing, that’s when we call it poetry.

Verse accomplishes that unveiling in several different ways, chief among them the use of the rhetorical and prosodic tricks, from alliteration to terza rima, which we associate with poetry. One of the most elemental and beautiful aspects of language to which poetry draws attention is the axiom implied earlier in this essay: that the phrases and words we speak are never our own, and that truth is found not in invention but in rearrangement. In Problems of Dostoevsky’s Poetics, the Russian literary theorist Mikhail Bakhtin wrote that we receive “the word from another’s voice and filled with that other voice.” Our language is not our own, nor is our literature. We communicate in a tongue not of our own creation; we don’t have conversations, we are the conversation. Bakhtin reasons that our “own thought finds the world already inhabited.” Just as the organization of words into enjambed lines and those lines into stanzas demonstrates the beautiful unnaturalness of language, so too do allusion, bricolage, and what theorists call intertextuality make clear to us that we’re not individual speakers of words, but that words are speakers of us. Steiner writes in Grammars of Creation that “the poet says: ‘I never invent.’” This is true, the poet never invents, none of us do. We only rearrange—and that is beautiful.

True of all language, but few poetic forms are as honest about this as a forgotten Latin genre from late antiquity known as the cento. Rather than inheritance and consciousness, the originators of the cento preferred the metaphor of textiles. For them, all of poetry is like a massive patchwork garment, squares of fabric borrowed from disparate places and sewn together to create a new whole. Such a metaphor is an apt explanation of what exactly a cento is – a novel poem that is assembled entirely from rearranged lines written by other poets. Centos were written perhaps as early as the first century, but the fourth-century Roman poet Decimus Magnus Ausonius was the first to theorize about their significance and to give rules for their composition. In the prologue to Cento Nuptialis, where he composed a poem about marital consummation from fragments of Virgil derived from The Aeneid, Georgics, and Eclogues, Ausonius explained that he had, “but out of a variety of passages and different meanings,” created something new which is “like a puzzle.”

The editors of The Penguin Dictionary of Literary Terms and Literary Theory explain that while forgotten today, the cento was “common in later antiquity.” Anthologizer and poet David Lehman writes in The American Scholar that “Historically, the intent was often homage, but it could and can be lampoon,” with critic Edward Hirsch writing in A Poet’s Glossary that they “may have begun as school exercises.” Though it’s true that they were written for educational reasons, to honor or mock other poets, or as showy performance of lyrical erudition (the author exhibiting their intimacy with Homer and Virgil), none of these explanations does service to the cento’s significance. To return to my admittedly inchoate axioms of earlier, one function of poetry is to plunge “us into a network of textual relations,” as the theorist Graham Allen writes in Intertextuality. Language is not the province of any of us, but rather a common treasury; its other purpose is what Steiner describes as the “re-compositions of reality, of articulate dreams, which are known to us as myths, as poetry, as metaphysical conjecture.” That’s to say that the cento remixes poetry and recombines reality, so as to illuminate some fundamental truth hitherto hidden. Steiner claims that a “language contains within itself the boundless potential of discovery,” and the cento is a reminder that fertile are the recombinations of the poetry that has existed before, that literature is a rich, many-varied compost from which beautiful new specimens can grow towards the sun.

Among authors of centos, this is particularly true of the fourth-century Roman poet Faltonia Betitia Proba. Hirsch explains that one of the purposes of the cento, beyond the pedagogical or the parodic, was to “create Christian narratives out of pagan text,” as was the case with Proba’s Cento virgilianus, the first major Christian epic by a woman poet. Allen explains that “Works of literature, after all, are built from systems, codes and traditions established by previous works of literature;” what Proba’s cento did was a more literal expression of that fundamental fact. The classical past posed a difficulty for proud Roman Christians, for how were the faithful to grapple with the paganism of Plato, the Sibyls, and Virgil? One solution was typological, that is, the assumption that if Christianity was true, and yet pagan poets like Virgil still spoke the truth, then such truth must be encoded within his verse itself, part of the process of Interpretatio Christiana whereby pagan culture was reinterpreted along Christian lines.

Daughter of Rome that she was, Proba would not abandon Virgil, but Christian convert that she also was, it became her task to internalize that which she loved about her forerunner and to repurpose him, to place old wine into new skins. Steiner writes that an aspect of authorship is that the “poet’s consciousness strives to achieve perfect union with that of the predecessor,” and though those lyrics are “historically autonomous,” as reimagined by the younger poet they are “reborn from within.” This is perhaps true of how all influence works, but the cento literalizes that process in the clearest manner. And so Proba’s solution was to rearrange, remix, and recombine the poetry of Virgil so that Christianity could emerge, like a sculptor chipping away all of the excess marble in a slab to reveal the statue hidden within.

Inverting the traditional pagan invocation of the muse, Proba begins her epic (the proem being the only original portion) with both conversion narrative and poetic exhortation, writing that she is “baptized, like the blest, in the Castalian font – / I, who in my thirst have drunk libations of the Light – / now begin my song: be at my side, Lord, set my thoughts/straight, as I tell how Virgil sang the offices of Christ.” Thus, she imagines the prophetic Augustan poet of Roman Republicanism who died two decades before the Nazarene was born. Drawing from a tradition which claimed Virgil’s Eclogue predicted Christ’s birth, Proba rearranged 694 lines of the poet to retell stories from Genesis, Exodus, and the Gospels, the lack of Hebrew names in the Roman original forcing her to use general terms which appear in Virgil, like “son” and “mother,” when writing of Jesus and Mary. Proba’s paradoxically unoriginal originality (or is it original unoriginality?) made her popular in the fourth and fifth centuries, the Cento virgilianus taught to catechists alongside Augustine, and often surpassing Confessions and City of God in popularity. Yet criticism of Proba’s aesthetic quality from figures like Jerome and Pope Gelasius I ensured a millennium-long eclipse of her poem, forgotten until its rediscovery with the Renaissance.

Rearranging the pastoral Eclogues, Proba envisions Genesis in another poet’s Latin words, writing that there is a “tree in full view with fruitful branches;/divine law forbids you to level with fire or iron,/by holy religious scruple it is never allowed to be disturbed./And whoever steals the holy fruit from this tree,/will pay the penalty of death deservedly;/no argument has changed my mind.” Something uncanny about the way that such a familiar myth is reimagined in the arrangement of a different myth; the way in which Proba is a redactor of Virgil’s words, shaping them into (or pulling out of them) this other, different, unintended narrative. Scholars have derided her poem as juvenilia since Jerome (jealously) castigated her talent by calling her “old chatterbox,” but to be able to organize, shift, and shape another poet’s corpus into orthodox scripture is an unassailable accomplishment. Writers of the Renaissance certainly thought so, for a millennium after Proba’s disparagement, a new generation of humanists resuscitated her.

Cento virgilianus was possibly the first work by a woman to be printed, in 1474; a century before that, the father of Renaissance poetry, Petrarch, extolled her virtues in a letter to the wife of the Holy Roman Emperor, and she was one of the subjects of Giovanni Boccaccio’s 1374 On Famous Women, his 106-entry consideration of female genius from Eve to Joanna, the crusader Queen of Jerusalem and Sicily. Boccaccio explains that Proba collected lines of Virgil with such “great skill, aptly placing the entire lines, joining the fragments, observing the metrical rules, and preserving the dignity of the verses, that no one except an expert could detect the connections.” As a result of her genius, a reader might think that “Virgil had been a prophet as well as an apostle,” the cento suturing together the classic and the Hebraic, Athens and Jerusalem.

Ever the product of his time, Boccaccio could still only appreciate Proba’s accomplishment through the lens of his own misogyny, writing that the “distaff, the needle, and weaving would have been sufficient for her had she wanted to lead a sluggish life like the majority of women.” Boccaccio’s myopia prevented him from seeing that that was the precise nature of Proba’s genius – she was a weaver. The miniatures which illustrate a 15th-century edition of Boccaccio give truth to this, for despite the chauvinism of the text, Proba is depicted in gold-threaded red with white habit upon her head, a wand holding aloft a beautiful, blue expanding sphere studded with stars, a strangely scientifically accurate account of the universe as the poet sings the song of Genesis in the tongue of Virgil. Whatever anonymous artist saw fit to depict Proba as a mage understood her well; for that matter they understood creation well, for perhaps God can generate ex nihilo, but artists must always gather their material from fragments shored against their ruin.

In our own era of allusion, reference, quotation, pastiche, parody, and sampling, you’d think that the cento would have new practitioners and new readers. Something of the disk jockeys Danger Mouse, Fatboy Slim, and Girl Talk in the idea of remixing a tremendous number of independent lines into some synthesized newness; something oddly of the Beastie Boys’ Paul’s Boutique in the very form of the thing. But centos proper are actually fairly rare in contemporary verse, despite T.S. Eliot’s admission that “mature poets steal.” Perhaps with more charity, Allen argues that reading is a “process of moving between texts. Meaning becomes something which exists between a text and all the other texts to which it refers and relates.” But while theorists have an awareness of the ways in which allusion dominates the modernist and post-modernist sensibility—what theorists who use the word “text” too often call “intertextuality”—the cento remains as obscure as other abandoned poetic forms from the Anacreontic to the Zajal (look them up). Lehman argues that modern instances of the form are “based on the idea that in some sense all poems are collages made up of other people’s words; that the collage is a valid method of composition, and an eloquent one.”

Contemporary poets who’ve tried their hand include John Ashbery, who wove together Gerard Manley Hopkins, Lord Byron, and Eliot; as well as Peter Gizzi who in “Ode: Salute to the New York School” made a cento from poets like Frank O’Hara, James Schuyler, and Ashbery. Lehman has tried his own hand at the form, to great success. In commemoration of his Oxford Book of American Poetry, he wrote a cento for The New York Times that’s a fabulous chimera whose anatomy is drawn from a diversity that is indicative of the sweep and complexity of four centuries of verse, including among others Robert Frost, Hart Crane, W.H. Auden, Gertrude Stein, Elizabeth Bishop, Edward Taylor, Jean Toomer, Anne Bradstreet, Henry Wadsworth Longfellow, Edna St. Vincent Millay, Wallace Stevens, Robert Pinsky, Marianne Moore, and this being a stolidly American poem, our grandparents Walt Whitman and Emily Dickinson.

Lehman contributed an ingenious cento sonnet in The New Yorker assembled from various Romantic and modernist poets, his final stanza reading “And whom I love, I love indeed,/And all I loved, I loved alone,/Ignorant and wanton as the dawn,” the lines so beautifully and seamlessly flowing into one another that you’d never notice that they’re respectively from Samuel Taylor Coleridge, Edgar Allan Poe, and William Butler Yeats. Equally moving was a cento written by editors at the Academy of American Poets, which in its entirety reads:

In the Kingdom of the Past, the Brown-Eyed Man is King
Brute. Spy. I trusted you. Now you reel & brawl.
After great pain, a formal feeling comes–
A vulturous boredom pinned me in this tree
Day after day, I become of less use to myself,
The hours after you are gone are so leaden.
Take this rather remarkable little poem on its own terms. Its ambiguity is striking, and the lyric is all the more powerful for it. To whom is the narrator speaking, who has been trusted and apparently violated that loyalty? Note how the implied narrative of the poem breaks after the dash that end-stops the third line. In the first part of the poem, we have declarations of betrayal; somebody is impugned as “Brute. Spy.” But from that betrayal, that “great pain,” there is some sort of transformation of feeling; neither acceptance nor forgiveness, but almost a tacit defeat, the “vulturous boredom.” The narrator psychologically, possibly physically, withers. They become less present to themselves, “of less use to myself.” And yet there is something to be said for the complexity of emotions we often have towards people, for though it seems that this poem expresses the heartbreak of betrayal, the absence of its subject is still that which affects the narrator so that the “hours…are so leaden.” Does the meaning of the poem change when I tell you that it was stitched together by those editors, drawn from a diversity of different poets in different countries living at different times? That the “true” authors are Charles Wright, Marie Ponsot, Dickinson, Sylvia Plath, and Samuel Beckett?

Steiner claims that “There is, stricto sensu, no finished poem. The poem made available to us contains preliminary versions of itself. Drafts [and] cancelled versions.” I’d go further than Steiner even, and state that there is no individual poet. Just as all drafts are ultimately abandoned rather than completed, so is the task of completion ever deferred to subsequent generations. All poems, all writing, and all language for that matter, are related to something else written or said by someone else at some point in time, a great chain of being going back to beginnings immemorial. We are, in the most profound sense, always finishing each other’s sentences. Far from something to despair at, this truth is something that binds us together in an untearable patchwork garment, where our lines and words have been loaned from somewhere else, given with the solemn promise that we must pay it forward. We’re all just lines in a conversation that began long ago, and thankfully shall never end. If you listen carefully, even if it requires a bit of decipherment or decoding, you’ll discover the answer to any query you may have. Since all literature is a conversation, all poems are centos. And all poems are prophecies whose meanings have yet to be interpreted.

Image credit: Unsplash/Alexandra.

Ten Ways to Change Your God

“Well, it may be the devil or it may be the Lord / But you’re gonna have to serve somebody.” —Bob Dylan (1979)

1. Walking Cambridge’s Trinity Lane in 1894, Bertrand Russell had an epiphany concerning the ontological proof for God’s existence, becoming the unlikely convert effected by logical argumentation. In Russell’s essay “Why I Became a Philosopher,” included in Amelie Oksenberg Rorty’s anthology The Many Faces of Philosophy: Reflections from Plato to Arendt, the logician explains how his ruminations turned to fervor, writing that “I had gone out to buy a tin of tobacco; on my way back, I suddenly threw it up in the air, and exclaimed as I caught it: ‘Great Scott, the ontological argument is sound.’” An atheist had a brief conversion—of a sort.

Not exactly Saul being confronted with the light that (quoting Euripides’s The Bacchae) told him “It is hard for thee to kick against the pricks,” or Augustine in his Confessions recounting that after a ghostly young voice told him to “Take up and read!”, he turned to Paul’s epistles. Russell’s conversion was a bit more abstract—of the head rather than the heart. In his flat cap, tweed jacket, and herring-bone bowtie, he was converted not by the Holy Spirit, but by a deductive syllogism. Envision the co-author of Principia Mathematica, which sought to reduce all of mathematics to logic, suddenly being moved by the spirit.

Derived by the medieval monk Anselm of Canterbury in his 1078 Proslogion, the ontological argument holds that since existence must be a property of perfection, and God is a priori defined as a perfect being, then quod erat demonstrandum: God must exist. Russell explains this metaphysical trick in his Nobel Prize-winning History of Western Philosophy: a “Being who possesses all other perfections is better if He exists than if He does not, from which it follows that if he does not He is not the best possible Being.”

From Aquinas to Rene Descartes, there is a venerable history of attempting to prove the existence of an omniscient, omnipotent, omnipresent deity, though as Nathan Schneider writes in God in Proof: The Story of a Search from the Ancients to the Internet, these arguments are “taught, argued about, and forgotten, sometimes saving a person’s particular faith, sometimes eroding it, and usually neither.” In defense of Anselm, nobody in the 11th century doubted God’s existence, and such proofs weren’t designed to convince, but rather to glory in divinity. As a subsequent defense, his proof has endured in a manner that other proofs haven’t. Cosmology and evolution have overturned most others, making them seem primitive to the point of adorableness, but Anselm endures.

Still, the syllogism can’t help but seem like a bit of a magic trick, defining God into existence rather than establishing even what type of God we’re to believe in. Critics of Anselm maintain that existence isn’t a property in the same way that other qualities are. We can imagine all sorts of characters with all sorts of qualities, but that doesn’t mean that they have to exist. Defenders of Anselm would claim that God isn’t like any other character, since a perfect thing that doesn’t exist can’t be said to be a perfect thing, and God is a perfect thing. Critics of that would say that it’s possible to conceive of a perfect city, but that doesn’t mean you can buy an Amtrak ticket there, nor would a benevolent God allow Penn Station to look as it does. As the puzzle-writer (and theist) Martin Gardner notes in his delightful The Whys of a Philosophical Scrivener, “I agree with the vast majority of thinkers who see the proof as no more than linguistic sleight-of-hand.”

Eventually Russell’s new faith diffused like incense from a swinging thurible. If philosophy got Russell into this mess, then it also got him out. Russell explains that Immanuel Kant in his Critique of Pure Reason would “demolish all the purely intellectual proofs of the existence of God.” But what faith had Russell gained on Trinity Lane? It wasn’t a belief in the God for whom that street was named, nor was it the Lord of Abraham, Isaac, and Jacob. What Russell’s faith was in, had always been in, and would always be in, was the power of reason, and in that he was unwavering.

David Hume, another of Russell’s antecedents, wrote in his 1739 A Treatise of Human Nature that “Reason is, and ought only to be the slave of the passions.” We’re going to believe what we’re going to (dis)believe, and we’ll concoct the reasons for it later. For his part, late in life, Russell was asked how he’d respond if upon death he was brought before God’s throne and asked why he had dared not to believe. Russell said that he’d answer “Not enough evidence!”

2. According to mercurial family lore, when my paternal grandmother’s grandfather, August Hansmann, boarded a New York-bound steamship two years after the American Civil War and one year after his native Hanover was subsumed into Prussia, he brought along with him a copy of the Dutch philosopher Baruch Spinoza’s Tractatus Theologico-Politicus, denounced when it was printed in 1670 as “a book forged in hell…by the devil himself.” Like Spinoza, Hansmann was a Jew who lived among gentiles, and like Spinoza, he understood that being Other in a narrative not written by yourself had tragic consequences.

Born illegitimate, Hansmann was raised Jewish even though his father was Christian; a man who understood how being two things sometimes meant that you were seen as nothing, he also knew the strange freedom of how dictated faith is no faith at all. Similarly, Spinoza was a Sephardic Jew of converso background whose Portuguese ancestors practiced their Judaism in secret until Dutch freedom allowed them to reinvent their hidden faiths. Hansmann encountered Spinoza’s celebration of religious liberty, “where everyone’s judgement is free and unshackled, where each may worship God as his conscience dictates, and where freedom is esteemed before all things.” For the pious lens grinder, content to work by the tulip-lined canals of red-brick Amsterdam, religious truth can only be discovered without shackles, divinity only visible if you’re not compelled by Church or State.

When the Jews of Spain and Portugal were forced to convert to Catholicism, many secretly practiced the mitzvoth, venerating the Sabbath, abjuring treyf, and kissing mezuzahs surreptitiously concealed within the ceramic blue slipper of the Virgin. As scholar Karen Armstrong notes in The Battle for God, these were people who “had been forced to assimilate to a…culture that did not resonate with their inner selves.” When finally able to practice their religion in Holland, many of them then discovered that the Judaism of the rabbis was not the same Judaism that they’d imagined, and so they chose to be something else, something completely new—neither Jewish nor Christian, but rather nothing. Armstrong writes that such persecution ironically led to the “first declarations of secularism and atheism in Europe.”

Many of those slurred as swinish Marranos found it more honest to live by the dictates of their own reason. Spinoza was the most famous, condemned by his synagogue for writing things like “I say that all things are in God and move in God,” holding that nature is equivalent with the Lord, so that either nothing is God or everything is. Such pantheism is what made some condemn Spinoza as an atheist, and others such as Russell later describe him as a “God-intoxicated man” who saw holiness in every fallen leaf and gurgling creek, his very name, whether “Baruch” or “Benedict” meaning “blessed.”

Rebecca Newberger Goldstein, in Betraying Spinoza: The Renegade Jew Who Gave Us Modernity, asks if he can “be considered…a Jewish thinker?” She argues that his universalism derives from the Mosaic covenant, the monotheism of the Shema extended so that God is Everything. As a result, he is the progenitor of a certain type of rational, secular, progressive, liberal, humane contemporaneity. On that steamer crossing the Atlantic, Hansmann may have read that “freedom [can] be granted without prejudice…but also that without such freedom, piety cannot flourish.” My great-great grandfather lived his life as a Jew, but the attraction he saw in Spinoza was that each individual could decide for themselves whether to be Jew, Catholic, Protestant, or nothing.

Hansmann worked as a peddler on the Lower East Side, until the Homestead Act enticed him to Iowa, where he married a Huguenot woman who bore him 10 children, while he worked as a trader among the Native Americans. He refused to raise his children in any religion—Jewish or Protestant—preferring rather that they should decide upon reaching adulthood. And so, a union was made between the Jewish and the Low Church Protestant, rejecting both baptism and bris, so that my grandmother born on the frontier had absolutely no religion at all.

That such things are even possible—to be of no religion—is due in no small part to Spinoza’s sacrifice, his congregation having excommunicated him by extinguishing each individual light in the synagogue until the assembled dwelled in darkness. From that expulsion, Spinoza was expected to find refuge among the Protestants—but he didn’t. I’ve a photo from the early years of the 20th century: August Hansmann surrounded by his secular, stolid, midwestern progeny, himself sitting in the center with a thick black beard, and a kippah barely visible upon his head.

3. A long line of Spinoza’s ancestors, and my great-great-grandfather’s ancestors, would have concluded Pesach evenings with a “Next year in Jerusalem,” praying for the reestablishment of the Temple destroyed by the Romans in the first century. Less known than the equally exuberant and plaintive Passover declaration is that, for a brief period in the fourth century, it seemed that the Temple might actually be restored, ironically by Rome’s last pagan emperor. Born in Constantinople only six years after the Council of Nicaea convened to define what exactly a Christian was, Julian the Apostate would mount a failed revolution.

His uncle was Rome’s first Christian emperor who conquered by the cross and who turned his Rome over to Christ. Julian was of a different perspective, seeing in the resurrection of Apollo and Dionysus, Jupiter and Athena, the rejuvenation of Rome. He bided his time until military success foisted him onto the throne, and then Julian revealed himself as an initiate into those Eleusinian Mysteries, a celebrant of Persephone and Demeter who greeted the morning sun and prayed for the bounty of the earth, quoted in W. Heinemann’s The Works of the Emperor Julian as having written “I feel awe of the gods, I love, I revere, I venerate them.”

In Julian’s panegyrics, one can smell the burning thyme and sage, feel the hot wax from votive candles, spy the blue moonlight filtered through pine trees in a midnight cedar grove. If Plutarch recorded that the very heavens had once declared “the great god Pan is dead,” then Julian prayed for his return; if the oracles at Delphi and the Sibyllines had been silenced by the Nazarene, then the emperor wanted the divinations of those prophets to operate once again. Julian wanted this paganism to be a new faith, an organized, unified, consolidated religion that bore as much similarity to the cohesion of the Christian Church as it did to the rag-tag collection of rituals and superstitions that had defined previous Roman beliefs.

Classicist Robin Lane Fox makes clear in Pagans and Christians that this wasn’t simple nostalgia. Fox explains that those who returned to paganism normally did so with “an accompanying philosophy” and that apostasy “always lead to a favor for some systematic belief.” The emperor’s conversion was a turning back combined with the reformer’s desire for regeneration. In paganism, Julian approached origin, genesis, birth—less conversion than a return to what you should have been, but was denied.

Julian the Apostate endures as a cipher—duplicitous reactionary who’d see Christian Rome turn back, or tolerant visionary who theologically elevated paganism? Christian thinkers had long commandeered classical philosophy; now pagan thinkers were able to apply the same analytical standards to their own beliefs, developing theology as sophisticated as that of Christianity. The American rake and raconteur Gore Vidal repurposed the emperor as a queer hero of liberalism in his unusual 1964 novel Julian, having his protagonist humanely exclaim that “Truth is wherever man has glimpsed divinity.” Where some had seen those intimations in Golgotha’s sacrifice, the Apostate saw them in the oracles of Chaldea or the groves of Athena.

Far from banning the new faith, Julian declared that “By the gods I do not want the Galileans to be killed or beaten unjustly nor to suffer any other ill.” Julian was rather interested in monopolistic trust-busting, and in part that included funding the rebuilding of the Jewish Temple that had been destroyed by the emperor’s predecessors. The building of a Third Temple would be terminated when, as a Roman witness to the construction attempts wrote, “fearful balls of fire [broke]…out near the foundations…till the workmen, after repeated scorchings, could approach no more.” The Christians attributed the disaster to God; the Jews and Romans to the Christians.

The desire for a pagan Rome would similarly end with Julian’s defeat on the battlefields of Persia, an emperor who longed to see old gods born again now forced to declare that “You have won, Galilean.” Hard to reverse an eclipse, and so, we supplicate on another mournful and deferred day—“Next year at Delphi.”

4. The titular character in Julian claims that “academics everywhere are forever attacking one another.” During the fourth century, the academic debates were theological, all of those schisms and heresies, excommunications and counter-excommunications between exotic groups with names like the Monophysites and the Chalcedonians, the Arians and the Trinitarians. By the middle of Vidal’s 20th century, such disputations were just as rancorous, but theology was now subsumed into politics. Vidal’s own politics were strange, broadly left but with a sympathy afforded to the anti-establishmentarians of any ideological persuasion.

Vidal is most celebrated for calling the conservative founder of the National Review, William F. Buckley, a “crypto-Nazi” during a debate on ABC News scheduled to coincide with the 1968 Democratic convention; even the pyrotechnic rainbow of early television was unable to conceal the pure hatred between those two prep school grads. If the earliest years of Christianity saw bishops and monks moving between ever more nuanced theological positions, then the 20th century was an era of political conversion, liberals becoming conservatives and conservatives becoming liberals, with Buckley’s magazine a fascinating case study in political apostasy.

Buckley’s politics were cradle-to-grave Republican conservatism, even as he garnered a reputation for expelling acolytes of both Ayn Rand and John Birch from the movement as if he was a medieval bishop overseeing a synod (they’ve long since found a way back in). Entering public life with his 1951 God and Man at Yale: The Superstitions of “Academic Freedom,” Buckley understood better than most how ideology is theology by another name (even as I personally revile his politics). In this milieu, National Review was the stodgy, tweedy vanguard of the reactionary intelligentsia, defining a conservative as “someone who stands athwart history, yelling Stop, at a time when no one is inclined to do so.”

The problem with a manifesto that defines itself entirely by anti-progress is that such a doctrine can be rather nebulous, and so many of the bright young things Buckley hired for the National Review, such as Joan Didion and Garry Wills, found themselves moving to the left. Such were the subtleties of conversion that Wills could be both the author of Confessions of a Conservative and a journalist placed on Richard Nixon’s infamous “enemies list.”

As people become harder of hearing and their bone-density decreases, movement from the left to the right does seem the more predictable narrative. For every Garry Wills, there’s a Norman Podhoretz, an Irving Kristol, a David Horowitz, a Christopher Hitchens. Leave it to the armchair Freudians to ascertain what Oedipal complex made those men of the left move towards the Big Daddy of right-wing politics, but what’s interesting are the ways in which they refashioned conservatism in a specifically leftist manner. Their migration was not from milquetoast Democratic liberalism, for they’d indeed been far to the left, several of them self-described Trotskyites. And as the Aztecs who became Catholic kept secretly worshiping their old gods, or as basilicas were built atop temples to Mithras, so too did those doctrines of “permanent revolution” find themselves smuggled into neoconservatism.

If politics is but religion by another means, then it's the ideological conversion that strikes us as most scandalous. We've largely ceded the ground on the sacred—what could be less provocative than abandoning Presbyterianism for Methodism? But politics, that's the thing that keeps us fuming for holy war, and we're as titillated by stories of conversion as our ancestors were by tales of heresy and schism. Writer Daniel Oppenheimer observes, in Exit Right: The People Who Left the Left and Reshaped the American Century, that "belief is complicated, contingent, multi-determined. But do we really know it? Do we feel it?" Strange to think that Elizabeth Warren was once a Republican, and the man whom she will beat for the presidency was once a Democrat, but such are the vagaries of God and man, whether at Yale or anywhere else.

5. For all their differences, Buckley and Vidal could at least agree on the martini. Buckley would write in 1977 that a "dry martini even at night is a straightforward invitation for instant relief from the vicissitudes of a long day," and Vidal in his novel Kalki published a year later would rhapsodize about the "martini's first comforting haze." On the left or on the right, one thing WASPs agreed on (and though Buckley was technically Catholic he had the soul of an Episcopalian) was the cocktail hour. I've no idea if the two had been drinking before their infamous sparring on ABC, though the insults, homophobia, and violent threats make me suspicious.

Better that they'd followed the path of conversion taken by another prep school boy who moved in their social circles, John Cheever: after his brother checked him into New York's Smithers Alcoholic Rehabilitation Unit on April 9, 1975, he never took another drink. Cheever had lived up to the alcoholic reputation of two American tribes—High Church Protestants and Low Church writers. From the former he inherited both the genes and an affection for gin and scotch on a Westchester porch watching the trains from Grand Central thunder upstate, and from the latter he took the Dionysian myth that conflates the muse with ethanol, pining for inspiration but settling for vomiting in an Iowa City barroom.

Cheever was one of the finest short story writers of the 20th century, his prose as crystalline and perfect as a martini. Such was the company of those other addicts, of Ernest Hemingway and F. Scott Fitzgerald, William Faulkner and Thomas Wolfe. Cheever’s story “The Swimmer” is one of the most perfect distillations of how alcoholism will sneak up on a person, and he avoids the laudatory denials you see in a lesser writer like Charles Bukowski. With the repressed self-awareness that is the mocking curse of all true alcoholics, Cheever would write in his diary some two decades before he got sober that “When the beginnings of self-destruction enter the heart it seems no bigger than a grain of sand,” no doubt understanding how a single drink is too many since a dozen is never enough.

His daughter Susan Cheever, herself a recovering alcoholic, notes in Drinking in America: Our Secret History that “My father’s drinking had destroyed his body, but it had also distorted his character—his soul. The restoration of one man through the simple measure of not drinking was revelatory.” The ancients called them spirits for a reason, and in their rejection there is a conversion of a very literal sort. Cheever—along with his friend Raymond Carver—is the happy exception to the fallacy that finds romance in the gutter-death of literary genius, and he got sober by doing the hard work of Alcoholics Anonymous.

The central text of that organization was compiled by Bill W., the founder of AA; its title is technically Alcoholics Anonymous, but members informally call it “The Big Book.” Past the uninspired yellow-and-blue cover of that tome, Cheever would have read stories where he’d have “found so many areas where we overlapped—not all the deeds, but the feelings of remorse and hopelessness. I learned that alcoholism isn’t a sin, it’s a disease.” And yet the treatment of that disease was akin to a spiritual transformation.

It's a tired debate whether Alcoholics Anonymous is scripture or not, but I'd argue that anything that so fully transforms the countenance of a person can't but be a conversion, for as the Big Book says, "We talked of intolerance, while we were intolerant ourselves. We missed the reality and the beauty of the forest because we were diverted by the ugliness of some of its trees." I once was lost, and now I'm found, so on and so forth. When Cheever died, he had seven sober years—and they made all the difference.

6. Conversion narratives are the most human of tales, for the drama of redemption is an internal one, played out between the protagonist and his demons. Certain tropes recur—the pleasure, the perdition, the contrition, the repentance, the salvation. Augustine understood that we do bad things because bad things are fun—otherwise why would he write in Confessions "Lord, grant me chastity—but not yet"? What readers thrill to are the details, the rake's regress from dens of iniquity, from gambling, drinking, and whoring to some newfound piety.

For Cheever's Yankee ancestors, the New England Puritans in whose shadow we've uneasily dwelled for the past four centuries, "election" was not a matter of personal choice, but rather grace imparted to the unworthy human. Easy to see some issues of utility here, for when accumulation of wealth is read as evidence of God's grace, and it's also emphasized that the individual has no role in his own salvation, the inevitable result is spiritual disenchantment and marginalization. By the middle of the 18th century, some five generations after the first Pilgrim's slipper graced Plymouth Rock, the Congregationalist pastors of New England attempted to suture the doubts of their flocks, coming up with "half-way covenants" and jeremiads against backsliding so as to preserve God's bounty.

Into that increasingly secular society would come an English preacher with a thick Gloucester accent named George Whitefield, who first arrived in the New World in 1738. Technically an Anglican priest, Whitefield was a confidant of John Wesley, the father of Methodism, and from that "hot" faith the preacher would draw a new vocabulary, dispelling John Calvin's chill with the exhortation that sinners must be born again. Crowds of thousands were compelled to repent, for "Come poor, lost, undone sinner, come just as you are to Christ." On the Eastern seaboard, the Englishman would preach from Salem to Savannah, more than 10,000 times, drawing massive crowds, even impressing that old blasphemer Benjamin Franklin at one Philadelphia revival (the scientist even donated money).

Such was the rhetorical style of what's called the Great Awakening, when colonial Americans abandoned the staid sermons of the previous century in favor of this shaking, quaking, splitting, fitting preaching. Whitefield and Spinoza shared nothing in temperament, and yet one could imagine that the latter might smile at the liberty that "established fractious sectarianism as its essential character," as John Howard Smith writes in The First Great Awakening: Redefining Religion in British America, 1725-1775. Whitefield welcomed worshippers into a massive tent—conversion as a means towards dignity and agency.

So ecumenical was Whitefield's evangelization that enslaved people came in droves to his revivals, those in bondage welcomed as subjects in Christ's kingdom. Such was the esteem in which the reverend was held that upon his passing in 1770 a black poet from Boston named Phillis Wheatley would regard the "happy saint" as a man who "in strains of eloquence refin'd/[did] Inflame the heart, and captivate the mind." Whitefield's religious charity, it should be said, was limited. He bemoaned the mistreatment of the enslaved, while he simultaneously advocated for the economic benefits of that very institution.

As different as they were, Whitefield and Malcolm X were both children of this strange Zion that allows such reinvention. Malcolm X writes in a gospel of both American pragmatism and American power, saying that "I'm for truth, no matter who tells it. I'm for justice, no matter who it's for or against…I am for whoever and whatever benefits humanity as a whole." Conversion can be a means of seizing power; conversion can be a means of reinvention.

Activist Audre Lorde famously wrote that "The master's tools will never dismantle the master's house," and for a young Harlem ex-con born Malcolm Little, the Christianity of Wheatley and Whitefield would very much seem to be the domain of the plantation's manor, so that conversion to a slave religion is no conversion at all. Mocking the very pieties of the society that Whitefield preached in, Malcolm X would declare "We didn't land on Plymouth Rock—Plymouth Rock landed on us." Malcolm X's life was an ongoing narrative of conversion, of the desire to transform marginalization into power. As quoted by Alex Haley in The Autobiography of Malcolm X, the political leader said "I have no mercy or compassion in me for a society that will crush people, and then penalize them for not being able to stand up."

Transformation defined his rejection of Christianity, his membership in the Nation of Islam, and then finally his conversion to orthodox Sunni Islam. Such is true even in the rejection of his surname for the free-floating signifier of "X," identity transformed into a type of stark, almost algebraic, abstraction. If America is a land of conversion narratives, then The Autobiography of Malcolm X is ironically one of the most American. Though as Saladin Ambar reminds us in Malcolm X at Oxford Union, his "conversion was indeed religious, but it was also political," with all that implies.

7. It is a truth universally acknowledged, that an apostate in possession of a brilliant spiritual mind, must be in want of a religion. If none of the religions that already exist will do, then it becomes her prerogative to invent a better one and convert to that. Critic Harold Bloom writes in The American Religion that "the religious imagination, and the American Religion, in its fullest formulations, is judged to be an imaginative triumph." America has always been the land of religious invention, for when consciences are not compelled, the result is a brilliant multitude of schisms, sects, denominations, cults, and communes. In his Essays, the French Renaissance genius Michel de Montaigne quipped that "Man is certainly stark mad; he cannot make a worm, and yet he makes gods by the dozens." Who, however, if given the choice between a worm or a god, would ever possibly pick the former? For America is a gene-splicing laboratory of mythology, an in vitro fertilization clinic of faith, and we birth gods by the scores.

Consider Noble Drew Ali, born Timothy Drew in 1886 to former North Carolina slaves who lived amongst the Cherokee. Ali compiled into the Holy Koran of the Moorish Science Temple of America a series of ruminations, meditations, and revelations he had concerning what he called the “Moorish” origins of African-Americans. Drawing freely from Islam, Christianity, Buddhism, Hinduism, and the free-floating occultism popular in 19th-century America, Ali became one of the first founders of an Afrocentric faith in the United States, his movement the original spiritual home to Wallace Fard Muhammad, founder of the Nation of Islam. Ali writes that the “fallen sons and daughters of the Asiatic Nation of North America need to learn to love instead of hate; and to know of their higher self and lower self. This is the uniting of the Holy Koran of Mecca for teaching and instructing all Moorish Americans.”

Ali drew heavily from mystical traditions, combining his own idiosyncratic interpretations of Islam alongside Freemasonry and Rosicrucianism. Such theurgy was popular in the 19th century, a melancholic era when the hundreds of thousands dead on battlefields like Antietam and Gettysburg called out to the living, who responded with séance and Ouija board. Historian Drew Gilpin Faust recounts in This Republic of Suffering: Death and the American Civil War that "Many bereaved Americans…unwilling to wait until their own deaths reunited them with lost kin…turned eagerly to the more immediate promises of spiritualism." The 19th century saw mass conversions to a type of magic, a pseudo-empirical faith whose sacraments were technological—the photographing of ghostly ectoplasm, or the receipt of telegrams from beyond the veil of perception.

Spiritualism wasn't merely a general term for this phenomenon, but the name of an actual organized denomination (one that still exists). Drawing from 18th-century occultists like Emanuel Swedenborg and Franz Mesmer, the first Spiritualists emerged out of the rich soil of upstate New York, the "Burned Over District" of the Second Great Awakening (sequel to Whitefield's First). Such beliefs held that the dead were still among us, closer than our very breath, and that spirits could interact with the inert matter of our world, souls intermingled with the very atoms of our being.

Peter Manseau writes in The Apparitionists: A Tale of Phantoms, Fraud, Photography, and the Man Who Captured Lincoln’s Ghost, that “It was a time when rapidly increasing scientific knowledge was regarded not as the enemy of supernatural obsessions, but an encouragement…Electricity had given credence to notions of invisible energies…The telegraph had made communication possible over staggering distances, which raised hopes of receiving messages from the great beyond.”

Among the important founders of the movement were the Fox Sisters of Hydesville, N.Y., three siblings who in 1848 claimed that they'd been contacted by spirits, including one named "Mr. Splitfoot," who communicated in raps, knocks, and clicks. Decades later, Margaret Fox would admit that it was a hoax, since a "great many people when they hear the rapping imagine at once that the spirits are touching them. It is a very common delusion." Despite the seeming credulity of the movement's adherents, Spiritualists were crucial reformers, with leaders like Cora L.V. Scott and Paschal Beverly Randolph embracing abolitionism, temperance, civil rights, suffragism, and labor rights. When the cause is good, perhaps it doesn't matter which god's vestments you wear.

And of course the great American convert to a religion of his own devising is Joseph Smith. America's dizzying diversity of faith confused young Smith, who asked "Who of all these parties are right, and how shall I know?" From the same upstate environs as the Fox Sisters, Smith was weaned on a stew of evangelicalism and occultism, a child of the Second Great Awakening, who in those flinty woods of New York dreamt of finding shining golden tablets left by angels. Writing in No Man Knows My History: The Life of Joseph Smith, scholar Fawn M. Brodie notes that for the New England and New York ancestors of Smith there was a "contempt for the established church which had permeated the Revolution, which had made the federal government completely secular, and which was in the end to divorce the church from the government of every state."

Smith rather made America itself his invented religion. Stephen Prothero writes in American Jesus: How the Son of God Became a National Icon that there is a tendency of "Americans to make their nation sacred—to view its citizens as God's chosen people." Yet it was only Smith's Mormons who so completely literalized such a view, for the Book of Mormon describes this as "a land which is choice above all other lands." The effect was electrifying; Brodie writes: "In the New World's freedom the church had disintegrated, its ceremonies had changed, and its stature had declined." What remained was a vacuum in which individual minds could dream of new faiths. Spinoza would recognize such independence, his thin face framed by his curled wig, reflected back from the polished glow of one of Moroni's tablets excavated from the cold ground of Palmyra, N.Y.

8. "In the beginning there was the Tao, and the Tao was God," reads John 1:1 as translated in the Chinese Union Version bible commissioned by several Protestant denominations between 1890 and 1919. Appropriating the word "Tao" makes intuitive sense, arguably closer to the Neo-Platonist language of "Logos" as the term is rendered in the Koine Greek, than to the rather confusing terminology of "the Word" as it's often translated in English.

Read cynically, this bible could be seen as a disingenuous use of Chinese terminology so as to make Christianity feel less foreign and more inviting, a Western wolf in Mandarin robes. More charitably, such syncretism could be interpreted as an attempt to find the universal core between those two religions, a way of honoring truth regardless of language. Conversion not between faiths, but above them. Perhaps naïve, but such a position might imply that conversion isn't even a possibility, that all which is needed in the way of ecumenicism is to place the right words with the right concepts.

The earliest synthesis between Taoism, Buddhism, Confucianism, and Christianity is traceable to the seventh century. At the Mogao Caves in Dunhuang, Gansu Province, a cache called the Jingjiao Documents, penned during the Tang Dynasty and attributed to the students of a Syrian monk named Alopen, was rediscovered in 1907. Alopen was a representative of that massive eastern branch of Christianity slurred by medieval European Catholics as being "Nestorian," after the bishop who precipitated their schism at a fifth-century church council (the theological differences are arcane, complicated, and for our purposes unimportant).

During those years of late antiquity, European Christendom was a backwater; before the turn of the first millennium the Catholicos of Baghdad would have been a far more important cleric than the Pope was, for as scholar Philip Jenkins explains in The Lost History of Christianity: The Thousand Year Golden Age of the Church in the Middle East, Africa, and Asia—and How It Died, the "particular shape of Christianity with which we are familiar is a radical departure from what was for well over a millennium the historical norm…For most of its history, Christianity was a tricontinental religion, with powerful representation in Europe, Africa, and Asia."

In 635, Alopen was an evangelist to a pluralistic civilization whose history went back millennia. His mission was neither colonial nor mercantile, and as a religious scholar he had to make Christianity appealing to a populace content with their beliefs. And so, Alopen converted the Chinese by first converting Christianity. As with the translators of the Chinese Union Version bible, Alopen borrowed Taoist and Buddhist concepts, configuring the Logos of John as the Tao, sin as karma, heaven as nirvana, and Christ as an enlightened Bodhisattva.

Sinologist Martin Palmer, writing in The Jesus Sutras: Rediscovering the Lost Scrolls of Taoist Christianity, argues that Alopen avoided "what many missionaries have tried to do—namely, make people adapt to a Western mind-set." Rather, Alopen took "seriously the spiritual concerns of China." Alopen was successful enough that some 150 years after his arrival, a limestone stele was engraved in both Chinese and Syriac celebrating the history of Chinese Christianity. With a massive cross at the top of the Xi'an stele, it announced itself as a "Memorial of the Propagation in China of the Luminous Religion from Rome." During a period of anti-Buddhist persecution in the ninth century, when all "foreign" religions were banned, the stele was buried, and by 986 a visiting monk reported that "Christianity is extinct in China."

Like Smith uncovering his golden tablets, workers in 1625 excavated the Xi'an stele, and recognizing it as Christian, sent for Jesuits who were then operating as missionaries to the Ming Court. Portuguese priest Alvaro Semedo, known to the court as Xie Wulu, saw the stele as evidence of Christian continuity; other clergy were disturbed that the monument was from a sect that the Church itself had deemed heretical 1,000 years before. German Jesuit polymath Athanasius Kircher supplied a Latin translation of the stele, enthusing in his China Illustrata that the stele's rediscovery happened by God's will "at this time when the preaching of the faith by way of the Jesuits pervaded China, so that old and new testimonies…would go forth…and so the truth of the Gospel would be clear to everyone." But was it so clear, this strange gospel of the Tao?

Much of Kircher's book was based on his fellow Jesuit Fr. Matteo Ricci's accounts of the Ming Court. Ricci had taken to wearing the robes of a Confucian scholar, borrowing from both Confucius and Lao-Tzu in arguing that Catholicism was a form of those older religions. The Dominicans and Franciscans in China were disturbed by these accommodations, and by 1645 (some 35 years after Ricci had died) the Vatican's Sacred Congregation for the Propagation of the Faith ruled against the Jesuits (though this was a process that went back and forth). Maybe there is something fallacious in simply pretending all religions are secretly the same. Prothero writes in God Is Not One: The Eight Rival Religions That Run the World, that we often have "followed scholars and sages down the rabbit hole into a fantasy world in which all gods are one." Catholicism is not Taoism, and that's to the integrity of both.

But Ricci's attitude was a bold one, and in considering different beliefs, he was arguably a forerunner of pluralistic tolerance. We risk abandoning something beautiful if we reject the unity that Alopen and Ricci worked for, because perhaps there is a flexibility to conversion, a delightful promiscuity to faith. Examining one of the Chinese watercolors of Ricci, resplendent in the heavenly blue silk of the panling lanshan with a regal, heavy, black putou on his head, a Roman inquisitor may have feared who exactly was converting whom.

9. In the painting, Sir Francis Dashwood—11th Baron le Despencer and Great Britain's Chancellor of the Exchequer from 1762 to 1763—is depicted as if he were St. Francis of Assisi. Kneeling in brown robes, the aristocrat is a penitent in some rocky grove, a hazy blue-grey sfumato marking the countryside visible through a gap in the stones. In the corner is a silver platter, grapes and cherries tumbled onto the soil of this pastoral chapel, as if to remind the viewer of life's mutability, "Vanity of vanities" and all the rest of it. Some tome—perhaps the Bible?—lies open slightly beyond the nobleman's gaze, and with hand to breast, Dashwood contemplates what looks like a crucifix. But something is amiss in this portrait painted by Dashwood's friend, that great notary of 18th-century foibles William Hogarth. The crucifix—it's not Christ on the cross, but a miniature nude woman with her head thrown back. Suddenly the prurient grin on the stubbly face of Dashwood makes more sense.

If you happen to be an expert on 18th-century French pornography, you might notice that it's not the gospels that lie open, spine cracked, next to Dashwood, but a copy of Nicolas Chorier's Elegantiae Latini sermonis; were you familiar with the intricacies of Westminster politics in the 1760s, you may have observed that rather than a golden, crescent halo above the baron's head, it's actually a cartoon of the Earl of Sandwich in lunar profile.

Raised in the anti-Catholic environment of British high society, Dashwood had his disdain for religion incubated during his roguish youth while on his fashionable Grand Tour of the continent—he was expelled from the Papal States. In the anonymously written 1779 Nocturnal Revels, a two-volume account of prostitution in London, the author claims that Dashwood "on his return to England, thought that a burlesque institution in the name of St. Francis, would mark the absurdity of such Societies; and in lieu of the austerities and abstemiousness there practiced, substitute convivial gaiety, unrestrained hilarity, and social felicity."

To house his "Franciscans," Dashwood purchased a former Cistercian abbey in Buckinghamshire that overlooked the Thames, and in dazzling stained glass had inscribed above its entrance the famous slogan from the Abbey of Thelema in Francois Rabelais's 16th-century classic Gargantua and Pantagruel—"Do What Thou Wilt." Its grounds were decorated with statues of Dionysus—Julian the Apostate's revenge—and the gothic novelist (and son of a Prime Minister) Horace Walpole wrote that the "practice was rigorously pagan: Bacchus and Venus were the deities to whom they almost publicly sacrificed; and the nymphs and the hogsheads that were laid in against the festivals of this new church." Within those gothic stone walls, Dashwood's compatriots very much did do what they would, replacing sacramental wine with liquor, the host with feasting, and the Mass with their orgies. The Monks of Medmenham Abbey, founded upon a Walpurgis Night in 1752, initiated occasional worshipers including the respected jurist Robert Vansittart, John Montagu, 4th Earl of Sandwich, the physician Benjamin Edward Bates II, the parliamentarian George Bubb Dodington, and in 1758 they hosted a colonial scientist named Benjamin Franklin (fresh from a Whitefield revival no doubt).

Such gatherings were not uncommon among the bored upper classes of European society; Black Masses were popular among French aristocrats into the 17th century, and in Britain punkish dens of obscenity like Dashwood’s were known as “Hell-Fire Clubs.” Evelyn Lord writes in her history The Hellfire Clubs: Sex, Satanism and Secret Societies that long before Dashwood ever convened his monks, London had been “abuzz with rumors of highborn Devil-worshipers who mocked the established Church and religion, and allegedly supped with Satan,” with the apparently non-Satanic members of Parliament pushing for anti-blasphemy legislation.

That's the thing with blasphemy though—there's no Black Mass without first the Mass, no Satan without God. Irreverent, impious, and scandalous though Dashwood may have been, such activities paradoxically confirm faith. Lord writes that the "hell-fire clubs represented an enduring fascination with the forbidden fruit offered by the Devil…But the members of these clubs faced a dilemma: if they believed in Satan and hell-fire, did they by implications believe in a supernatural being, called God, and a place called Heaven?" Should the sacred hold no charged power, were relics simply bits of rag and bone, then there would be no electricity in their debasement; were a crucifix meaningless, then there would be no purpose in rendering it pornographic. A blasphemous conversion, it turns out, may just be another type of conversion.

Geoffrey Ashe argues in The Hell-Fire Clubs: Sex, Rakes and Libertines that Thelema is an antinomian ethic that can be traced from Rabelais through the Hell-Fire Clubs onto today. He writes that such a history is “strange and unsettling. It discloses scenes of pleasure and laughter, and also some of the extremist horrors ever conceived. It introduces us to cults of the Natural, the Supernatural; to magic, black and otherwise.” Dashwood’s confraternity encompasses figures as diverse as the Marquis de Sade, the notorious occultist Aleister Crowley (who had Rabelais’s motto carved above the entrance to his own monastery in Sicily), and LSD evangelist Timothy Leary. Fear not the blasphemer, for such is merely a cracked prophet of the Lord. As Master Crowley himself wrote in Aceldama: A Place to Bury Strangers, “I was in the death struggle with self: God and Satan fought for my soul those three long hours. God conquered – now I have only one doubt left—which of the twain was God?”  

10. When the Blessed Kateri Tekakwitha, lily of the Mohawks and the sainted maiden of the Iroquois village of Kahnawake, laid her head upon her deathbed one chill spring in 1680, it was said that the disfiguring scars of the smallpox she'd contracted vanished from her beautiful corpse. There in the dread wilderness of New France, where spring snows fall blue and deep and the horizon is marked with warm smoke from maple longhouses and fallen acorns are soggy under moccasin slippers, America's indigenous saint would die. A witness recorded that Tekakwitha's face "suddenly changed about a quarter of an hour after her death, and became in a moment so beautiful." A fellow nun records that the evening of the saint's death, she heard a loud knock at her door, and Tekakwitha's voice saying "I've come to say good-bye; I'm on my way to heaven."

Tekakwitha's short decades were difficult, as they must by necessity be for anyone who becomes a saint. She was a victim of a world collapsing in on itself, of the political, social, economic, and ecological calamities precipitated by the arrival of the very people whose faith she would convert to, one hand holding a bible and a crucifix, the other a gun—all of them covered in the invisible killing virus. Despite it being the religion of the invaders, Tekakwitha had visions of the Virgin and desired conversion, and so she journeyed over frozen Quebec ground to the village of the "Black Robes" who taught that foreign faith.

When Tekakwitha met with the Jesuits, they told the Iroquois woman not of the Tao, nor did they speak of heaven, rather they chanted a hymn of Karonhià:ke, the realm from which the father of all things did send his only son to die. Of her own accord, Tekakwitha meditated on the words of the Jesuits, her confessor Fr. Cholenec recording that she finally said "I have deliberated enough," and she willingly went to the baptismal font. She has for the past three centuries been America's indigenous saint, a symbol of Christ reborn on this land, the woman of two cultures whom William T. Vollmann describes in his novel Fathers and Crows as "Tekakwitha…praying besides the Cross of maple wood she had made."

Much controversy follows such conversions: are we to read Tekakwitha—who endures as a symbol of syncretism between Christianity and indigenous spirituality—as a victim? As a willing penitent? As some cross between the two? In his novel Beautiful Losers, the Canadian poet, novelist, and songwriter Leonard Cohen says of Tekakwitha that a “saint does not dissolve the chaos.” Tekakwitha is not a dialectic to resolve the contradictions between the Catholic and the Iroquois, the French and the Mohawk. She is not an allegory, a parable, a metaphor, or an example—she is Tekakwitha, a woman.

If we are to draw any allegorizing lesson from her example, it must be this—conversion, like death, is something that is finally done alone. Who are we to parse her reasons for embracing that faith, any more than we can fully inhabit the decisions of Julian, or Spinoza, or Hansmann, or Ricci? Nothing can be more intimate, or sometimes more surprising, than the turn of a soul, the conversion of a woman or man. We aren't known to one another; we're finally known only to God—though it's impossible to say which one. When Tekakwitha's appearance changed, was this an indication of saintliness? Of her true form? Of the beatified face when it looks upon the creator-god Ha-wen-ni-yu? All that can be said of conversion is that it's never final; we're always in the process of being changed, and we pray that it's possible to alter our broken world in return. Converts, like saints, do not reconcile the chaos; they exist amidst it. In hagiography, we find not solution, but mystery—as sacred and holy as footprints on a virgin Canadian snow, finally to be erased as the day turns to night.

Image credit: Unsplash/Diana Vargas.

Philosophizing the Grave: Learning to Die with Costica Bradatan

“It was the hemlock that made Socrates great.” —Seneca

“Honorable purpose in life invites honorable purpose in death.” —David Buckel

On an early spring morning in 2018, when the stars were still out and Manhattan glowed in all of its wasteful wattage across the East River, a 60-year-old retired lawyer named David Buckel made his way past the Grand Army Plaza and the Brooklyn Museum down to Prospect Park. In those hours before the dawn, Buckel dug a shallow circle into the black dirt, which investigators believed he made to prevent the spread of the fire he was about to set, having understood that our collective future would have enough burning. The former attorney paused to send an email to The New York Times—it read: “Most humans on the planet now breathe air made unhealthy by fossil fuels, and many die early deaths as a result—my early death by fossil fuel reflects what we are doing to ourselves”—doused himself in gasoline, and lit a match. Buckel was pronounced dead by 6:30 a.m.

As a civil rights attorney, Buckel's life was one of unqualified success—an early supporter of Lambda Legal, he'd fought doggedly for LGBTQ rights not just in New York, but in seemingly inhospitable places from Nebraska to Iowa. As a human being, his life was defined by companionship and love—raising a daughter with his husband of 34 years alongside the girl's biological mother and her wife. And as a community member, his life was committed to stewardship and responsibility—rather than using wasteful fossil fuels he walked several miles every day to the Brooklyn Botanic Garden where, in retirement, he organized the center's composting program. By all accounts Buckel's life was centered around justice, both personal and ecological, which should go some way toward explaining his death on April 14, though the philosopher Costica Bradatan reminds us that wherever "self-immolators are going they are not asking anyone with them: their deaths are fierce, but remain exclusively their own."

With some incongruity, I thought about Buckel's sacrifice as I sat by my apartment complex's pool on a hot New England summer day (they seem hotter and hotter), welcoming my 35th birthday by reading Bradatan's poignant, provocative, astute, moving, thoughtful Dying for Ideas: The Dangerous Lives of the Philosophers. A philosopher at Texas Tech University, as well as an honorary research professor at Australia's University of Queensland, Bradatan has in his role as religion editor for The Los Angeles Review of Books consistently proven the utility of philosophical and theological ideas when it comes to the art of living. In Dying for Ideas, Bradatan examines those who sacrificed their lives for ideas through a "purely secular martyrdom" (though one of his subjects was a saint): the deaths of thinkers for whom the nature and impact of their executions "confers a discrete sublimity upon these philosophers' gestures. It is great to die for God, but it may be greater to die for no God." Like the women and men Bradatan writes of, from Thomas More awaiting his beheading in the Tower of London's turret to the anti-Soviet dissident Jan Patočka tortured in a Prague interrogation room, Buckel had lived his life, and his death, for an idea. In Buckel's death, Bradatan might argue, we see not just suicide, but an argument about life, where what "we feel toward the person performing [self-immolation]…is in fact a complex mix of fear and respect, of fascination and repulsion, attraction and revulsion, all at once."

What Dying for Ideas clarifies is that we must not look away from the pyre; we must consider these deaths that often turn out to be the "threshold where history ends and mythology begins," as Bradatan writes. Buckel attempted a sacrifice similar to those of the Buddhist nuns and monks who'd immolated themselves during the Vietnam War, people of incomparable bravery and detachment whom Buckel had long admired. In short, the attorney wished to turn his body into a candle "helping others to find the right direction," as Bradatan might have put it. As we perhaps approach our own collective martyrdom, measuring the rising tide and warmth alike, there is a pressing wisdom that we must grapple with, what the medievals called the Ars moriendi, the art of the good death. Can we notice the warmth of Buckel's auto-da-fé amidst the escalating temperatures, or do such flames recede into the apocalyptic heat?

Morbid thoughts to have in the sunniness of a Boston June, listening to a "'90s Hits" Spotify playlist. Yet in the hazy days of the Anthropocene, it's hard not to feel the uncanniness of Buckel's life and death, both lived with authenticity, while the rest of us wait for the ice caps to melt listening to Radiohead and Soundgarden on our smartphones. While taking a break from Bradatan's astute readings of those martyrs like Socrates, Hypatia, and Giordano Bruno who died not for God or country, but rather for philosophical ideas, I paused to scroll through my Twitter feed, only to come across a Vice article titled "New Report Suggests 'High Likelihood of Human Civilization Coming to an End' Starting in 2050." That's the year that I'll be (theoretically) "celebrating" my 66th birthday. If I hadn't paid attention to why Buckel died when it happened, it would behoove me to take notice now.

Dying for Ideas came out three years before Buckel's suicide, but the glow of his burning body couldn't help but give off more light by which to read Bradatan's book. He writes of a death like Buckel's that it "does not always mean the negation of life—sometimes it has the paradoxical capacity of enhancing it, of intensifying it to the point of, yes, breathing new life into life." As Socrates died for the good, as Hypatia died for reason, as More died for faith, and as Bruno died for magic, so Buckel died for the Earth. When scheduling my reviews and writing projects for the summer, I hadn't intended to necessarily be reading Dying for Ideas on my 35th birthday, but there's something appropriate in having Bradatan as my Virgil for the date that Dante described in The Divine Comedy as being "Midway upon the journey of our life/[when] I found myself within a dark forest." Now the dark wood has been paved over, and all of its pollinating insects are going extinct, so for a member of the millennial generation facing the demise of our entire ecosystem and the possible extinction of humanity, both Buckel's death and Bradatan's invocation of those who faced their own demise with magnanimity and wisdom hang humid in the air.

Dying for Ideas embodies the French Renaissance essayist Michel de Montaigne's conviction that "To philosophize is to learn to die." Bradatan makes the compelling case that to evaluate philosophers in their entirety, you must ascertain not just the rationality of their arguments, but whether their lives were consistent with their claims—and that for a few singular philosophers, whether their deaths also make their arguments. For philosophers like Bradatan, philosophy is a method as much as a discipline; not merely the realm of logical syllogism, but something that prepares us for the "perilous journey from the agora to the scaffold or the pyre." If you've ever taken an undergraduate philosophy course, Montaigne's aphorism might not strike you as accurate, for the Anglo-American academic discipline goes a long way to perennially proving Henry David Thoreau's crack that there are a lot of philosophy professors, but precious few philosophers.

In the American and British academy, the vast majority of university departments tend to orient themselves towards what's called analytical philosophy, a tradition that, rather than engaging those eternal philosophical questions concerning the examined life, is content with precise, logical anal retention; pleased to enumerate all of the definitions of the word "is" rather than to question how it is that we should live. Fortunately, Bradatan is a disciple of that other contemporary wing, the continental philosophy of 20th-century France and Germany that is still capable of approaching the discipline as "an act of living…[that] often boils down to…learning how to face death—an art of dying," as Bradatan puts it. He brings to bear not the rigid, rigorous, logical arguments of analytical philosophers like A.J. Ayer, Saul Kripke, and Willard Van Orman Quine—men who for all their brilliance and importance did little to illuminate the philosophical issue of how we are to live—in favor of the impressionistic and poetic truths embodied by thinkers like Simone Weil, Martin Heidegger, and most of all Pierre Hadot.

That last philosopher has never quite achieved the reputation he deserves in the Anglophone world, his philosophy concerning "care for the self" instead filtered through more popular disciples, such as the historian Michel Foucault. In Hadot's estimation, the pre-Socratic philosophers of ancient Greece approached their thought not as a means of abstract contemplation, or as a way of producing one more publication for the tenure review file, but rather as a solemn, rigorous, unsparingly honest examination and method that served to transform one's very life. As Hadot wrote in What Is Ancient Philosophy?, there is "No discourse worthy of being called philosophical, that is separated from the philosophical life." For philosophers in Hadot's lineage, including Bradatan, to approach philosophy as if it were simply the academic discipline that investigates the history of ideas, or with even more futility as a type of logical game, is to abdicate a solemn responsibility. To reduce philosophy to nothing more than another university department with journals and conferences is to reject it as something that can change your life.

Thus do we honor Socrates, that ugly little gadfly who in acknowledging his ignorance was paradoxically the wisest man in Athens. Socrates is Bradatan's first martyr-philosopher, the most celebrated of men who died for an idea. Such is the power of the event, the forced drinking of hemlock at the hands of an Athenian state that claimed Socrates had corrupted the youths of the city and preached against her gods. He is not an uncomplicated figure, from the detached, calm, almost absurdly professorial presence of clean lines and smooth surfaces in French painter Jacques-Louis David's The Death of Socrates, to radical journalist I.F. Stone's The Trial of Socrates with its revisionist interpretation of the anti-democratic teacher's death as being in some sense warranted.

What Bradatan does differently is that he reads the death of Socrates itself as an argument to be interpreted—treating the moment of extinction itself as one would a proof. The historical Socrates is himself mute; though he appears in the recollections of the historian Xenophon, and while he is the beguiling central character in the dialogues of his student Plato, no actual writing by the founder of Western philosophy himself exists. Considering Plato's treatment of Socrates in dialogues like The Republic, The Symposium, The Crito, and The Apology, Bradatan writes that the teacher's "voice is authoritative, compelling, commanding, almost annoyingly so. Yet his silences—whenever they occur…these silences can be unbearable. This is Socrates at his most uncanny." With all of the words of Socrates that we have being mediated through the mouth of another, Bradatan rather returns to his death as the ultimate silence, a masterpiece to be read as rigorously as any dialogue of Plato's.

In Socrates's willing execution, Bradatan sees a consistency of purpose and a representation of how the philosopher argued we should live our lives. Had Socrates capitulated to the court, had he admitted to wrongdoing and served a lighter sentence, it would have been to invalidate his own teaching. Such consistency of purpose is something that unites the martyrs he examines, no matter how different their actual thought may have been. Bradatan writes that "With the spectacle of their dying bodies alone they had to express whatever they could not communicate through all their rhetorical mastery…death was the most effective means of persuasion…such a death is a philosophical work in its own right—sometimes a masterpiece."

When Bruno was hoisted atop the burning green wood of Rome's Campo de' Fiori, he abjured the penitential crucifix presented by the tonsured Dominicans, offering back rather a blasphemous stream of invective in his thick Neapolitan accent, the proud heretic to the very end. And from his execution, though Bruno was himself an advocate for magic, occultism, and hermeticism, the Italian Enlightenment would find inspiration against the forces of Inquisition and superstition that had condemned him; his death becoming a far more potent argument than anything he ever actually wrote. More than a millennium before, Hypatia had made a very different argument after Christian zealots under the command of Alexandria's bishop Cyril grabbed the Neo-Platonist mystic and mathematician, dragged her through the filthy streets of that cosmopolitan city and ultimately stripped her of her clothes, and then cut her very flesh from her body using either sharpened pottery shards or oyster shells depending on the account. Supposedly Hypatia uttered no words or cries. Hypatia's silence had its own lessons, coming as it did from a woman in a culture that denied half the population its bodily autonomy, a rational mystic who thought the material world was fallen and an illusion. Though none of Hypatia's writings survive, she has often been figured as the first feminist, her silent protest even as she was murdered making its own argument about the power of obstinate quiet. By contrast, More approached the infinite not with silence, but with wry British humor, the ever ironic and mutable author of Utopia ascending the scaffold and telling his executioner, "See me safe up, as for my coming down, I can shift for myself."

All of us will rendezvous with the eternal one day, and most philosophy professors will die in their beds. As Bradatan confesses, "No matter how hard we fight, how graceful our dance, how bold our stance, the end is always the same: total annihilation." But these philosophers were those whose deaths were an apotheosis, albeit accomplished in different ways: from Socrates's lecturing to Hypatia's silence, Bruno's cursing to More's joking, all in their varied ways confirming Bradatan's contention that "the point is not to avoid death but to live without fear and humiliation before it comes." Were that the only critical intervention of Dying for Ideas, it'd be powerful enough, but Bradatan makes his most important (and audacious) contribution concerning the societal importance of such sacrifice.

Drawing on the work of the French anthropologist Rene Girard, Bradatan applies what’s called the “scapegoat mechanism,” Girard’s contention that all of human civilization is propelled by the occasional sacrifice of some kind of innocent agent on whom all of the sins of the culture are imparted. This is most obvious in Christianity, as Girard writes of the “non-violent God who willingly becomes a victim in order to free us from our violence” in Evolution and Conversion: Dialogues on the Origins of Culture. As Christ was supposed to be a sacrifice who died for the world, so Bradatan argues that these philosophers’ deaths functioned as sacrifices for the truth, the radical honesty that Foucault calls parrhesia. The philosopher possesses a radical freedom and powerful powerlessness that is perhaps only matched by that of the jester; she has the agency to confront and compel the truth where others are mired in lies and delusions, and in her sacrifice the philosopher dies so that we may light ourselves out of our ignorance. All of Bradatan’s examples are partisans of parrhesia, and all took leave of this world at moments of profound social and cultural dislocation and crisis. Socrates was sacrificed by a democratic state extricating itself from years of tyranny, Hypatia skinned by a crowd that watched the eclipse of paganism and the ascendancy of Christianity, Bruno immolated upon pyres set by men whose religious certainties were challenged by the scientific revolution, More decapitated as Christendom was fractured by the Reformation, and Patočka tortured by a communist state in which the people no longer had any faith. Bradatan explains that scapegoats are needed in places where “An atmosphere of doom settles in, society is divided, the crisis persists. Everything seems to move in circles, the usual remedies don’t seem to work. Something radical is needed.” What we see in Socrates’s calm disputation or More’s scaffold stand-up act is not simply death, but the sacrifice of parrhesia so that the rest of us may know truth, an example of thinkers whom Bradatan describes as “mystics [who] are God’s traffickers” that “routinely smuggle bits of eternity into our corrupted world.”

So, I return to my phone, skimming accounts of collapsing ice shelves and flooding riverbanks, reading articles about how humanity may have less than a few decades left, pushed to the brink by the irrational, insatiable hungers of our economic system and its supporting faith. As analysis, Bradatan offers us a claim about how some brave souls die for ideas, how their sacrifices are meant to illuminate those malignancies that threaten a society in collapse. Women and men like Bruno and Hypatia are set aside from the realm of the rest of us; they are, in the original sense of the word, sacred. According to Bradatan, when the sacred "erupts into our lives…its presence is unmistakable, its effects lasting, its memory haunting." The martyred, these sacred women and men, are separate from us regular folk who are more content to scroll through Twitter by the pool rather than die for an idea. If ever a people needed a holy sacrifice uttering parrhesia on the scaffold, a sacred scapegoat shouting the truth from a pyre, it is our own. In the social media frenzy of orthographically challenged White House tweets and the hermetic reading of Mueller investigation tea leaves, the media moved on from Buckel's martyrdom in Prospect Park with disturbing promptness—just another horrific story receding deep into our news feeds, his radical honesty drowned out in the static of so much otherwise meaningless information permeating our apocalyptic age. Perhaps it's time to listen to him, before it's too late.

Image credit: Unsplash/Cullan Smith.

Is There a Poet Laureate of the Anthropocene?

“Annihilating all that’s made/To a green thought in a green shade.” -Andrew Marvell, 1681

Sometime in 1612, the genius dramatist, unofficial Poet Laureate, and all-around Falstaffian personality that was the writer Ben Jonson imagined a sort of epic voyage down London's now-long-lost Fleet River. Jonson's epic concerned neither the harrowing of hell, nor the loss of Paradise, or at least it didn't do either in quite the manner that Dante had or that Milton would. Rather, Jonson envisioned the travails of his characters on this industrial Styx as less sacred and more profane, lacking transcendence but making up for it in the sheer fecundity of sewage that floated upon the canal that today flows underneath the bohemian environs of Camden Town, and whose tinkling can be heard through sewer grates in Clerkenwell.

In Jonson's mock-epic "On the Famous Voyage," the bucolic and pastoral have been replaced with offal and shit, where "Arses were heard to croak, instead of frogs," and a thick crust called "Ycleped Mud" composed of excrement and refuse was known to bob like moss on the surface of the water. So disgusting are the noxious fumes from both commerce and latrine that Jonson's colleague, the poet Sir John Harington, would write in his 1596 A New Discourse of a Stale Subject Called the Metamorphosis of Ajax that such smells "are two of those pains of Hell…and therefore I have endeavored in my poor buildings to avoid those two inconveniences as much as I may." And so, at his manor of Kelston, he constructed the forerunner of the modern flushing toilet.

Conventions of pastoral poetry often had characters with names like Strephon and Chloe in repose upon halcyon shores; Jonson’s is rather a sort of anti-pastoral, one in keeping with the grime and dirt that increasingly defined his era.  On the Fleet River, “The sinks ran grease, and hair of measled hogs, /The heads, houghs, entrails, and the hides of dogs.” Flowing from the center green of the city out to the Thames, the Fleet was polluted with the garbage of nascent industry, a slick, oily stream; a fetid and beshitted, offal-filled cesspool, made glistening with the rendering of animal fat and the corpses of dogs and cats. Notorious prisons like Ludgate and Newgate were on the banks of that now-subterranean river; plague-ridden slums filled with rural transplants clung to the brown shores of the Fleet. 

In his delightful The Time Traveller's Guide to Elizabethan England, social historian Ian Mortimer describes the privies that would have been in homes lining rivers like the Fleet: a "twelve-foot shaft of several hundred gallons of decomposing excrement and urine…seeping into the clay, for two or three years." Mortimer reminds us, however, that though "Noisome smells and noxious fumes are common" in early modern England, this "does not mean that people do not notice them." Indeed, both the population growth of London as well as the beginnings of mass industry, from leather tanning to wool dyeing, would have wafted new smells into the nostrils of the English. As Jonson wrote with infernal glee, "Your Fleet Lane Furies…That, with still-scalding streams, make this place hell."

Filth had been a topic of literary expression long before Jonson; one need only read Geoffrey Chaucer, Francois Rabelais, or Giovanni Boccaccio to know that poets have long sung not just of heaven, but of the asshole, the piss-bucket, and the privy as well. I'd venture that "On the Famous Voyage" does something a little different from the scatological depictions in The Canterbury Tales, however. Because Jonson's London was so much bigger than Chaucer's, because it was just beginning to be ensnared in the environmental degradations of industrialization, the scope of the olfactory and hygienic assault is greater than a medieval writer could have imagined. "On the Famous Voyage" is satirical verse, yes; but it's also an ecological poem.

Almost two centuries before the Romantic poet William Blake would castigate the "dark Satanic Mills" of Britain's industrial revolution, Jonson gave expression to misgivings about how London was quickly erasing nature in the name of mercantile aspiration. Throughout the 16th century, London expanded from the sleepy capital of an agrarian kingdom into what would soon be the largest city on Earth. Around when Jonson was born, the city's population was roughly 70,000 people; by the time he wrote "On the Famous Voyage," it had grown to 200,000. Only a half-century later, London was home to half a million women and men. Emily Cockayne writes in Hubbub: Filth, Noise & Stench in England that "London was a wealthy bustling and expanding city, but infrastructural development could not keep pace and parts of the city became increasingly crowded, dirty and noisy." Jonson didn't just speak of dirt, he sang about waste; he didn't just talk of filth, he was revolted by garbage. "On the Famous Voyage" was among the first of what critics call ecopoems, and that's because it's an early missive from the beginning of our current geological epoch of the Anthropocene.

That term has become recently trendy in academic discussions of literature, a designation borrowed from geologists and climatologists to clarify the ways in which the Earth has been inextricably altered by humanity. With coinage frequently credited to the Nobel Prize-winning chemist Paul Crutzen, the Anthropocene is supposed to define our current era, when people have (mostly for worse) altered the environment of the world in such a way that we've become the dominant actor and force in the acidity of the ocean, the thickness of the ozone layer, the very temperature of the planet. Legal scholar Jedediah Purdy explains in After Nature: A Politics of the Anthropocene that "we have made the world our anthill: the geological layers we are now laying down on the earth's surface are marked by our chemicals and other industrial emissions, the pollens of our crops, and the absence of the many species we have driven to extinction."

Scientists disagree on when it's appropriate to mark the beginnings of the Anthropocene. As the period is most spectacularly defined by anthropogenic climate catastrophe, the Industrial Revolution of the late 18th and 19th centuries, with its harnessing of fossil fuels such as coal and oil, would seem an appropriate starting point. Others place the Anthropocene's creation moment as recently as the Trinity test of the first atomic bomb in 1945, or as long ago as 10 millennia, when agriculture first emerged on the silty banks of the Tigris and Euphrates. At the risk of betraying my own early-modern-minded myopia, a credible case could be made for Jonson's era as the dawn of the Anthropocene, which would have certain implications for how we read him and his compatriots in our own day, when the United Nations Intergovernmental Panel on Climate Change's 2018 report concludes that we may have less than a decade to avert the worst results of global warming.

There are social, cultural, technological, and economic reasons for understanding the 16th and 17th centuries as the earliest decades of the Anthropocene. By Jonson’s birth there had already been a century of the Columbian Exchange, whereby the flora and fauna of the western and eastern hemispheres, which had been separated for millions of years, suddenly circumnavigated the world in a flurry of trade that radically altered the planet. Many of the economic and social trends that we associate with modernity—colonialism, globalization, and capitalism—see their start in Jonson’s century. The Renaissance also helps us to understand the interactions between climate and humanity, as the women and men of Jonson’s day were in the midst of what’s been called the “Little Ice Age.” During that period, temperatures plummeted, possibly due to the reforestation of North America brought about by plague and genocide that decimated native populations. Arguably, this process didn’t end until the cumulative effect of the Industrial Revolution’s mass emissions of carbon dioxide began to warm the planet—obviously an ongoing process. During those years of snow and ice, Europe appeared radically different from the way it does today, as accounts of Tudor fairs upon the frozen Thames or the grey winter paintings of Pieter Bruegel attest. Philipp Blom in Nature’s Mutiny: How the Little Ice Age of the Long Seventeenth Century Transformed the West and Shaped the Present writes that “Climate change…affected everyone. There was no escaping the weather.” What’s crucial to remember is that though the thermometer’s mercury was headed in a different direction than it is today, the weather of Jonson’s day was also shaped profoundly by the affairs of people.

An important result of humanity’s changed relationship to nature—the alteration that defines the Anthropocene—is the emergence of new urban spaces, new relationships between people and place that fundamentally changed the experience of what it means to be an inhabitant of Earth. Something new appears in “On the Famous Voyage”: Jonson has produced a literature that isn’t just about hygiene (or the lack thereof) but about mass pollution. For such lyrics to be written, the conditions of crowded, filthy, industrialized urbanity were required. “On the Famous Voyage” is about environmental collapse. Though rarely thought of as such, Jonson is an ecopoet at the precise moment in history when we redefined our relationship to nature—for the worse. Which is precisely what calls for a reevaluation of not just Jonson, but of that entire tribe of under-read 17th-century poets whom he influenced and who called themselves the “Tribe of Ben,” posterity remembering them (when it does) as the Cavalier poets. Writers like Robert Herrick, John Suckling, Thomas Carew, Richard Lovelace, and, most famous of them though only occasionally categorized in their company, Andrew Marvell. Editor Miriam K. Starkman writes in her introduction to 17th-Century English Poetry that for the Cavalier poets, “External nature is…[the] most direct referent, source of beauty, joy, and mutability.”

For the Cavaliers, the pastoralism of classical poets like Hesiod and Virgil had much to recommend it. One of their favored genres was the “country-house poem,” where their ecological concerns become apparent. In Jonson’s 1616 “To Penshurst,” he described the manor of Sir Robert Sidney with a language very different from that which he deployed four years earlier in his panegyric to the Fleet River. In “To Penshurst” Jonson extols this estate with its “better marks, of soil, of air, /Of wood, of water,” this “lower land, that to the river bends, /Thy sheep, thy bullocks, kine, and calves do feed;/The middle grounds thy mares and horses breed.” Jonson’s is a rhetoric of Eden; with prelapsarian tongue he describes:

…thy orchard fruit, thy garden flowers,
Fresh as the air, and new as are the hours.
The early cherry, with the later plum,
Fig, grape, and quince, each in his time doth come;
The blushing apricot and woolly peach
Hang on thy walls, that every child may reach.

Such prelapsarian evocation of Eden is a common trope in country-house poems, what Starkman described as “vague overtones of a faded fragrance, a world just lost.” Unlike the Puritan, the Cavalier does not simply mourn paradises lost, but rather preserves a bit of that charged immanence within nature as it is now, allowing for the possibility of transcendence just below surface appearances. What the country-house poem presents is paradise in verse, a lyric crafted by the human mind as surely as a garden is planted by human hands, with the verse itself becoming a type of perfection that you can step into. Consider Jonson’s clear influence in Marvell’s almost-perfect 1681 “Upon Appleton House”:

Ripe apples drop about my head;
The luscious clusters of the vine
Upon my mouth do crush their wine;
The nectarine and curious peach
Into my hands themselves do reach;
Stumbling on melons as I pass,
Ensnar’d with flow’rs, I fall on grass.

By imagining a world without the fall, poems such as these confront us with the possibility of a future where the fall has been reversed, where the exploitation of nature has ceased. “On the Famous Voyage,” with its bawdy, earthy, fecal corporeality, may seem a long distance from Penshurst Place. Yet pastoralism and its discontents are actually part of the same project; depicting nature in its abundance and depicting its exploitation share a similar ideological underpinning. Starkman explains that for the Cavaliers, there is a “stoical awareness of the tragedy of Nature, the garden of innocence violated by experience.” Whether writing about bucolic orchards or shit-contaminated rivers, whether talking of nature or its violation, what these poems take as their subject is the environment. What their critics might say they lack in ingenuity, the Cavaliers more than make up for in ecological prescience.

Drawing inspiration from Jonson’s verse, the Cavaliers have historically (and with much reductionism) been made to contrast with the other dominant tradition of 17th-century English poetry, the metaphysical school influenced by John Donne and including George Herbert, Henry Vaughan, and Thomas Traherne. The distinction owes much to Dr. Johnson (no relation to Ben), who in his 1781 Lives of the Most Eminent English Poets described the Cavalier as being concerned with “sprightliness and dignity,” a verse which “endeavors to be gay,” where the poetry is “liberally supplied with… soft images; for beauty is more easily found.” By contrast, Dr. Johnson saw the metaphysicals as writing poetry where the “most heterogeneous ideas are yoked by violence together; nature and art are ransacked for illustrations, comparisons, and allusions.” Something, perhaps, to be observed in the fact that where the Cavaliers were content to observe nature, the metaphysicals mined the environment for metaphors, as if it were a precious non-renewable resource hidden below the broken crust of the world. Poetry such as Donne’s was defined by the so-called “metaphysical conceit,” the deployment of a metaphor that was surprising and novel—Dr. Johnson’s “heterogeneous ideas… yoked by violence together.”

If the Cavaliers were plain-spoken, the metaphysicals were sophisticated; the former literal and physical, the latter metaphorical and spiritual; the first were backward-looking pastoral conservatives, the second forward-looking aesthetic radicals. Not to mention the coming political and sectarian splits of the English civil wars, with the Cavaliers (true to their courtly name) associated with High Church religion while fighting on behalf of the Royalist cause. John Stubbs writes in Reprobates: The Cavaliers of the English Civil War that “the cavaliers were elegant gentlemen, chivalrous if sometimes dissipated,” though by contrast their political adversaries “the roundheads were religious and social revolutionaries.” Such a difference could presumably be seen in their writing. Cavalier verse lends itself to the almost pagan imagery of a poem like Herrick’s 1648 “The Argument of His Book,” which indeed could be read as an ars poetica for the entire tradition:

I sing of brooks, of blossoms, birds, and bowers:
Of April, May, of June, and July-flowers.
I sing of Maypoles, hock-carts, wassails, wakes,
Of bride-grooms, brides, and of their bridal-cakes.
I write of youth, of love, and have access
By these, to sing of cleanly-wantonness.

Nobody would mistake that sentiment for a Puritan ethos. Yet there is a simplicity to the traditional division; it implies that the Cavaliers lacked in Christianity (though Herrick was a priest), or that the metaphysicals lacked in sensuality—and anybody who has read Donne knows that that’s not the case. Literary historians often still teach the split between those two 17th-century literary traditions as an archetypal and Manichean struggle between abstraction and literalism, metaphysical sophistication and sentimental pastoralism. Despite the crudeness of such a formulation, there is a romanticism in understanding 17th-century poetry as divided between the head of the Puritan and the heart of the Cavalier. Scholar Earl Miner observes in an essay included in the Norton Critical Edition of Ben Jonson and the Cavalier Poets that the Cavalier ideal “reflects many things: a conservative outlook, a response to a social threat, classical recollections, love of a very English way of life, and a new blending of old ideas.”

Dr. Johnson, it should be said, cared not for the metaphysicals; his poetic conservatism and political royalism predisposed him to the Cavaliers, but this is a position that has not been commonly held for a very long time. For the literary modernists of the early 20th-century, the Cavaliers seemed naïve, sentimental, simple, pastoral; poet T.S. Eliot in his 1921 essay on the metaphysicals argued that “civilization comprehends great variety and complexity… The poet must become more and more comprehensive, more allusive, more indirect, in order to force, to dislocate if necessary, language into his meaning.” For Eliot and others like him, the metaphysicals with their ingenious and sophisticated rhetoric, their complexity and abstraction, were a model to be emulated, and the Cavaliers, well, not so much.

The result is that the Cavaliers have seen a critical eclipse over the course of the last 10 decades. The metaphysicals dwelled amongst the stars, but the Cavaliers were content to muck in the dirt, and perhaps to dwell upon the beauty of the rosebuds while they were there. The ideology of the Cavalier was seen as hopelessly archaic when confronting the complexity of modernity. Not for nothing, the Cavaliers—and their royalist political program—are associated with Maypoles and Mummer parades, feast-days and carnival, and all the rest of the lackadaisical accoutrement conflated with a Merry Old England swept aside by Puritanism and modern capitalism. The Cavalier is thus a figure of naïve romanticism, Stubbs writing of how “Everyone can picture him…with his lovelocks, his broad hat, his mantle and bucket-topped boots, the basket-handled rapier at his side, a buskin covering his satin doublet.” It’s true that the Cavaliers were often aristocratic (though not always), often royalist (though some like Marvell equivocated with chameleon-like urgency depending on politics), and that their verse could be plain-spoken and conservative (though deceptively so).

But we need not abandon them because of their embrace of a royalist politics; we need not obscure them because they spoke not to Eliot, or because modernists didn’t find their verse sufficiently complicated. To slur the Cavaliers as “conservative” is to commit a political category mistake; it’s to impose the conditions of the present day onto a period where exact corollaries are impossible to find. Michael Schmidt writes in Lives of the Poets that the Cavaliers mark the “beginnings of a literary tradition that takes pastoral convention into the actual countryside and finds in the harmony between nature and nurture a civilizing theme.” They have at the core of their ethics an understanding of an inseparable connection between nature and humanity that is almost “pagan in attitude.” For all of their reputation as being steadfastly traditionalist, and as much as their enthusiasms for the Caroline regime strike us as reactionary, the Cavaliers’ embrace of nature does have a radical message, standing as it does in opposition to the environmental exploitation that in our current day takes us to the precipice of complete collapse.

Stubbs described how the “cavalier and the puritan are potent archetypes. The puritan upholds the work ethic and the will to give up pleasure, scourging the soul for flaws. In the cavalier we have the individualist, more attuned to the passing moment and in greater touch with his desires.” How could we not recommend them, the Cavaliers, standing as they did in opposition to positivism, Puritanism, and privatization—forces that threaten to destroy our world—at the precise moment when those forces first emerged? The Cavaliers are often dismissed for lacking seriousness, for their enthusiasm for sport and drink, for their indulgence, foppery, and libertinism, but could we not identify such values as precisely those that should be valorized? Could we not see in their celebration of fairs, feasts, festivals, flora, and fauna a denunciation of work, industry, commerce, and all the rest of the alienated soullessness that now threatens us with ecological collapse?

Now is the precise moment to consider the earliest body of eco-poems ever penned—at the moment when the Anthropocene dawned. There is as much of Extinction Rebellion in Cavalier poetry as there is royalism. In embracing nature, they rejected the Puritanism that threatens our world, and in the process, what emerged was a powerful aesthetic of “Anarchopastoralism.” Schmidt writes that a “long time must pass before an anachronism is released back into time,” but if ever there was a moment to embrace the radical ecopoetics of the Cavaliers and their Anarchopastoralism, it’s in our current warm winter of the late Anthropocene. Too often dismissed by the ruthless individualists of modernism as embarrassing throwbacks engaged in Medieval affectations, the Cavaliers actually offered a complex meditation on the relationship of humanity to nature, and how the violation of the latter compels the same for the former.

What the modernists saw as so rightly evocative in metaphysical poetry—the abstraction, the ingenuity, the philosophical sophistication—is arguably the foundation of the very alienation that has so easily separated us from nature; an inadvertent capitulation to the inhuman perspective that treats both people and the environment as mere commodities. This is not to blame the metaphysicals—that would be absurd, and I’m too much in love with the verse of Donne and Herbert to ever countenance such a thing. Besides, I may argue that the Cavaliers are more than just charming, but it’d be a hard claim to count Lovelace the poetic equal of Donne. What the metaphysical poets did accomplish, however, is a certain achievement of abstraction; a product of the age that allowed for mechanistic metaphors for human anatomy, where the French philosopher René Descartes could argue contra all experience that animals are simply little machines. Such a perspective has, not surprisingly, pushed us deeper into the Anthropocene.

To court reductionism once again, it’s the Puritanism that’s so dangerous in the metaphysicals, but we might yet be saved by the paganism in the Cavaliers; we may yet find our proper relationship to what Herrick called the “civil wilderness.” Stubbs writes that the “puritan is more dominant in recent times, present in the astonishing intellectual and physical achievements of the modern era—achieved at crushing human cost.” Might we not find room for the Cavalier then? For theirs is a theology that Starkman described as “under the influences of Neo-Platonism,” a “sensibility…well on its way to secular transcendentalism,” where nature “is divine.”

Perhaps the most crucial, if most subtle, difference between the metaphysicals and the Cavaliers is in their approach towards time, mutability, finality, and death. With a touch of critical eccentricity, I claim that for the metaphysicals the approach to the hereafter is one of memento mori, but for the Cavalier it’s carpe diem. The first refers to the approach that asks a penitent to forever remember while they are alive that one day, they shall be dead; the second is the exhortation that because you shall be dead one day, you must “seize the day” in the present. Memento mori is the aesthetic of spoiled fruit and hourglasses depicted in Dutch vanitas paintings; it’s the winged skull on a Puritan’s grave. Carpe diem, by contrast, is the drained wine-glass, the chicken bone cleared of meat. Not necessarily mutually exclusive positions, but as aesthetics they differ by giving the metaphysicals a gloss of piety, prayer, and death-obsession; the Cavaliers one of a lusty embrace of the moment. Carpe diem is the convention that allows Herrick to implore virgins to “Gather ye rosebuds while ye may, /Old Time is still a-flying;/And this same flower that smiles today/To-morrow will be dying.”

While the poem is often simply read as an injunction to live life to the fullest, Stubbs correctly notes how this entry from Herrick’s 1648 Hesperides is “almost an austere lyric.” Though lacking the apparent sobriety of memento mori, poems read as carpe diem are counter-intuitively more severe. Without claiming that any of the poets across both traditions were anything other than (mostly) orthodox Christians, the differences between a memento mori and a carpe diem perspective are crucial. While it would seem that the latter would encourage us to live a life that could be wasteful, the opposite is actually true. Without the consolations of eternity, we are to make our lives as fit as possible while we’re actually living them, for when “now to the abyss I pass” (as Marvell wrote in 1651), any continued action becomes an impossibility.

What the ecopoetics of the Cavaliers offer us, in this era where (to repurpose a lyric of Carew), “the winters [are] gone, the earth hath lost/Her snow-whited robes, and now no more the frost/Candies the grasse, or casts an ycie creame/Vpon the silver Lake, or Chyrstall streame,” is a type of wisdom. Memento mori may ask us to reflect on the singularity of our own death, yet such a view presupposes a passing moment separating this life from the next, an entrance into eternity where all may be reconciled, all may be answered, all may be saved. But we can’t wait for such a moment of enlightenment, or for saviors other than ourselves to provide entrance into the next scene. Carpe diem, contrary to its reputation, does not necessarily hold such a naive faith. What “Gather ye rosebuds” reminds us of is not just our own mortality, but that of Arcadia as well. It’s an elegy for a dying world. The Cavalier intuits that the garden is not a symbol for anything higher; the garden is all that we have—and it’s good enough. Now our task is to preserve it.

Samuel Pepys Would Have Been Huge on the Internet

At some point during the 31st of May 1669, a learned if bawdy, witty if obscene, educated if scandalous, pious if irreverent rake, raconteur, and libertine who’d recorded over one million words about his life for almost a decade stopped his private scribblings, even though this gentleman named Samuel Pepys would live for more than another three decades. To the best of our knowledge, until that point no Englishman had ever provided such a complete accounting; such a scrupulous interrogation not of the soul, but of a life—a largely secular exercise in tabulating not just wars, but dinners; not just plagues, but nights at the theater. Beginning on January 1st of the first year of the Restoration, Pepys would record everything from when fire immolated the city of London to a particularly enjoyable stew of tripe and mustard. An entry dated March 10th, 1666 confesses that the “truth is, I do indulge myself a little the more in pleasure, knowing that this is the proper age of my life to do it,” and such a position could be the motto of Pepys’s diary. That document doesn’t reach the rhetorical heights of other 17th-century classics—it has not the poetry of William Shakespeare’s famed soliloquy in Hamlet, nor the intellectual sophistication of John Donne’s Holy Sonnets or George Herbert’s The Temple. Rather, what Pepys offered was something different, but no less impressive—a complete map of an individual human life and mind during that defined period of time. As novelist Philip Hensher notes in The Atlantic, “there is no precedent and no parallel for what Pepys actually did.”

The Restoration was inaugurated with King Charles II’s triumphant return to London to avenge his father’s regicide, and Pepys would work as administrator of the Navy in the new regime. This was a fabulous era of theatricality after a decade of dreary Puritan Interregnum; when John Dryden’s and Aphra Behn’s elaborate set-pieces thrilled London audiences, when Isaac Newton’s New Physics transformed the very nature of motion, when wits from John Wilmot to William Wycherley injected English letters with a pump of aphrodisiacs. An era ruled by an aristocracy that Peter Stallybrass and Allon White describe in The Politics and Poetics of Transgression as being “carelessly demonic, nonchalantly outrageous, cynical in the way that only a class which despises its compromises can be cynical,” all of which Pepys was able to document. Pepys observed both the plague and the Great Fire of London, the first of which decimated the capital and the latter of which purified it, and the Second Anglo-Dutch War, when the English traded the tropical paradise of Suriname for a small village named New Amsterdam on the tip of Manhattan Island. With large, soulful brown eyes, jutting lower lip, and curly auburn hair, Pepys cut a swath through London society, from the coffee houses and printers of Fleet Street to the book stalls at St. Paul’s, pushing the Socratic injunction to “Know thyself” to its extreme, the most self-obsessed man in a self-obsessed era. A man aptly described by Emily Cockayne in Hubbub: Filth, Noise & Stench in England as the sort “not to make too much of a fuss about being accidentally spat on by a lady in the theatre—providing the lady was pretty.”

Yet after nine years of privately recording his movement in regal circles, his observation of scientific and technological changes, his attendance at the splendid plays of the Restoration, his intellectual intercourse with the era’s great minds (as well as the other type of intercourse), Pepys made his last entry on that spring evening in 1669. Fearing that he was going blind (he was not going blind), the diarist signed off with “The good God prepare me,” and so after one million words Pepys would fall silent in the record of his own life. It’s a funny thing, which literary anniversaries we choose to commemorate and which we don’t. Certain authors come in for posthumous honoring more than others—Shakespeare, Jane Austen, Charles Dickens. This year sees the 200th birthday of the great, grey bard of Camden, Walt Whitman, and his work will be rightly celebrated with events throughout his cities of New York, Philadelphia, and Washington. Three years ago was the 400th anniversary of Shakespeare’s death, and it received a predictable amount of attention; 2023 will be the 400th year of publication for the first folio of the dramatist’s complete works, and it too will undoubtedly be commemorated with exhibitions, lectures, plays, books, and articles (I’m penciling such retrospectives into my own writing schedule right now). Pepys’s retirement as a diarist, by contrast, seems largely to be passing without much mention, the release of a commemorative coin from the Royal Mint (with which he was associated) notwithstanding.

An irony in this, because Pepys is in many ways a prophet of our own self-obsessed age. Pepys’s fragmentary, digressive, contradictory, messy diary (which was as voluminous in its output as it was disorganized in its execution) foreshadows our own individual self-fashioning. In Pepys we see Facebook; we see Twitter. British actor and web-programmer Phil Gyford sees in the diary a forerunner of blogging, and as part of an online project he spent nine years posting Pepys’s entries in real time. Lisa Schamess, in a delightful essay for Creative Nonfiction, considers both Gyford’s project and the general compatibility of Pepys’s diary with our own digital moment, arguing that his prose itself is “elegant evidence of how lustily the 17th century’s most famous diarist might have embraced the internet, tapping up its opulent charms deep into the night.” With an admirable eye towards close reading and comparison, Schamess reads through several of Pepys’s entries to demonstrate how in their half-formation, their digressions, and their exhibitionism, they’re reminiscent of Facebook posts. Schamess writes that Pepys’s “sharp eye and acid wit would be perfect for the restless internet, with its thin, glowing scrim between life and audience, its illusion of anonymity and controllable intimacy.”

Much is convincing in Schamess’s observation, yet it’s undeniable that even if his prose would be copacetic with the internet’s “illusion of anonymity and controllable intimacy,” Pepys’s actual writing was, at least while he was alive, completely private. Scholarly arguments abound about just how private Pepys expected his writings to ultimately be, and yet Gyford’s neat conceit aside, the historical diarist was not hitting “Post” after each one of his entries. Hewing to a more traditional interpretation of Pepys that sees him as a man of the late, late Renaissance, content to exist in wonder and curiosity, his editor Richard Le Gallienne claimed that for Pepys “It is not so much himself that interests him, more merely the things that happen to himself, but the people about him and the things that are happening to everybody, all the time, to his nation as well as to his acquaintance.” Schamess’s and Gyford’s arguments about “social media Pepys” would be anachronistic coming from Pepys’s editor, a man old enough to have had an affair with Oscar Wilde, but perhaps Le Gallienne would have concurred with them had he known what the internet was. Regardless, even in Le Gallienne’s reading of the man’s character, there is something undeniably modern (or post-modern) in his vociferous appetites, his manner of absorbing, repackaging, and projecting his experience. In Pepys’s diary, there is an equivalence between his mind and the world, and what could be more contemporary than that, whether on paper or in 140 characters?

Written in a code-like short-hand developed in the 16th century, Pepys’s diary wouldn’t see publication until his writing was deciphered in the early 19th century; his previous reputation rested entirely on his role in civil government, ranging from membership in Parliament and being administrator of the Royal Navy to a position on the Tangier Council during the short years that the English governed a Moroccan colony. Pepys’s great colleague in self-introspection (or self-obsession), the Frenchman Michel de Montaigne, may have invented the essay form more than a century earlier, but even he couldn’t match the Englishman for sheer magnificent, glorious, transcendent narcissism. The diary is what his name shall be inextricably linked with, not necessarily for the quality of the prose (though Pepys is often a fine stylist), but rather for the raw, honest, unguarded reflection on a sheer multitude of subjects ranging from politics to theater to medicine to sex. One of his 19th-century readers, the Scottish novelist Robert Louis Stevenson, writes that Pepys’s style “may be ungrammatical, it may be inelegant, it may be one tissue of mistakes, but it cannot be devoid of merit.” With what seems like faint praise, Stevenson clarifies that the worthiness of Pepys lay in a style that is “indefatigably lively, telling and picturesque…[dealing] with the whole matter of a life and yet is rarely wearisome.”

At turns anxious and perverse, aroused and guilty, introspective and arrogant, horny and holy, Pepys’s diary was the most complete record of the Restoration era, and of the vagaries of a human mind in all of its splendid contradiction. Tolerant and humane, if skeptical, in his Anglicanism, Pepys was an often unconvinced enthusiast of church sermons, writing on January 19, 1661: “To church in the morning, where Mr. Mills preached upon Christ’s being offered up for our sins, and there proving the equity with what justice would lay our sins upon his Son.” Yet he was also the author who was able to write of his wife discovering Pepys’s dalliance with her maid Deb Willet as “coming up suddenly, did find me embracing the girl [with] my [hand under] her coats; and indeed I was with my [hand] in her cunny,” his indiscretions characteristically hidden in a hodgepodge of ellipses. Elsewhere he deploys a strange pidgin of English, Spanish, and French to mask his pornographic obsessions—idiosyncratic ciphers that, if one can read his shorthand, take mere seconds to crack. That’s always been the enigma of Pepys, a man who spent so much time writing his apparently private diary, who took only the most marginal of pains to cloak his indiscretions, and yet had the entire project bound in six volumes and categorized in his library’s bibliography with the apparent foreknowledge that it’d inevitably see posthumous publication.

Pepys is the virtual font of an age for those of us who are weirdly enmeshed in the 17th century, attracted to a melancholic era of stunning contradiction, which White and Stallybrass describe as being both “classical and grotesque, both regal and foolish, high and low.” To read Pepys is to inhabit his world, and while among the great prose stylists of that century he lacks the metaphysical acumen of Donne, the philosophical flights of Thomas Browne, or the psychological insight of Robert Burton, Pepys makes up for those deficiencies by simply being there—day after day, for the better part of the Restoration’s first decade. Consider the eeriness of his first-hand account of the plague which leveled London in 1665, forcing the court to rusticate themselves as the buboes spread through the capital:

This day, much against my will, I did in Drury Lane see two or three houses marked with a red cross upon the doors, and “Lord Have Mercy upon Us” writ there – which was a sad sight to me, being the first of the kind… that I ever saw. It put me into an ill conception of myself and my smell, so that I was forced to buy some roll tobacco to smell and chew, which took away the apprehension.

Such a fusion of the horrific and the prosaic conveys an immediacy that is still present three-and-a-half centuries later, a sense of “This must have been what it was like.” Or consider his account of the Great Fire of London from that satanic year of 1666, which remains haunting in its specificity, the small details of tragedy illuminating the experience more than maps and demographics ever could hope to:

Everybody endeavouring to remove their goods, and flinging into the river or bringing them into lighters that lay off; poor people staying in their houses as long as till the very fire touched them, and then running into boats, or clambering from one pair of stairs by the water-side to another. And among other things, the poor pigeons, I perceive, were loth to leave their houses, but hovered about the windows and balconys till they were, some of them burned, their wings, and fell down.

Hensher observes that from a “seventeenth-century perspective, everything here is a deplorable breach of literary manners: the undignified interest in inessentials, the failure to assert any kind of moral about people’s scrabbling after their possessions, and the eccentric, unpolished syntax.” And yet Pepys’s is a novelistic sensibility, apt more for Dickens or Gustave Flaubert than for his own century; an empathy that understands that there is infinitely more to be conveyed in the image of singed pigeons than the sophistries of theodicy that impose false meaning on such tragedy.

In making record of the 17th century, there is certainly something innately attractive in gravitating towards those particular dates that loom large, but what’s most evocative in Pepys are the personal details, the mundanities which, by virtue of his having recorded them, now belong to the annals of eternity. On April 4th, 1663 he makes record of dinner “most neatly dressed by our own only maid,” in which Pepys and his guest feasted upon a “fricassee of rabbits and chickens, a leg of mutton boiled, three carps in a dish, a great dish of a side of lamb, a dish of roasted pigeons, a dish of four lobsters, three tarts, a lamprey pie (a most rare pie), a dish of anchovies, good wine of several sorts, and all things mighty noble and to my great content.” There are, it should be said, numerous entries of this sort. Think of it as the Restoration equivalent of an Instagrammed food picture. He’s less charitable in his theater recommendations; writing on September 29th, 1662 that he went to the “King’s Theatre, where we saw Midsummer’s Night’s Dream, which I had never seen before, nor shall ever again, for it is the most insipid ridiculous play that ever I saw in my life.” But what’s Shakespeare next to a lamprey pie?

Or consider Pepys’s harrowing reminiscence of a surgical procedure, nearly two centuries before anesthesia, which removed a stone the size of a tennis ball from his bladder. Medical historian Roy Porter writes in Blood and Guts: A Short History of Medicine that “invasive surgery was limited in scope; lengthy operations, or ones demanding great precision, were out of the question.” Nevertheless, “A brave man—Samuel Pepys was one—might risk having a bladder stone removed surgically.” We should be thankful that the physician was the rare 17th-century doctor who saw fit to wash his hands before venturing tasks urological, for had there been a bit more grime upon his digits when he performed surgery on Pepys’s peep, we’d never have had the diary to read. Pepys had his anatomical memento mounted as a trophy, writing March 26th, 1660 that “This day it is two years since it pleased God that I was cut of the stone…and did resolve while I live to keep it a festival.” Supposedly Pepys would plunk the stone into glasses of wine.

Then of course there is all of the sex in Pepys, with squeamish Victorian editors deleting whole entries where the diarist both luxuriated in and punished himself over perversions both imagined and enacted. Pepys enumerated the women, from aristocrats to maids, wives and daughters of friends and colleagues, whom he fucked; women united in the status of not being his wife. Obsessed with more than his own erotic adventures, Pepys spent ample time hypocritically chastising Charles II’s notorious appetites while fantasizing about the monarch’s mistresses, from the actress (and “Protestant Whore”) Nell Gwynne to the aristocratic Barbara Villiers, of whom Pepys claims to have had a sex dream that was “the best that ever was dreamt.” Still substantially less problematic than the entry from May 21st, 1662 in which Pepys writes that he came across Villiers’s underwear being hung out to dry in the palace at Whitehall’s privy garden, being “the finest smocks and linnen petticoats…laced with rich lace at the bottom, that ever I saw; and did me good to look upon them.”

Cockayne writes that Pepys was “often led by his libido,” and indeed there is something disquieting about the author spending all of this time lusting after scullery maids and servants, duchesses and actresses. Critic Warren Chernaik writes in Sexual Freedom in Restoration Literature that the infamously scurrilous theater of the period was “fundamentally conservative in its sexual attitudes.” Read all of those lustily guilty passages of Pepys, and you can get a sense of the fundamentally reactionary nature of the diarist’s priapic concerns, where prurience and puritanism are twinned pairs. Chernaik writes that “With nothing to rebel against, no taboos to be transgressed, blasphemy would lose its power to shock. It can be argued that society creates its rebels,” so that far from an exercise in liberation, Pepys’s orgasmic encounters were a type of prison, with nobody trapped in the neurotic cycle of release and guilt more than the author himself. Evelyn Lord, in The Hell-Fire Clubs: Sex, Satanism and Secret Societies, writes about Pepys encountering, while perusing book-stalls with his wife, a lewd French volume entitled The School of Venus (infamous for its illustrations of society women purchasing prodigiously endowed dildos). Lord writes that after expressing disgust at the book, Pepys “put it back on the shelf. However, he was unable to resist it, and eventually went back and purchased it in plain binding, took it home, read it and then burned it.” One imagines that Pepys perhaps had more onanistic concerns with the book than even he would put into record.

Denouncing The School of Venus to his wife, while later purchasing it in plain paper—was Pepys a hypocrite? Of course he was a hypocrite. Did he feel guilt over his indiscretion? The ashes of his smut should leave little doubt that he did. There is something modern in that position, the enigma of the neurotic. Pepys is our contemporary in that he dwells in a certain negative capability, a fractured ego strung as it is between the public and the personal, the spectacle of accountability and the private web browser. In that manner, I see less of Twitter and Facebook in Pepys, less of the carefully manicured self-creation implied by our collective digital subterfuge, and more of a different post-modern literary genre—Samuel Pepys was the first writer of autofiction. That form is defined by the presence of a narrator who is largely the same as the author, but who dwells in the massive complexity of the individual, including all that is hidden (perhaps even from the author themselves). The true inheritors of Pepys’s ethos aren’t all of us clicking away on Twitter, nor the vulgarities of those writing status updates while sitting on the toilet. Rather, it’s those obsessive writers cataloging the minutiae of their lives: the poet Ben Lerner’s 10:04, Sheila Heti’s How Should a Person Be?, Teju Cole’s Open City, and especially the Norwegian completist Karl Ove Knausgård’s six-volume My Struggle.

In that 3,600-page door-stopper, Knausgård contemplates both his conflicted relationship with his father and the breakfasts he prepares for his children—with as much detail as Pepys once did. Knausgård writes of days that were “jam-packed with meaning, when each step opened a new opportunity, and when every opportunity filled me to the brim.” The task of My Struggle was for Knausgård to write deliberately and simply, to dwell in the prosaicness of detail. By comparison, Hensher describes the minutiae of Pepys’s diary as being such that most of its entries couldn’t be “considered important in any obvious way; each has the quality, instead, of being interesting, which is much stranger and harder to achieve. We know about the socially aspiring dish of tripe and the randy morning because the man wrote it down.” That is the cracked wisdom shared by both Knausgård and Pepys, the understanding that we don’t write about things because they’re important, but rather that things become important because we write about them. Jonathon Sturgeon claims in Flavorwire that the best description of the autofictional novel is a book where “the oeuvre is the soul. The artist’s body of work…has come to replace the religious ideal of the immortal spirit.” If that’s true, I’d venture that Pepys’s profane, grubby, earthy, secular diary is the first autofictional novel, in all of its over-determined detail, with all of its insignificant meanings, and especially with all of its contradictions of spirit, so very human in their deployment.

Writing of Pepys shortly after the diary had been rediscovered and published in the 19th century, Stevenson provided a gloss on Pepys’s protean character: “We all, whether we write or speak, must somewhat drape ourselves when we address our fellows; at a given moment we apprehend our character and acts by some particular side; we are merry with one, grave with another, as befits the nature and demands of the relation.” Such a mercurial nature is our common birthright, and in the sloppy, imperfect, anomalous medium of a diary we can see a certain process made naked. A polished essay is like the woman or man dressed formally for a job interview, clothing dry-cleaned and hair perfectly coifed—the individual self-fashioned into the most presentable of versions. Diaries are how we actually are more or less all of the time—messy, confused, and impolite. Le Gallienne argues that “The record was a secret between himself and his own soul, not forgetting his God… whom he invokes on many curious occasions.” Written for the Lord and for posterity, Pepys’s diary is a record of the soul before editing and revision, which is to say a record of the soul as it actually is. No deletions, no rearrangements, no strike-throughs, but rather a manuscript as a man, all error and contradiction—and the more perfect for it.

The Sound of Silence: Have We Forgotten How to Be Quiet?

The entire Western front went silent at exactly 11 a.m. on November 11, 1918. An armistice had been reached earlier that morning between the Allies and Germany, so it was agreed that hostilities would cease at that precise hour. Last year, audiologists at London’s Imperial War Museum used seismic data that had been gathered in the trenches as part of the war effort to recreate that moment. Now it’s possible to listen to what American troops along the River Moselle heard: repeated blasts of artillery fire for two minutes, a final explosion, and then at 11 a few gentle bird chirps in the autumn morning as everything falls quiet. An eerie, stunning, beautiful sound. The most sacred thing you will ever hear.

Everything and nothing is answered in silence as deep as that. Following such barbarity and horror, the very earth was now enveloped in a deep blanketing quiet. Novelist Kurt Vonnegut, who had experienced the nightmare of war when he was a POW in Dresden during the Allied bombing campaign of World War II, said of the Armistice that “I have talked to old men who were on battlefields during that minute. They have told me in one way or another that the sudden silence was the voice of God.” A moving and strange wisdom that understands that when the Lord speaks it may not be through thunder-clap and lightning, but rather in the final blast of artillery and a few bird chirps across a scarred field. If dissonance is the question, then silence may be the only answer.

Our present world is not quite as loud as it had been on the Western front (yet), but still we live mired in a never-ending cacophony. A century after God spoke unto the trenches of the Great War, the volume is getting louder and louder. Not artillery shells, but the din of chatter, the thrum of status updates, the vibration of push notifications. With the omnipresence of the smartphone, we’re continually in multiple conversations that often don’t deserve our attention. The 24-hour news cycle forces us to formulate hot-takes and positions on every conceivable event, from issues of political importance to fleeting pop culture phenomena, and bellicose bloviating announces itself 140 characters at a time from the top on down. If Vonnegut is right that God spoke a century ago, then our current age is too loud to make that voice out today.

So loud is our current age, the echo of social media static resounding in our ears, that the French historian Alain Corbin writes that when it comes to silence “We have almost forgotten what it is.” If the industrial revolution heralded a landscape where silence was eliminated, the peace of town and country punctuated by the clank of machinery, then the digital revolution has shifted that very noise into our heads as well. Perhaps that’s why there has been a recent spike in interest about silence in titles like Erling Kagge’s Silence: In the Age of Noise or Jane Brox’s Silence: A Social History of One of the Least Understood Elements of Our Lives. Like Corbin in his A History of Silence, we’re all looking for a little peace and quiet these days.

Philosophy is traditionally conveyed through debate and argument. A reliance on disputation and dialogue, syllogism and seminar, would seem to proffer a noisy discipline. Yet philosophy can also be defined by a peaceful undercurrent, a metaphysics of silence. Corbin writes that silence is the “precondition for contemplation, for introspection, for meditation, for prayer, for reverie and for creation.” Silence allows us the intentionality required of thought, the presentness needed for rumination. If one model of philosophy takes place in the noisy Agora of ancient Athens, long Socratic dialogues among the sellers of lemons and olives, then another way of practicing philosophy emphasizes the quiet, the construction of a space in which thought can grow unhampered, where answers may be found among the still. Such quiet is its own philosophical method, a type of thought where answers are not generated through dialectic, but from within the soul of the thinker herself.

Something instrumental in this, where silence is a means to an end. But such an approach still has noise as its ultimate goal, in the form of declared conclusions. There are more radical ways of parsing the significance of silence. Stillness can be the space that allows us to find future answers, but there is a wisdom that sees silence itself as an answer. In a Western tradition that very much sees logic as a goal unto itself, where truth is a matter of positive statements about objective reality, silence is its own anarchic technique, its own strange approach to that which we cannot speak. Silence can be both process and result.

From the sixth century before the Common Era, when Lao-Tzu claimed that “The spoken Tao is not the eternal Tao,” until Ludwig Wittgenstein’s 1921 conclusion that “Whereof one cannot speak, thereof one must be silent,” silence has been a philosophical counter-tradition. Wittgenstein argued that if all of the questions of science, logic, and mathematics were somehow to be answered, we’d still have to be silent about that which is most profound. Issues of metaphysics would be passed over in a deep silence, not because they’re insignificant, but rather because they’re the most important. Perhaps it’s pertinent that the Austrian philosopher organized these thoughts while fighting in the loud trenches of World War I. Though separated by millennia, both thinkers believed that when the veil of reality is peeled back what mutely announces itself is a profound and deep silence.

Such an approach shares much with what theologians call the apophatic, from the Greek “to deny,” an understanding that when it comes to the approximation of ultimate things, it’s sometimes more accurate to say what something isn’t. Rather than circumscribe God with the limitations of mere words, we should pass eternity over in silence. Such a perspective holds that there can be more truth in uttering nothing than in a literal description. As the 9th-century Irish monk Johannes Scotus Eriugena wrote, “We do not know what God is.” Anticipating the monk’s contention, Jesus’s response to Pilate’s question of his origins was such that Christ “gave him no answer.” Scripture itself conveys that God’s silence is often more sacred than his declarations.

More than mere humility, apophatic theology is a profound approach that conveys that God isn’t just silent, but that in some ways God is silence. What would it mean, in our age of endless distraction and deafening noise, to inculcate silence not just for a bit of peace, but as an answer itself? What would it mean to engender within our lives an apophatic sensibility? To still the ear and mind long enough so that we could, as Vonnegut said, “remember when God spoke clearly to mankind?” Not in a booming voice, but rather in the sublime silence that permeates the emptiness between atoms and the space between stars, the very quiet where creation is born.

Missives from Another World: Literature of Parallel Universes

“He believed in an infinite series of times, in a growing, dizzying net of divergent, convergent and parallel times.”—Jorge Luis Borges, The Garden of Forking Paths (1942)

“And you may tell yourself, ‘This is not my beautiful house’/And you may tell yourself, ‘This is not my beautiful wife.’”—Talking Heads, “Once in a Lifetime” (1980)

1. By the release of their 17th album, Everyday Chemistry, in 1984, The Beatles had been wandering for years in a musical wilderness. Their last cohesive venture had been 1972’s Ultraviolet Catastrophe, but the ’70s were mostly unkind to the Beatles—an output composed of two cover albums of musicians like Ben E. King and Elvis Presley, rightly derided by critics as filler. Meanwhile, The Rolling Stones released their brilliant final album before Keith Richards’s death, the disco-inflected 1978 Some Girls, which marked them as the last greats of the British Invasion. By contrast, The Beatles’s Master Class and Master Class II were recorded separately and spliced together by engineers at Apple Studios; a two-star Rolling Stone review from 1977 arguing that “Lennon and McCartney don’t even appear in the same room with each other. Their new music is a cynical ploy by a band for whom it would have perhaps been better to have divorced sometime around Abbey Road or Let it Be.”

Maybe it was the attempt on John Lennon’s life in 1980, or the newfound optimism following the election of Walter Mondale, but by the time the Fab Five properly reunited to record Everyday Chemistry there was a rediscovered vitality. All of that engineering work from the last two albums actually served them well as they reentered the studio; true to its title with its connotations of combination and separation, catalyst and reaction, Everyday Chemistry would borrow from the digital manipulations of Krautrock bands like Kraftwerk, and the synthesizer-heavy experimentation of Talking Heads. The Beatles may have missed punk, but they weren’t going to miss New Wave.

With a nod to the Beatlemania of two decades before, Lennon and Paul McCartney sampled their own past songs, now overlaid with flourishes of electronic music, the album sounding like a guitar-heavy version of David Byrne and Brian Eno’s avant-garde classic My Life in the Bush of Ghosts. It was a formula that would define this reconstituted version of the band, now committed to digital production, and whose influence can be traced from Jay Z’s Lennon-produced The Grey Album to the tracks George Harrison played with James Mercer in Broken Bells.

By asking Eno to produce their new album, The Beatles signaled that they were once again interested in producing pop that didn’t just pander. The band had always been pioneers in sound effects, but the modulations on Revolver, Sergeant Pepper’s Lonely Hearts Club Band, and Ultraviolet Catastrophe were a decidedly lo-fi affair; by the era of the Macintosh, the Beatles had discovered the computer. Speaking to Greil Marcus in 1998, Ringo Starr said, “You know, we were always more than anything a couple of kids, but John was always into gizmos, and something about that box got his attention, still does.” Billy Preston, officially the band’s pianist since Ultraviolet Catastrophe, was also a zealous convert to digital technology. In Marcus’s Won’t Get Fooled Again: Constructing Classic Rock, Preston told the critic that “They were a bar band, right? Long before I met them, but I was a boogie-woogie guy too, so it was always copacetic. You wouldn’t think we’d necessarily dig all that space stuff, but I think the band got new life with that album.” From the nostalgic haziness of the opening track “Four Guys” to the idiosyncratic closing of “Mr. Gator’s Swamp Jamboree,” Everyday Chemistry was a strange, beautiful, and triumphant reemergence of The Beatles.

2. Such a history may seem unusual to you, because undoubtedly you are a citizen of the same dimension that I am. Unless you’re a brave chrononaut who has somehow twisted the strictures of ontological reality, who has ruptured the space-time continuum and easily slides between parallel universes, your Beatles back-catalog must look exactly the same as mine. And yet Everyday Chemistry exists as a ghostly artifact in our reality, a digital spirit uploaded to the Internet in 2009 by some creative weirdo, who cobbled together an imagined Beatles album from the fragments of their solo careers. A bit of Wings here, some of the Plastic Ono Band there, samplings from All Things Must Pass and Sentimental Journey, edited together into a masterful version of what could have been.

Most of my narrative above is my own riffing, but claims that the album is from a parallel universe are part of the mythmaking that makes listening to the record so eerie. “Now this is where the story becomes slightly more unbelievable,” the pseudonymous “discoverer” James Richards writes. Everyday Chemistry is a seamlessly edited mashup done in the manner of Girl Talk or Danger Mouse, but its ingenious creator made a parallel-universe origin the central conceit of Everyday Chemistry. Richards claims that he came away with a tape of the album after he fell into a vortex in the California desert and was gifted Everyday Chemistry by an inter-dimensional Beatles fan.

At Medium, John Kerrison jokes that “inter-dimensional travel probably isn’t the exact truth” behind Everyday Chemistry, even if the album is “actually pretty decent.” Kerrison finds that whoever created the album is not going to reveal their identity anytime soon. Unless of course it actually is from a parallel universe. While I mostly think that that’s probably not the truth, I’ll admit that anytime I listen to Everyday Chemistry I get a little charged frisson, a spooky spark up my spine. It’s true that Everyday Chemistry is kind of good, and it’s also true that part of me wants to believe. Listening to the album is like finding a red rock from Mars framed by white snow in your yard—a disquieting interjection from an alien world into the mundanity of our lives.

Part of what strikes me as so evocative about this meme that mixes science fiction, urban legend, and rock ‘n’ roll hagiography is that we’re not just reading about a parallel universe; the evidence of its existence is listenable right now. Tales of parallel universes—with their evocation of “What if our world were different from how it is right now?”—are the natural concern of all fiction. All literature imagines alternate worlds. But the parallel universe story makes such a concern explicit, makes it obvious. Such narratives rely upon the cognitive ability to not accept the current state of things, to conjecture and wonder at the possibility that our lives could be different from how we experience them in the present.

Such stories are surprisingly antique, as in Livy’s History of Rome, written a century before the Common Era, in which he conjectured about “What would have been the results for Rome if she had been engaged in a war with Alexander?” Even earlier than Livy, the Greek father of history Herodotus hypothesized about what the implications would have been had there been a Persian victory at Marathon. Such questions are built into how women and men experience our lives. Everyone asks themselves how things would be different had different choices been made—what if you’d moved to Milwaukee instead of Philly, majored in art history rather than finance, asked Rob out for a date instead of Phil?

Alternate history is that narrative writ large. Such stories have been told for a long time. In the 11th century there was Peter Damian’s De Divina Omnipotentia, which imagined a reality where Romulus and Remus had never been suckled by a she-wolf and the Republic was never founded. In 1490, Joanot Martorell’s romance Tirant lo Blanch, perhaps the greatest work ever written in the Iberian Romance language of Valencian, envisioned a conquering errant knight who recaptures Constantinople from the Ottomans. Medieval Europeans were traumatized as the cross was toppled from the dome of the Hagia Sophia, but in Martorell’s imagination a Brittany-born knight is gracious enough so that “A few days after he was made emperor he had the Moorish sultan and the Grand Turk released from prison.” What followed was a “peace and a truce for one hundred one years,” his former enemies “so content that they said they would come to his aid against the entire world.” Written only 37 years after Mehmed II’s sacking of Orthodoxy’s capital, Tirant lo Blanch presents a Christian poet playing out a desired reality different from the one in which he actually found himself.

In the 19th century, the American writer Nathaniel Hawthorne did something similar, albeit for different ideological aims. His overlooked “P.’s Correspondence,” from his 1846 Mosses from an Old Manse, is arguably the first alternate history story written in English: an epistolary narrative in which the titular character, designated by only his first initial, writes about all the still-living Romantic luminaries he encounters in a parallel version of Victorian London. Lord Byron has become a corpulent, gouty, conservative killjoy; Percy Shelley has rejected radical atheism for a staunch commitment to the Church of England; Napoleon Bonaparte skulks the streets of London, embarrassed and vanquished while kept guard by two police officers; and John Keats has lived into a wise seniority where he alone seems to hold to the old Romantic faith that so animated and inspired Hawthorne. P. is a character for whom the “past and present are jumbled together in his mind in a manner often productive of curious results,” a description of alternate history in general. Hawthorne’s is a message about the risks of counter-revolution, but also an encomium for the utopian light exemplified by Keats, for whom there remains so “deep and tender a spirit of humanity.”

Alternate history’s tone is often melancholic, if not dystopian: an exercise in conceding that this world might not be great, but imagining how much worse it could be. Think of authors like Philip K. Dick in The Man in the High Castle or Robert Harris in Fatherland, both exploring the common trope of imagining a different outcome to the Second World War. Such novels present Adolf Hitler running roughshod over the entire globe, crossing the English Channel and ultimately the Atlantic. Such narratives highlight the ways in which the evils of fascism haven’t been as vanquished as was hoped, but they also serve as a cautionary parable about what was narrowly averted. In his own indomitable amphetamine-and-psychosis kind of way, Dick expresses something fundamental about the interrogative that defines alternate history, not the “What?” but the “What if?” He asks “Can anyone alter fate?…our lives, our world, hanging on it.”

Such novels often trade in the horror of an Axis victory or the catastrophe of Pickett’s Charge breaking through the Union line at the Confederacy’s high-water mark in that quiet, hilly field in Pennsylvania. Some of the most popular alternate history depicts a dark and dystopian reality in which polished Nazi jack-boots stomp across muddy English puddles and Confederate generals hang their ugly flag from the dome of the Capitol building; where an American Kristallnacht rages across the Midwest, or emancipation never happens. Gavriel Rosenfeld in his study The World Hitler Never Made: Alternate History and the Memory of Nazism argues that such stories serve a solemn purpose, that the genre has a “unique ability to provide insights into the dynamics of remembrance.” Rosenfeld argues that alternate history, far from offering impious or prurient fascination with evil, memorializes those regimes’ victims, generating imaginative empathy across the boundaries of history and between the forks of branching universes.

Philip Roth in The Plot Against America and Michael Chabon in The Yiddish Policemen’s Union imagine and explore richly textured versions of the 20th century. With eerie prescience, Roth’s 2004 novel reimagines the genre by focusing on the personal experience of the author himself, interpolating his own childhood biography into a larger narrative about the rise of a nativist, racist, sexist, antisemitic American fascism facilitated through the machinations of a foreign authoritarian government. Chabon’s novel is set in a parallel universe a few stops over, but examines the traumas of our past century with a similar eye towards the power of the counterfactual, building an incredibly detailed alternate reality in which Sitka, Alaska, is a massive metropolis composed of Jewish refugees from Europe. Such is the confused potentiality that defines our lives, both collective and otherwise; an apt description of our shared predicament could be appropriated from Chabon’s character Meyer Landsman: “He didn’t want to be what he wasn’t, he didn’t know how to be what he was.”

For Rosenfeld, the form “resists easy classification. It transcends traditional cultural categories, being simultaneously a sub-field of history, a sub-genre of science fiction, and a mode of expression that can easily assume literary, cinematic, dramatic or analytical forms.” More than just that, I’d suggest that these narratives say something fundamental about how we tell stories, where contradiction and the counter-factual vie in our understanding, the fog from parallel universes just visible at the corners of our sight, fingerprints from lives never lived smudged across all of those precious things which we hold onto.

While long the purview of geeky enthusiasts, with their multiverses and retconning, alternate history has been embraced by academic historians for whom such conjecture has traditionally been antithetical to the sober plodding of their discipline. In history no experiment can ever be replicated, for it is we who live in said experiment—which is forever ongoing. Temporality and causality remain a tricky metaphysical affair, and it’s hard to say how history would have turned out if particular events had happened differently. Nonetheless, true to its ancient origins in the conjectures of Herodotus and Livy, some scholars engage in “counterfactual history,” a variety of Gedankenexperiment that plays the tape backwards.

Historian Niall Ferguson has advocated for counterfactuals, arguing that they demonstrate that history doesn’t necessarily follow any predetermined course. Writing in his edited collection Virtual History: Alternatives and Counterfactuals, Ferguson claims that the “past—like real life chess, or indeed any other game—is different; it does not have a predetermined end. There is no author, divine or otherwise; only characters, and (unlike in a game) a great deal too many of them.”

Seriously considering counterfactual history as a means of historiographical analysis arguably goes back to John Squire’s 1931 anthology If It Had Happened Otherwise. That volume included contributions by Hilaire Belloc, who, true to his monarchist sympathies, imagines a very much non-decapitated Louis XVI returning to the Bourbon throne; his friend G.K. Chesterton enumerating the details of a marriage between Don John of Austria and Mary Queen of Scots; and none other than future prime minister Winston Churchill writing a doubly recursive alternate history entitled “If Lee Had Not Won the Battle of Gettysburg,” narrated from the perspective of a historian in a parallel universe in which the Confederacy was victorious, who in turn imagines a version of our own history.

Churchill concludes the account with his desired reunification of the English-speaking peoples, a massive British, Yankee, and Southern empire stopping the Teutonic menace during the Great War. As with so much of Lost Cause fantasy, especially in the realm of alternate history (including Newt Gingrich’s atrocious Gettysburg: A Novel of the Civil War—yes, that Newt Gingrich), Churchill’s was a pernicious revisionism, obstinate fantasizing that posits the Civil War as being about something other than slavery. Churchill’s imaginary Robert E. Lee simply abolishes slavery upon the conclusion of the war, even while the historical general fought in defense of the continuation and expansion of that wicked institution. Yet ever the Victorian Tory, Churchill can’t help but extol a generalized chivalry, with something of his ideal character being implicit in his description of Lee’s march into Washington, D.C. and Abraham Lincoln’s rapid abandonment of the capital. The president had “preserved the poise and dignity of a nation…He was never greater than in the hour of fatal defeat.” In counterfactual history, Churchill had been cosplaying dramatic steadfastness in the face of invasion before he’d actually have to do it.

Counterfactuals raise the question of where exactly these parallel universes are supposed to be, these uncannily familiar storylines that seem as if they inhabit the space at the edge of our vision for a duration as long as an eye-blink. Like a dream where unfamiliar rooms are discovered in one’s own house, the alternate history has a spooky quality to it, and the mere existence of such conjecture forces us to confront profound metaphysical questions about determinism and free will, agency and the arc of history. Did you really have a choice on whether or not you would move to Philly or Milwaukee? Was art history ever a possibility? Maybe Phil was always going to be your date.

The frustration of the counterfactual must always be that since history is unrepeatable, not only is it impossible to know how things would be altered, but we can’t even tell if they could be. How can one know what the impact of any one event may be, what the implications are for something happening slightly differently at Marathon, or at Lepanto, or at Culloden, or Yorktown? All those butterflies fluttering their wings, and so on. Maybe Voltaire’s Dr. Pangloss in Candide is right, maybe this really is the best of all possible worlds, though five minutes on Twitter should make one despair at such optimistic bromides. Which is in part why alternate history is so evocative—it’s the alternate, stupid. James Richards found that other world easily; apparently there is a wormhole in the California desert that takes you to some parallel universe where scores of Beatles albums are available. But for all of those who don’t have access to the eternal jukebox, where exactly are these parallel realities supposed to be?

Quantum mechanics, the discipline that explains objects at the level of subatomic particles, has long produced surreal conclusions. Werner Heisenberg’s Uncertainty Principle holds that it’s impossible to have complete knowledge of both the location and the momentum of a particle; Louis de Broglie’s wave-particle duality explains subatomic motion with the simultaneous mechanics of both particle and wave; and Erwin Schrödinger’s fabled cat, who is simultaneously dead and alive, was a means of demonstrating the paradoxical nature of quantum superposition, whereby an atom can be both decayed and not at the same time. The so-called Copenhagen Interpretation of quantum mechanics is comfortable with such paradoxes, trading in probabilities and the faith that observation is often that which makes something so. At the center of the Copenhagen Interpretation is how we are to interpret what physicists call the “collapse of the wave-function,” the moment at which an observation is made and something is measured as either a wave or a particle, decayed or not. For advocates of the orthodox Copenhagen Interpretation, the wave-function exists in blissful indeterminacy until measured, being both one thing and the other until we collapse it.
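For readers who want the notation behind Heisenberg’s claim, the uncertainty principle is usually written as an inequality between the spread in a particle’s position and the spread in its momentum; the expression below is the standard textbook form, offered only as a gloss on the paragraph above rather than anything drawn from the sources quoted here:

$\Delta x \, \Delta p \geq \dfrac{\hbar}{2}$

However small the reduced Planck constant $\hbar$ may be, the product of the two uncertainties can never reach zero, which is all that the “incomplete knowledge” amounts to.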

For a Pentagon-employed physicist in 1957 named Hugh Everett, such uncertainty was unacceptable. That a particle could be both decayed and not at the same time was nonsensical, a violation of that fundamental logical axiom of non-contradiction. If Everett thought that the Copenhagen Interpretation was bollocks, then he had no misgivings about parallel universes, for the physicist would argue that rather than something being both one thing and its opposite at the same time, it’s actually correct to surmise that the universe has split into two branching forks. In Schrödinger’s fabled thought-experiment, a very much not sub-atomic cat is imprisoned in some sadist’s box, where the release of a poison gas is connected to whether an individual radioactive atomic nucleus has decayed or not. According to the Copenhagen Interpretation, that cat is somehow dead and alive since the nucleus is under the purview of quantum law, and can exist in indeterminacy as both decayed and not until it is observed and the wave-function collapses. Everett had a more parsimonious conclusion—in one universe the cat was purring and licking his paws, and in an unlucky dimension right next door all four furry legs were rigid and straight-up in the air. No weirder than the Copenhagen Interpretation, and maybe less so. Writing of Everett’s solution, the physicist David Deutsch in his book The Fabric of Reality claims that “Our best theories are not only truer than common sense, they make more sense than common sense.”

Maybe mathematically that’s the case, but I still want to know where those other universes are. Whether in wardrobe or wormhole, it feels like Narnia should be accessible somewhere other than just in the equations of quantum theorists. For the myriad people who congregate in the more eccentric corners of the labyrinth that is the Internet, the answer to where those gardens of forking paths can be found is elementary—we’re all from them originally. Those who subscribe to something called the “Mandela Effect” believe they’re originally from another dimension, and that you probably are as well. Named after people on Internet message boards who claim to have memories of South African president Nelson Mandela’s funeral in the early ’80s (he died in 2013), whole online communities are dedicated to enumerating subtle differences between our current timeline and wherever they’re originally from. Things like recalling a comedy about a genie starring Sinbad called Shazaam! or the ursine family from The Berenstain Bears spelling their surname “Berenstein” (I think that I’m actually from that dimension).

Everett’s calculations concern minuscule differences; the many-worlds interpretation deals in issues of momentum and location of subatomic particles. That doesn’t mean that there isn’t a universe where the Berenstain bears have a different last name—in a multiverse of infinite possibility all possibilities are by definition actual things—but that universe’s off-ramp is a few more exits down the highway. This doesn’t stop believers in the Mandela Effect from comparing notes on their perambulations among the corners and byways of our infinite multiverse, recalling memories from places and times as close as your own life and as distant as another universe. Looking out my window I can’t see the Prudential Center anymore, and for a second I wonder if it ever really existed, before realizing that it’s only fog.

Have some sympathy for those of us who remember Kit-Kat bars as being spelled with a dash, or Casablanca having the line “Play it again, Sam.” Something is lost in this universe of ours, here where some demiurge has decided to delete that line. Belief in the Mandela Effect illuminates our own alterity, our own discomfort in this universe or any other—a sense of alienness, of offness. The Mandela Effect is when our shoes pinch and our socks are slightly mismatched, when we could swear that we didn’t leave our keys in the freezer. And of course the Mandela Effect is the result of simply misremembering. A deeper truth is that existence can sometimes feel so off-putting that we might as well be from a parallel universe. Those other dimensions convey the promise of another world, of another reality. That just because things are done this way where we live now, doesn’t mean that they’re done this way everywhere. Or that they must always be done this way here, either.

What’s moving about Everyday Chemistry is that those expertly mixed songs are missives from a different reality, recordings from a separate, better universe. The album is a tangible reminder that things are different in other places, like the invented book at the center of K. Chess’s brilliant new novel Famous Men Who Never Lived, which imagines thousands of refugees from a parallel universe finding a home in our own. In that novel, the main character clutches onto a science fiction classic called The Pyronauts, a work of literature non-existent in our reality. The Pyronauts, like Everyday Chemistry, betrays a fascinating truth about parallel universes. We may look for physical, tangible, touchable proof of the existence of such places, but literature is all the proof we need. Art is verification that another world isn’t just possible, but already exists. All literature is from a parallel universe and all fiction is alternate history.

Whether or not the Beatles recorded Everyday Chemistry, the album itself exists; if The Pyronauts was written not in our universe, then one need only transcribe it so as to read it. In the introduction to my collection The Anthology of Babel, I refer to “imagined literature,” an approach towards “probing the metaphysics of this strange thing that we call fiction, this use of invented language which is comprehensible and yet where reality does not literally support the representation.” Every fiction is an epistle from a different reality; even Hugh Everett would tell you that somewhere a real Jay Gatsby pined for Daisy Buchanan, that a few universes over Elizabeth Bennet and Mr. Darcy were actually married, and that somewhere Mrs. Dalloway is always buying the flowers herself. The Great Gatsby, Pride and Prejudice, and Mrs. Dalloway are all, in their own way, alternate histories as well.

Alternate history does what the best of literature more generally does—provide a wormhole to a different reality. That fiction engenders a deep empathy for other people is true, and important, but fiction is not simply a vehicle for entering different minds; it lets us enter different worlds as well. Fiction allows us to be chrononauts, to feel empathy for parallel universes, for different realities. Such a thing as fiction is simply another artifact from another dimension; literature is but a fragment from a universe that is not our own. We are haunted by our other lives, ghosts of misfortune averted, spirits of opportunities rejected, so that fiction is not simply the experience of another, but a deep human connection with those differing versions of ourselves on the paths of our forked parallel lives.

Image credit: Unsplash/Kelly Sikkema.

Island Time: On the Poetics of the Isle

“The isle is full of noises, /Sounds, and sweet airs, that give delight, and hurt not.”—Caliban in William Shakespeare’s The Tempest (1610)

“Is our island a prison or a hermitage?”—Miranda in Aimé Césaire’s Une Tempête (1969)

In 1810 a struggling whaler by the name of Jonathan Lambert, “late of Salem…and citizen thereof,” set out for torrid waters. By December of 1811, Lambert and his crew of three alighted upon an unpopulated volcanic island in the south Atlantic that Portuguese navigators had christened Ilha de Tristão da Cunha three centuries before. Lambert, in a spirit of boot-strapping individualism, declared himself to be king. A solitary monarchy, this Massachusettsian and his three subjects on that rocky shoal of penguins and guano. Still, Lambert exhibited utopian panache, not just in spite of such remoteness, but because of it. Contrary to the British maritime charts that listed the archipelago as “Tristan da Cunha,” the American renamed the islands a far more paradisiacal “Islands of Refreshment.”

This whaler’s small kingdom promised not just refreshment from Lambert’s old life, where he explained that “embarrassments…have hitherto constantly attended me,” but from the affairs of all people. Lambert’s Islands of Refreshment were, and are, the most distant habitation on the planet, lying 2,166 miles from the Falkland Islands, 1,511 miles from Cape Town, South Africa, and 1,343 miles from St. Helena where lonely Napoleon Bonaparte would soon expire. And for the whaler’s sake, Lambert’s new home was 10,649 miles from Salem, Massachusetts.

Parliamentary records quote Lambert as claiming that he had “the desire and determination of preparing myself and family a home where I can enjoy life.” Here on the Islands of Refreshment, where he subsisted on the greasy blubber of elephant seals, Lambert prayed that he would be “removed beyond the reach of chicanery and ordinary misfortune.” As it was, all utopias are destined to fail sooner or later, and ordinary misfortune was precisely what would end Lambert’s life, the experienced sailor drowning five months after arriving, such final regicide belonging to the sea.

Four years after Lambert’s drowning, the British navy claimed the Islands of Refreshment for king, England, and St. George, hoping to prevent the use of the archipelago by either the Americans (whose ships trolled those waters during the War of 1812) or any French who wished to launch a rescue mission for their imprisoned emperor those 1,343 miles north. And as Tristan da Cunha was folded into that realm, so it remains, now administered as a British Overseas Territory, joining the slightly more than a dozen colonies from Anguilla to Turks and Caicos that constitute all that remains of the empire upon which it was said that the sun would never set. As of 2019, Lambert’s ill-fated utopia remains the most solitary land mass on Earth, some 250 hardy souls in the capital of Edinburgh-of-the-Seven-Seas, as far from every other human as is possible on the surface of the planet, and a potent reminder of the meaning of islands. An island, you see, has a certain meaning. An island makes particular demands.

In trying to derive a metaphysics of the island—a poetics of the isle—few square miles are as representative as Tristan da Cunha. A general theory could be derived from Lambert’s example, for failure though he may have been, he was a sort of Prospero and Robinson Crusoe tied into one. Maybe more than either, Lambert’s project resembles that of Henry Neville’s strange 1668 utopian fantasy The Isle of Pines. Neville’s arcadian pamphlet was a fabulist account by a fictional Dutch sailor who comes upon an isle that had been settled by an English castaway named George Pine and four women from each corner of the world, whose progeny populate that isle three generations later.

Lambert drowned before he had opportunity to be a George Pine for the Islands of Refreshment, but the one survivor of his original quartet, an Italian named Tommaso Corri, has descendants among the populace of Tristan da Cunha today. Neville’s account is significantly more ribald than what we know of Tristan da Cunha’s peopling. The Isle of Pines is a psychosexual mine-field ripe for Freudian analysis, where the phallic anagram of Pine’s name makes clear the pornographic elements involved in a narrative of a castaway kept company not by Friday, but by four women. Pine’s account includes such exposition as “Idleness and Fulness of every thing begot in me a desire of enjoing the women…I had perswaded the two Maids to let me lie with them, which I did at first in private, but after, custome taking away shame (there being none but us) we did it more openly, as our Lusts gave us liberty.”

Imaginary islands often present colonial fantasy, an isolated Eden ready for exploitation by an almost always male character, where the morality of home can be shuffled off, while those whose home has been violated are not even given the dignity of names. Such is in keeping with the pastoral origins of the island-narrative, the myth that such locations are places outside of time and space, simultaneously remote and yet connected by the ocean to every other point on the globe. This is the metaphysics of the island: islands may be sun-dappled, palm-tree-lined, blue-water isles many fathoms from “civilization,” but by virtue of the ocean current they are connected to every other location that sits on a coast. By dint of that paradoxical property, islands are almost always the geographic feature with which utopias are associated, for paradise is not paradise if it’s easily accessible.

The Isle of Pines is an example of the utopian genre that flourished in early modern England, which includes Francis Bacon’s 1627 science fiction novel New Atlantis and James Harrington’s 1656 The Commonwealth of Oceana. Writers popularized the idea of the New World island; authors like Richard Hakluyt in his 1600 Principal Navigations and Samuel Purchas in his 1614 Purchas, his Pilgrimage collated the fantastic accounts of Walter Raleigh, Francis Drake, Thomas Harriot, and Martin Frobisher, even while those compilers rarely ventured off another island called Britain. Renaissance authors were fixated not on river or lake, isthmus or peninsula, but on islands. Not even the ocean itself held quite as much allure. “The Island” was that era’s geographic feature par excellence.

Islands have of course been the stuff of fantasy since Homer sang tales of brave Ulysses imprisoned on Calypso’s erotic western isle, or Plato’s account of the sunken continent of Atlantis. Geographer Yi-Fu Tuan in Topophilia: A Study of Environmental Perception, Attitudes, and Values explains that the “island seems to have a tenacious hold,” arguing that unlike continental seashores or tropical forests, islands played a small role in human evolution, and that their importance instead “lies in the imaginative realm.” But it was the accounts of the so-called “Age of Discovery” that endowed the geography of the island with a new significance, a simultaneous rediscovery and invention of the very idea of the island. We’re still indulging in that daydream.

Scholar Roland Greene explains in Unrequited Conquests: Love and Empire in the Colonial Americas that the island took on a new significance during the Renaissance, writing that the geographical feature often signified a “figurative way of delimiting a new reality in the process of being disclosed, something between a fiction and an entire world.” Knowledge of islands obviously existed before 1492, and earlier stories about them even had some of the same fantastic associations, as indicated in the aforementioned examples. But unlike the Polynesians who were adept navigators, Europeans had to anxiously hug the coasts for centuries. Improvements in Renaissance technology finally allowed sailors to venture into the open ocean, and in many ways the awareness that on the other side of the sea was a different continent meant the discovery of something that people had travelled on for millennia—the Atlantic Ocean. And the discovery of that ocean invested the remote and yet interconnected island with a glowing significance.

It’s not a coincidence that Thomas More’s Utopia was printed in 1516 only 14 years after the navigator Amerigo Vespucci argued that the islands off the coast of Asia which Christopher Columbus had “discovered” were continents in their own right—though in many ways the Americas are islands off the coast of Asia, even if the scale and distance are larger than might have been assumed. More’s account of an imagined perfect society, as narrated by a well-traveled Portuguese sailor, has occasioned five centuries of disagreement on how the tract should be read, but whether in earnest or in jest, Utopia’s American location and its status as an island “two hundred miles broad” that holds “great convenience for mutual commerce” are not incidental. If the sandy, sunny island took on new import, it’s because the isolated hermitage of the isle made the perfections of utopia seem possible.

So powerful was More’s tract that later colonists would pilfer his utopian imagery, conflating real Caribbean isles with the future saint’s fiction. Such was the magic of what Tuan describes as the “fantasy of island Edens” that Ponce de Leon assumed a place as lovely as Florida must be an island, and for generations cartographers depicted California as such, as they “followed the tradition of identifying enchantment with insularity.” Utopias, paradises, or Edens have no place in a landlocked location. Tuan explains that the island “symbolizes a state of prelapsarian innocence and bliss, quarantined by the sea from the ills of the continent,” and that in the early modern era they signaled “make-believe and a place of withdrawal from high-pressured living on the continent.” Watch an advertisement from the Bahamian tourist board and see if much has changed in that presentation since the 16th century, or spend a few days on a carefully manicured Caribbean beach in the midst of a cold and grey winter, and see if you don’t agree.

Such were the feelings for the crew of the Sea Venture bound for Jamestown in 1609, finding themselves blown by a hurricane onto the corals off an isolated isle whose eerie nocturnal bird-calls had long spooked navigators, known to posterity as Bermuda. Historians Marcus Rediker and Peter Linebaugh in The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic write that Bermuda was a “strange shore, a place long considered by sailors to be an enchanted ‘Isle of Devils’ infested with demons and monsters… a ghoulish graveyard.” During that sojourn, many of the Sea Venture’s crew came to a different conclusion, as Bermuda “turned out to be an Edenic land of perpetual spring and abundant food.”

A gentleman named William Strachey, dilettante sonneteer and investor in both the Blackfriars Theater and the Virginia Company, would write that Bermuda’s environment was so ideal that it “caused many of them utterly to forget or desire ever to return…they lived in such plenty, peace, and ease.” Silvester Jourdain, in his 1610 A Discovery of the Bermudas, Otherwise Called the Isle of Devils, claimed that contrary to its reputation, this island was “the richest, healthfullest and pleasantest they ever saw.” There was trouble in this paradise, as Strachey reported, for so pleasant was Bermuda that many of the shipwrecked sailors endeavored never to continue on to Virginia, for on the continent “nothing but wretchedness and labor must be expected, with many wants and churlish entreaty, there being neither that fish, flesh, nor fowl which here… at ease and pleasure might be enjoyed.”

The Virginia Company could of course not allow such intransigence, so while those 150 survivors made do on Bermuda’s strand for nine months, the threat of violence ensured that all the sailors would continue on to America after two new ships were constructed. For a pregnant year, the survivors of the Sea Venture sunned themselves on Atlantic white-sand beaches, nourished by the plentiful wild pigs, coconuts, and clean fresh-water streams, while in Jamestown the colonists resorted to cannibalism, having chosen to hand their agriculture over entirely to the cultivation of an addictive, deadly, and profitable narcotic called tobacco. When the Sea Venture’s crew arrived in Jamestown, including Pocahontas’s future husband, John Rolfe, they found a settlement reduced from more than 500 souls to fewer than 60, while the Bermudian vessel lost only two sailors whom history remembers as Carter and Waters, the pair so enraptured with this Eden that they absconded into its interior wilderness never to be seen again.

William Shakespeare’s sprite Ariel in The Tempest refers to the isle as the “still-vex’d Bermoothes,” and for a generation scholars have identified Strachey’s letter as source material for that play, the two men having potentially been drinking buddies at the famed Mermaid Tavern. Long has it been a theoretical point of contention as to whether The Tempest could be thought of as Shakespeare’s “American play.” Excluding Strachey’s letter, the plot of this last play of Shakespeare’s is arguably his only original one, and though geographic calculation places Prospero’s isle in the Mediterranean, the play’s concerns are more colonial in an American sense, with Ariel and the ogre Caliban being unjustly exploited. The latter makes this clear when he intones that this “island’s mine, by Sycorax my mother/Which thou tak’st from me.” In a disturbing reflection of actual accounts, the working-class castaways Trinculo and Stephano (Is that you, Carter and Waters?) conspire to exhibit Caliban in a human zoo after plying him with liquor.

If America has always been a sort of Faustian bargain, a fantasy of Eden purchased for the inconceivable price of colonialism, slavery, and genocide, then The Tempest offers an alternative history where imperialism is abandoned at the very moment that it rendered Paradise lost. As Faustian a figure as ever, the necromancer Prospero ends those revels with “As you from crimes would pardon’d be/Let your indulgence set me free.” And so, the Europeans return home, leaving the isle to Ariel and Caliban, its rightful inhabitants. Something unspeakably tragic, this dream of a parallel universe penned by Shakespeare, an investor in that very same Virginia Company. Shortly after The Tempest was first staged, Bermuda would be transformed from liberty to a place “of bondage, war, scarcity, and famine,” as Linebaugh and Rediker write. But even if such a terrestrial heaven was always the stuff of myth, in the play itself “something rich and strange” can endure, this place where “bones are coral made” and there are “pearls that were his eyes,” and where “Sea-nymphs hourly ring his knell.”

This is the island of Andrew Marvell’s 1653 poem “Bermudas,” an archipelago “In th’ocean’s bosom unespied,” until religious schismatics fleeing for liberty come “Unto an isle so long unknown, /And yet far kinder than our own.” In their “small boat,” the pilgrims land “on a grassy stage/Safe from the storm’s and prelates’ rage.” God has provided in these fortunate isles fowls and oranges, pomegranates, figs, melons, and apples. On Bermuda cedars are more plentiful than in Lebanon, and upon the beaches wash the perfumed delicacy of ambergris, the waxy whale secretion that Charles II supposedly consumed alongside his eggs. These Englishmen sing a “holy and a cheerful note, /And all the way, to guide their chime, with falling oars they kept the time,” for Bermuda itself is a “temple, where to sound his name.” In Marvell’s imagination the islands are a place where the Fall itself has been reversed, where the expulsion from Eden never happened, here on Bermuda where God “gave us this eternal spring/Which here enamels everything.”

While Strachey wrote of their nine-month vacation of indolence, sensuality, and pleasure, islands have also been feared and marveled at as sites of hardened endurance. If utopian literature sees the island as a hermitage, then its sister genre of the Robinsonade sees the isle as a prison, albeit one that can be overcome. That latter genre draws its name, of course, from Daniel Defoe’s 1719 classic Robinson Crusoe. Greene notes that “shipwreck become a locus for the frustrations of conquest and trade,” and nowhere is that clearer than in Defoe’s book. The titular character remarks that “We never see the true state of our condition till it is illustrated to us by its contraries, nor know how to value what we enjoy, but by the want of it.” In the fiery cauldron that is the desert island, we’re to understand that the hardship and isolation of Crusoe’s predicament will reveal to him (and us) what he actually is, and what he should be. Defoe concludes that the proper state of man should be defined by industry, imperialism, Puritanism, and an anal-retentive sense of organizing resources, time, and our very beliefs.

Robinson Crusoe was written when the relationship between travelogue and fiction was still porous; readers took the account of the sailor shipwrecked on an isle at the mouth of Venezuela’s Orinoco River as factual, but the character was always the invention of Defoe’s mind. Defoe’s novel, if not the first of that form, was certainly an early bestseller; so much so that Robinson Crusoe endures as an archetypal folktale, the narrative of the castaway and his servant Friday known by multitudes who’ve never read the book (and perhaps still don’t know that it was always a fiction). Living on in adaptation over the centuries, its influence is seen everywhere from the Robert Zemeckis film Cast Away to the television series Lost and Gilligan’s Island. With what we’re to read as comfortable acclimation, Crusoe says that he “learned to look more upon the bright side of my condition, and less upon the dark side.” Whether Crusoe is aware of that dark side or not, he very much promulgates a version of it, the castaway turning himself into a one-man colonial project, ethnically cleansing the island of its natives, basically enslaving “my man Friday” after converting him to Christianity, and exploiting the isle’s natural resources.

With more than a bit of Hibernian skepticism towards the whole endeavor, James Joyce claimed that Crusoe was the “true prototype of the British colonist” (and the Irishman knew something about British colonialism). Joyce sees the “whole Anglo-Saxon spirit in Crusoe: the manly independence, the unconscious cruelty, the persistence, the slow yet efficient intelligence, the sexual apathy, the calculating taciturnity.” There is something appropriate in the Robinsonade using the atomistic isolation of the island as a metaphor for the rugged individual, the lonely consumer, the lonelier capitalist. Crusoe spent nearly three decades on his island, from 1659 to 1686, but unbeknownst to him, halfway through his seclusion the Second Anglo-Dutch War settled a few small issues of geography not very far from Crusoe’s natural anchorage. When that war ended, the English would trade their colony in Suriname, watered by the tributaries of the Orinoco, in exchange for an island many thousands of miles to the north called Manhattan. That colder bit of rock would be much more associated with capitalism and rugged individualism than even Crusoe’s.

Innumerable are the permutations of utopia’s hermitage and the prison of the Robinsonade. Consider the kingdoms of Jonathan Swift’s 1726 Gulliver’s Travels; the pseudonymous Unca Eliza Winkfield in 1767’s The Female American, with its tiki idols talking through trickery; Johann David Wyss’s wholesome 1812 Swiss Family Robinson; Jules Verne’s fantastic 1874 The Mysterious Island; H.G. Wells’s chilling 1896 The Island of Dr. Moreau; William Golding’s thin 1954 classic Lord of the Flies (which has terrified generations of middle school students); Scott O’Dell’s 1960 New Age Island of the Blue Dolphins; Yann Martel’s obscenely popular 2001 Life of Pi; and even Andy Weir’s 2012 technocratic science fiction novel The Martian. All are Robinsonades or utopias of a sort, even if sometimes the island must be a planet. Once you start noticing islands, they appear everywhere. An archipelago of imagined land masses stretching across the library stacks.

Which is to remember that islands may appear in the atlas of our canon, but actual islands existed long before we put words to describe them. There is the platonic island of the mind, but also the physical island of reality, and confusion between the two has ramifications for those breathing people who call the latter home. Both utopias and Robinsonades reflected and created colonial attitudes, and while it’s easy to get lost in the reverie of the imagined island, actual islanders have suffered upon our awakening. Island remoteness means that one person’s paradisiacal resort can be another person’s temperate prison. So much of the writing about islands is projection from those on the continent, but no true canon of the island can ignore those who actually live there. We can study “utopian literature,” but there is no literature from a real Utopia; rather, we must also read poetry, prose, and drama from Jamaica and Haiti, Cuba and Hawaii, Puerto Rico and the Dominican Republic.

We must orient ourselves to not just the imagined reveries of a Robinson Crusoe, but the actual words of Frantz Fanon, C.L.R. James, V.S. Naipaul, Jamaica Kincaid, Derek Walcott, Marlon James, Junot Diaz, Claude McKay, and Edwidge Danticat. Imagined islands can appear in the atlases of our minds, but real islands have populations of real people. An author who refused to let his audience forget that fact was the Martinican dramatist Aimé Césaire, who, in his 1969 play Une Tempête, rewrote the cheery conclusion of Shakespeare’s play so as to fully confront the legacy of colonialism. If Shakespeare gave us a revision before the facts of imperialism, then Césaire forces us to understand that Prospero has never been fully willing to go home, and that for Caliban and Ariel happy endings are as fantastic as utopia. Yet Césaire’s characters soldier on, a revolutionary Caliban reminding us of what justice looks like, for “I’m going to have the last word.”

What is the use, then, of an island? We may read of Utopia, but we must not forget the violent histories of blood and sugar, plantations and coups, slavery and revolution. Yet we must not abandon “The Island” as an idea in our sweltering days of the Anthropocene, with Micronesia and the Maldives, the Seychelles and the Solomon Islands threatened by the warm lap of the rising ocean. In Devotions upon Emergent Occasions, the poet John Donne wrote that “No man is an island,” but viewed another way, everything is an island, not least this planet that we float upon. I’ve claimed that islands are defined by their simultaneous remoteness and interconnectedness, but even more than that, an island is a microcosm of the world, for the world is the biggest island on which life exists, a small land mass encircled by the infinite oceanic blackness of space.

As the astronomer Carl Sagan famously said in a 1994 address at Cornell University, “Our planet is a lonely speck… it underscores our responsibility to deal more kindly and compassionately with one another and to preserve and cherish that pale blue dot.” On Christmas Eve of 1968, the astronaut William Anders took the famous Earthrise photograph of our planet as it rose over the dead surface of the dusty grey moon. In that photo we see all islands, all continents, all oceans subsumed into this isolated, solitary thing. Islands are castles of the imagination, and as thought got us into this ecological mess, so must it be thought that redeems us. Hermitage or prison? Can such an island ever be a utopia, or are we marooned upon its quickly flooding beaches?

Image credit: Unsplash/Benjamin Behre.

Ten Ways to Look at the Color Black

1.One of the most poignant of all passages in English literature occurs in The Life and Opinions of Tristram Shandy, Gentleman, serially published between the years of 1759 and 1767, when its author Laurence Sterne wrote: “████████████████████████████████████ ██████████████████████████████████████████████████████████████████████████████████████” Such is the melancholic shade of the 73rd page of Tristram Shandy, the entirety of the paper taken up with black ink, when the very book itself mourns the death of an innocent but witty parson with the Shakespearean name Yorick. Said black page appears after Yorick went to his doors and “closed them, – and never opened them more,” for it was that “he died… as was generally thought, quite broken hearted.”

Tristram Shandy is more than just an account of its titular character, for as Steven Moore explains in The Novel: An Alternative History 1600-1800, the English writer engaged subjects including “pedantry, pedagogy, language, sex, writing, obsessions… obstetrics, warfare and fortifications, time and memory, birth and death, religion, philosophy, the law, politics, solipsism, habits, chance… sash-windows, chambermaids, maypoles, buttonholes,” ultimately concluding that it would be “simpler to list what it isn’t about.” Sterne’s novel is the sort that spends a substantial portion of its endlessly digressive plot with the narrator describing his own conception and birth. As Tristram says of his story, “Digressions, incontestably, are the sunshine; – & they are the life, the soul of reading; – take them out of this book for instance, – you might as well take the book along with them.”

Eighteenth-century critics didn’t always go in for this sort of thing. Dr. Johnson, with poor prescience, said “Nothing odd will do long. Tristram Shandy did not last,” while Voltaire gave it a rather more generous appraisal, calling it “a very unaccountable book; an original.” Common readers were a bit more adventuresome; Moore records that the “sheer novelty of the first two volumes made Tristram Shandy a hit when they were reprinted in London in the early 1760s.” Sterne arguably produced the first “post-modern” novel, long before Thomas Pynchon’s Gravity’s Rainbow or David Foster Wallace’s Infinite Jest. Central to Tristram Shandy are its typographical eccentricities, which Michael Schmidt in The Novel: A Biography describes: “mock-marbling of the paper, the pointing hands, the expressive asterisks, squiggles, dingbats…the varying lengths of dashes.” None of those are as famous as poor Yorick’s pitch-black page, however.

It’s easy to see Sterne’s black page, its rectangle of darkness, as an oddity, an affectation, an eccentricity, a gimmick. This is woefully inconsiderate to the English language’s greatest passage about the blankness of grief. Sober critics have a tendency to mistake playfulness for a lack of seriousness, but a reading of Tristram Shandy shows that for all of its strangeness, its scatological prose and its metafictional tricks, Sterne’s goal was always to chart the “mechanism and menstruations in the brain,” as he explained, to describe “what passes in a man’s mind.”

Which is why Tristram Shandy’s infamous black page represents grief more truthfully than the millions of pages that use ink in a more conventional way. Sterne’s prose, or rather the gaping dark absence where prose normally would be, is the closest that he can get to genuinely conveying what loss’s void feels like. What’s clear is that no “reading” or “interpretation” of Yorick’s extinction can actually be proffered; no analysis of any human’s death can be translated into something rationally approachable. Sterne reminds us that grief is not amenable to literary criticism. Anyone who has ever lost someone they loved, who has seen that person die, understands that mere words cannot be commensurate with the enormity of that absence. Concerning such emotions beyond emotions, when it comes to “meaning,” the fullest and most accurate portrayal can only ever be a black hole.

2.Black is the most parsimonious of all colors. Color is a question of what it is we’re seeing when contrasted with that which we can’t, and black is the null zero of the latter. Those Manichean symbolic associations that we have with black and white are culturally relative—they are contingent on the arbitrary associations that a people project onto colors. Yet true to the ballet of binary oppositions, the two are inextricably related, for one could never read black ink on black paper, or its converse. If with feigned synesthesia we could imagine what each color would sound like, I’d suspect that they’d either be all piercing intensity and high pitches, or perhaps a low, barely-heard thrum—but I’m unsure which would be which.

Their extremity is what haunts: allowing either only absorption or only reflection, the two colors reject the russet cool of October and the blue chill of December, or the May warmth of yellow and the July heat of red. Black and white are both voids, both absences, both spouses in an absolutism. They are singularities. Hardly anything is ever truly black, not even the night sky awash in the electromagnetic radiation of all those distant suns. Black and white are abstractions, imagined mathematical potentials, for even the darkest of shades must by necessity reflect something back. Save for one thing—the black hole.

As early as 1796 the Frenchman Pierre-Simon Laplace conjectured the existence of objects with a gravitational field so strong that not even light could escape. Laplace, when asked of God, famously told Napoleon that he “had no need for that hypothesis,” but he knew of the black hole’s rapacious hunger. It wouldn’t be until 1916 that another scientist, the German Karl Schwarzschild, would use Albert Einstein’s general theory of relativity to surmise the existence of the modern black hole. Physicist Brian Greene explains in The Elegant Universe that Schwarzschild’s calculations implied objects whose “resulting space-time warp is so radical that anything, including light, that gets too close… will be unable to escape its gravitational grip.”
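A rough sense of what “gets too close” means can be had from the Schwarzschild radius, the threshold that would later be called the event horizon; the formula below is the standard textbook expression, offered only as a gloss on Greene’s description rather than anything quoted from him:

$r_s = \dfrac{2GM}{c^2}$

Here $G$ is the gravitational constant, $M$ is the mass of the collapsed star, and $c$ is the speed of light; compress that mass inside that radius and, per Schwarzschild’s solution, nothing that crosses the boundary, light included, ever comes back out.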

Black holes were first invented as a bit of mathematical book-keeping, a theoretical concept to keep God’s ledger in order. However, as Charles Seife writes in Alpha and Omega: The Search for the Beginning and End of the Universe, though a “black hole is practically invisible, astronomers can infer its presence from the artifacts it has on spacetime itself.” Formed from the tremendous power of a supernova, a black hole is a lacuna in space and time, the inky corpse of what was once a star, and an impenetrable passage from which no traveler may return.

A black hole is the simplest object in the universe. Even a hydrogen atom is composed of a proton and an electron, but a black hole is simply a singularity and an event horizon. The former is the infinitely dense core of a dead star, the ineffable heart of the darkest thing in existence, and the latter marks the point of no return for any wayward pilgrim. It’s at the singularity itself where the very presuppositions of physics break down, where our mathematics tells us that reality has no strictures. Though a black hole may be explained by physics, it’s also paradoxically a negation of physics. It’s obvious why the black hole would become such a potent metaphor, for physics has surmised the existence of locations for which logic has no dominion. A cosmological terra incognita, if you will, where there be monsters.

God may not play dice with the universe, but as it turns out She is ironic. Stephen Hawking figured that the potent stew of virtual particles predicted by quantum mechanics, general relativity’s great rival in explaining things, meant that at the event horizon of a black hole there would be a slight escape of radiation, as implied by Werner Heisenberg’s infamous uncertainty principle. And so, from Hawking, we learn that though black may be black, nothing is ever totally just that, not even a black hole. Save maybe for death.
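For the curious, the temperature of that faint leak of radiation is conventionally given by Hawking’s formula; again, this is the standard textbook expression, offered only as a gloss on the paragraph above:

$T_H = \dfrac{\hbar c^3}{8 \pi G M k_B}$

Because the black hole’s mass $M$ sits in the denominator, the heavier the hole, the colder and fainter the glow, which is why this one exception to perfect blackness is, for any astrophysical black hole, vanishingly slight.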

3.“Black hole” is the rare physics term that is evocative enough to attract public attention, especially compared to the previous phrase for the concept, “gravitationally collapsed object.” The term was coined by physicist Robert H. Dicke in the early ’60s, who appropriated it from the infamous dungeon in colonial India that held British prisoners and was known as the “Black Hole of Calcutta.” In Dicke’s mind, that hot, fetid, stinking, torturous hell-hole from which few men could emerge was an apt metaphor for the cosmological singularity that acts as a physical manifestation of Dante’s warning in Inferno to “Abandon hope all ye who enter here.”

Dante was a poet, and the term “black hole” is a metaphor, but it’s important to remember that pain and loss go beyond language; they are not abstractions, but very real. That particular Calcutta hole was in actuality an 18-foot by 14-foot cell in the ruins of Ft. William that held 69 Indian and British soldiers upon the fall of that garrison in 1756, when it was taken by the Nawab of Bengal. According to a survivor of the imprisonment, John Zephaniah Holwell, the soldiers “raved, fought, prayed, blasphemed, and many then fell exhausted on the floor, where suffocation put an end to their torments.” On the first night 46 of the men died.

What that enclosure in Calcutta signified was its own singularity, where meaning itself had no meaning. In such a context the absence of color becomes indicative of erasure and negation, such darkness signaling nothing. As Lear echoes Parmenides, “Nothing will come of nothing: speak again.” There have been many black holes, on all continents, in all epochs. During the 18th century the slave ships of the Middle Passage were their own hell, where little light was allowed to escape.

In Marcus Rediker’s The Slave Ship: A Human History, the scholar speaks of the “horror-filled lower deck,” a hell of “hot, crowded, miserable circumstances.” A rare contemporary account of the Middle Passage is found in the enslaved Nigerian Olaudah Equiano’s 1789 The Interesting Narrative of Olaudah Equiano, Or Gustavus Vassa, The African. Penned the year that French Jacobins stormed the Bastille, Equiano’s account is one of the rare voices of the slave ship to have been recorded and survived, an account of one who has been to a hell that he did not deserve and who yet returned to tell the tale of that darkness. Equiano described being “put down under the decks” where he “received such a salutation in my nostrils as I had never experienced in my life: so that, with the loathsomeness of the stench, and crying together, I became so sick and low that I was not able to eat…I now wished for the last friend, death.”

There’s a risk in using any language, any metaphor, to describe the singularities of suffering endured by humans in such places, a tendency to turn the lives of actual people into fodder for theorizing and abstraction. Philosopher Elaine Scarry in The Body in Pain: The Making and Unmaking of the World argues that much is at “stake in the attempt to invent linguistic structures that will reach and accommodate this area of experience normally so inaccessible to language… a project laden with practical and ethical consequence.” Any attempt to constrain such experience in language, especially if it’s not the author’s experience, runs a risk of limiting those stories. “Black hole” is an effective metaphor to an extent, in that implicit within it is the idea of logic and language breaking down, and yet it’s all the more important to realize that it is ultimately still a metaphor as well, what the Soviet dissident Aleksandr Solzhenitsyn in The Gulag Archipelago described as “the dark infinity.”

David King in The Commissar Vanishes: The Falsification of Photographs and Art in Stalin’s Russia provides a chilling warning about what happens when humans are reduced to such metaphor, when they are erased. King writes that the “physical eradication of Stalin’s political opponents at the hands of the secret police was swiftly followed by their obliteration from all forms of pictorial existence.” What’s most disturbing are the primitively doctored photographs, where being able to see the alteration is the very point. These are illusions that don’t exist to trick, but to warn; their purpose is not to make you forget, but rather the opposite, to remind you of those whom you are never to speak of again. Examine the Damnatio memoriae of Akmal Ikramov, first secretary of the Communist Party of Uzbekistan, who was condemned by Stalin and shot. In the archives his portrait was slathered in black paint. The task of memory is to never forget that underneath that mask there was a real face, that Ikramov’s eyes looked out as yours do now.

4.Even if the favored color of the Bolsheviks was red, black has also had its defenders in partisan fashion across the political spectrum, from the Anarchist flag of the left to the black-shirts of Benito Mussolini’s fascist right and the Hugo Boss-designed uniforms of the Nazi SS. Drawing on those halcyon days of the Paris Commune in 1871, the anarchist Louise Michel first flew the black flag at a protest. Her implications were clear—if a white flag meant surrender, then a black flag meant its opposite. For all who wear the color black, certain connotations, sometimes divergent, can potentially be called upon, including authority, judiciousness, piety, purity, and power. Also, black makes you look thinner.

Recently departed fashion designer, creative director for the House of Chanel, and noted Teutonic vampire Karl Lagerfeld once told a Harper’s Bazaar reporter that “Black, like white, is the best color,” and I see no reason to dispute that. Famous for his slicked-back powdered white ponytail, his completely black suits, starched white detachable collars, black sunglasses, and leather riding gloves, Lagerfeld was part of a long tradition at that fabled French design firm. Coco Chanel, as quoted in The Allure of Chanel by Paul Morand (translated by Euan Cameron), explains that “All those gaudy, resuscitated colors shocked me; those reds, those greens, those electric blues.” Chanel explains rather that she “imposed black; it’s still going strong today.”

Black may be the favored monochromatic palette for a certain school of haute couture; think black-tie affairs and little black cocktail dresses. But the look is too good to be left to the elite. Black is the color of bohemians, spartan simplicity as a rebellion against square society. Beats were associated with it, they of stereotypical turtlenecks and thick-framed glasses. It’s always been a color for the avant-garde, signifying a certain austere rejection of the superficial cheerfulness of everyday life. A Beat like Allen Ginsberg, in his epic poem Howl with its memorable black cover from City Lights Books, may have dragged himself through the streets at dawn burning for that “ancient heavenly connection to the starry dynamo,” but his friend William S. Burroughs would survey the fashion choices of his black-clad brethren and declare that the Beats were the “movement which launched a million Gaps.”

Appropriated or not, black has always been the color of the outlaw, a venerable genealogy that includes everything from Marlon Brando’s leather jacket in The Wild One to Keanu Reeves’s duster in The Matrix. Fashionable villains too, from Dracula to Darth Vader. That black is the color of rock music, on its wide highway to hell, is a given. There is no imagining goth music without black’s macabre associations, no paying attention to a Marilyn Manson wearing khaki, or the Cure embracing teal. No, black is the color of my true love’s band, for there’s no Alice Cooper, Ozzy Osbourne, or the members of Bauhaus in anything but a monochromatic darkness. When Elvis Presley launched his ’68 comeback he opted for a skin-tight black leather jumpsuit.

Nobody surpasses Johnny Cash though. The country musician is inextricably bound to the color, wearing it as a non-negotiable uniform that expressed radical politics. He sings “I wear the black for the poor and the beaten down, /Livin’ in the hopeless, hungry side of town.” Confessing that he’d “love to wear a rainbow every day,” he swears allegiance to his millennial commitments, promising that he’ll “carry off a little darkness on my back, /’Till things are brighter, I’m the Man in Black.” Elaborating later in Cash: The Autobiography, cowritten with Patrick Carr, he says “I don’t see much reason to change my position today…There’s still plenty of darkness to carry off.”

Cash’s sartorial choices were informed by a Baptist upbringing; his clothes mourned a fallen world; it was the wardrobe of a preacher. Something similar motivates the clothing of a very different prophetic figure, the pragmatist philosopher Cornel West, who famously only wears a black three-piece suit, with a matching scarf. In an interview with The New York Times, West calls the suit his “cemetery clothes,” with a preacher’s knowledge that one should never ask for whom the bell tolls, but also with the understanding that in America the horrifying reality is that a black man may always need to be prepared for his own funeral when up against an unjust state. As he explained, “I am coffin-ready.” West uses his black suit, “my armor” as he calls it, as a fortification.

Black is a liturgical, sacred, divine color. It’s not a mistake that Cash and West draw from the somber hue of the minister’s attire. Black has often been associated with orders and clerics; the Benedictines with their black robes and Roman-collared Jesuits; Puritans and austere Quakers, all unified in little but clothing. Sects as divergent as Hasidic Jews and the Amish are known for their black hats. In realms of faith, black may as well be its own temple.

5. Deep in the Finsterwalde, the “Dark Forest” of northwestern Switzerland, not far from Zurich, there is a hermitage whose origins go back to the ninth century. Maintained by Benedictine monks, the monastery was founded by St. Meinrad. The saint lived his life committed to solitude, to dwelling in the space between words that can stretch to an infinity, a black space that still radiates its own light. In his vocation as a hermit, where he would found the monastery known (then and still) as Einsiedeln Abbey, he had a single companion gifted to him by the Abbess Hildegard of Zurich—a carved, wooden statue of the Virgin Mary holding the infant Christ, who was himself clutching a small bird as if it were his playmate.

For more than a millennium, that figure, known as the “Lady of Einsiedeln,” has been visited by millions of pilgrims, as the humble anchorage has grown into a complex of ornate, gilded baroque buildings. These seekers are drawn to her gentle countenance, an eerie verisimilitude projecting some kind of interiority within her walnut head. She has survived both the degradations of entropy and the Reformation, and is still a conduit for those who travel to witness material evidence of that silent world beyond. Our Lady of Einsiedeln is only a few feet tall; her clothing is variable, sometimes the celestial, cosmic blue of the Virgin, other times resplendent gold, but the crown of heaven is always upon her brow. One aspect of her remains unchanging, however, and that’s that both she and Christ are painted black.

In 1799, during a restoration of the monastery, it was argued, in the words of one of the workers, that the Virgin’s “color is not attributable to a painter.” Deciding that a dose of revisionism was needed alongside restoration, the restorer Johann Adam Fuetscher concluded that Mary’s black skin was the result of the “smoke of the lights of the hanging lamps which for so many centuries always burned in the Holy Chapel of Einsiedeln.”

Fuetscher decided to repaint the statue, but when visitors saw the new Virgin they were outraged and demanded she be returned to her original color, which has remained her hue for more than 200 years. Our Lady of Einsiedeln was not alone; depictions of Mary with dark skin can be found across the width and breadth of the continent, from the famed Black Madonna of Czestochowa in Poland to Our Lady of Dublin in the Whitefriar Street Carmelite Church, from the Sicilian town of Tindari to the frigid environs of Lund Cathedral (Lunds Domkyrka) in Sweden. Depending on how one identifies the statues, there are arguably 500 medieval examples of the Virgin Mary depicted with dark skin.

Recently art historians have admitted that the hundreds of Black Madonnas are probably intentionally so, but there is still debate as to why she is so often that color. One possibility is that the statues are an attempt at realism, that European artists had no compunction about rendering the Virgin and Christ with an accurate skin tone for Jews living in the Levant. Perhaps basing such renderings upon the accounts of pilgrims and crusaders who’d returned from the Holy Land, these craftsmen depicted the Mother of God with a face that wasn’t necessarily a mirror of their own.

Scholar Lucia Chiavola Birnbaum has her own interpretation of these carvings in her study Black Madonnas: Feminism, Religion, and Politics in Italy. For Birnbaum, the statues may represent a multicultural awareness among those who made them, but they also have a deep archetypal significance. She writes that “Black is the color of the earth and of the ancient color of regeneration, a matter of perception, imagination, and beliefs often not conscious, a phenomenon suggested in people’s continuing to call a madonna black even after the image had been whitened by the church.”

China Galland in Longing for Darkness: Tara and the Black Madonna, her account of global pilgrimage from California to Nepal, asks if there was in the “blackness of the Virgin a thread of connection to Tara, Kali, or Durga, or was it mere coincidence?” These are goddesses that, as Galland writes, have a blackness that is “almost luminous,” beings of a “beneficent and redeeming dark.” Whatever the motivations of those who made the statues, it’s clear that they intended to depict them exactly as they appear now, candle smoke and incense aside. At il Santuario della Madonna del Tindari in Sicily there is a celebrated Virgin Mary with dark skin. And just to dispel any hypothesis that her color is an accident, restorers in 1990 found inscribed upon her base a quotation from Song of Songs 1:5, when the Queen of Sheba declares to Solomon: “I am black but beautiful.”

6. Very different deities of darkness would come to adorn the walls of the suburban Madrid house that the Spanish painter Francisco Goya moved to 200 years ago, in the dusk of the Napoleonic conflicts (when Laplace had dismissed God). Already an old man, and deaf for decades, Goya would affix murals in thick, black oil to the plaster walls of his villa, a collection intended for an audience of one. As his biographer Robert Hughes would note in Goya, the so-called black paintings “revealed an aspect of Goya even more extreme, bizarre, and imposing” than the violent depictions of the Peninsular War for which he was famous. The black paintings were made for Goya’s eyes only. He was a man who’d witnessed the barbarity of war and inquisition, and now in his convalescence he chose to make representations of witches’ sabbaths and a goat-headed Baphomet overseeing a Black Mass, of Judith in the seconds after she decapitated Holofernes, and of twisted, toothless, grinning old men. And there, though it now hangs in the Museo del Prado, painted originally on the back wall of the first story of the Quinta del Sordo, next to one window and perpendicular to another, was his terrifying depiction of a fearsome Saturn devouring his own young.

In the hands of Goya, the myth of the Titan who cannibalized his progeny is rendered in stark, literal, horrifying reality. For Goya there is no escaping the implications of that story; his Chronos appears as a shaggy, wild-eyed, orangish monstrosity; matted, bestial white hair falls uncombed from his head and past his scrawny shoulders. Saturn is angular, jutting bones and knobby kneecaps, as if hunger has forced him to this unthinkable act. His eyes are wide, and though wild, they’re somehow scared, dwelling in the darkness of fear.

I wonder if that’s part of Goya’s intent, using this pagan theme to express something of Catholic guilt and death-obsession, that intuitive awareness of original sin. It makes sense to me that Saturn is the scared one; scared of what he’s capable of, scared of what he’s done. Clutching in both hands the dismembered body of a son, whose features and size are recognizably human, Chronos grips his child like a hoagie, his son’s right arm already devoured and his head in Saturn’s stomach, with the Titan biting directly into the final remaining hand. Appropriately enough for what is, after all, an act of deicide, the sacrificed god hangs in a cruciform position. A fringe of blood spills out from inside. His corpse has a pink flush to it, like a medium rare hamburger. That’s the horror of Chronos—of time—emerging from this undifferentiated darkness. When considering our final hour, time has a way of rendering the abstraction of a body into the literalism of meat. Saturn Devouring His Son hung in Goya’s dining room.

His later paintings may be the most striking evocation of blackness, but the shade haunted Goya his entire life. His print The Sleep of Reason Produces Monsters, made two decades before those murals in the Quinta del Sordo, is a cross-hatched study in somber tones, in black and grey. Goya draws himself, head down on a desk covered with the artist’s implements, and above him fly the specters of his nocturnal imagination, bats and owls flapping their wings in the ceaseless drone that is the soundtrack of our subconscious irrationalities, of the blackness that defines that minor form of extinction we call sleep.

7. The blackness of sleep both promises and threatens erasure. In that strange state of non-being there is an intimation of what it could mean to be dead. It’s telling that darkness is the most applicable metaphor for describing both death and sleep, for the bed or the coffin. Sigmund Freud famously said of his subject in The Interpretation of Dreams that dreams were the “royal road to the unconscious.” Even the laws of time and space seem voided within that nocturnal kingdom, where friends long dead come to speak with us, where hidden rooms are discovered in the dark confines of homes we’ve known our entire lives. Dreams are a singularity of sorts, but there is that more restful slumber that’s nothing but a calm blackness.

This reciprocal comparison between sleep and death is such a cliché precisely because it’s so obvious, from the configuration of our actual physical repose to our imagining of what the experiences might share with one another: Edmund Spenser in the Faerie Queene writing “For next to Death is Sleepe to be compared;” his contemporary the poet Thomas Sackville referring to sleep as the “Cousin of Death;” the immaculate Thomas Browne writing that sleep is the “Brother of Death;” and more than a century later Percy Shelley waxing “How wonderful is Death, Death and his brother Sleep!”

Without focusing too much on how the two have moved closer to one another on the family tree, what seems to unify tenor and vehicle in the metaphorical comparison between sleep and death is this quality of blackness, the non-existence of color standing in for non-existence itself. Both imply a certain radical freedom, for in dreams everyone has an independence, at least for a few hours. Consider that in our own society, where our totalizing system is the consumerism which controls our every waking moment, the only place where you won’t see anything designed by humans (other than yourself) is in dreams, at least until Amazon finds a way to beam advertisements directly into our skulls.

Then there is Shakespeare, who speaks of sleep as the “ape of death,” who in Hamlet’s monologue writes of the “sleep of death,” and in the Scottish play calls sleep “death’s counterfeit.” If centuries have a general disposition, then my beloved 17th century was a golden age of morbidity, when the ars moriendi of the “good death” was celebrated by essayists like Browne and Robert Burton in the magisterial Anatomy of Melancholy. In my own reading and writing there are few essayists whom I love more, or try to emulate more, than the good Dr. Browne. That under-read writer and physician, who coined the terms “literary” and “medical,” among much else besides, wrote one of the most moving and wondrous tracts about faith and skepticism in his 1642 Religio Medici. Browne writes “Sleep is a death, /O make me try, /By sleeping, what it is to die:/And as gently lay my head/On my grave, as now my bed.” Maybe it resonates with me because when I was (mostly) younger, I’d sometimes lie on my back and pretend that I was in my coffin. I still can only sleep in pitch blackness.

8. Far easier to imagine that upon death you go someplace not unlike here, in either direction, or into the life of some future person yet unborn. Far harder to imagine non-existence, that state of being nothing, so that the most accessible way that it can be envisioned is as a field of black, as being the view when you close your eyes. That’s simply blackness as a metaphor, another inexact and thus incorrect portrayal of something fundamentally unknowable. In trying to conceive of non-existence, blackness is all that’s accessible, and yet it’s a blackness where the very power of metaphor ceases to make sense, where language itself breaks down as if it were the laws of physics at the dark heart of the singularity.

In the Talmud, at Brachot 57b, the sages tell us that “Sleep is 1/60th of death,” and this equation has always struck me as just about right. It raises certain questions, though: is the sleep that is 1/60th of death those evenings when we have a pyrotechnic, psychedelic panoply of colors before us in the form of surrealistic dreams, or is it the sleep we have that is blacker than midnight, devoid of any being, of any semblance of our waking identities? This would seem to me to be the very point on which all questions of skepticism and faith must hang. That sleep, that strangest of activities, for which neurologists still have no clear answer as to why it is necessary (though we know that it is), is a missive from the future grave, a seven-hour slice of death, seems obvious to me. So strange that we mock the “irrationalities” of ages past, when so instrumental to our own lives is something as otherworldly as sleep, when we die for a third of our day and return from realms of non-being to bore our friends with accounts of our dreams.

When we use the darkness of repose as metaphor for death, we brush against the extremity of naked reality and the limitations of our own language. In imagining non-existence as a field of undifferentiated black, we may trick ourselves into experiencing what it would be like to no longer be here, but that’s a fallacy. Black is still a thing. Less than encouraging, this inability to conceive of that reality, which may be why deep down all of us, whether we admit it or not, are pretty sure that we’ll never die, or at least not completely. And yet the blackness of non-existence disturbs, how couldn’t it? Epicurus wrote as an argument against fear of our own mortality that “Death… is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not.”

Maybe that’s a palliative to some people, but it’s never been one to me. There is more sophistry than wisdom in the formulation, for it evades the psychology of being terrified at the thought of our own non-existence. Stoics and Epicureans have sometimes asked why we’re afraid of the non-existence of death, since we’ve already experienced non-existence before we were born. When I think back to the years before 1984, I don’t have a sense of an undifferentiated blackness; rather I have a sense of… well… nothing. That’s not exactly consoling to me. Maybe this is the height of egocentricity, but hasn’t everyone looked at photographs of their family from before they were born and felt a bit of the uncanny about it? Asking for a friend.

In 1714, the German philosopher Gottfried Wilhelm Leibniz asked in the Monadology “Why is there something rather than nothing?” and that remains the rub. For Martin Heidegger in the 20th century, that issue remained the “fundamental question of metaphysics.” I proffer no solution to it here, only note that when confronted with the enormity of non-existence, prudence forces us to admit the equivalently disturbing question of existence. Physicist Max Delbrück in Mind from Matter: An Essay on Evolutionary Epistemology quotes his colleague Niels Bohr, the father of quantum theory, as having once said that the “hallmark of any deep truth [is] that its negation is also a deep truth.” That is certainly the case with existence and non-existence, equally profound and equally disquieting. If we’re to apply colors to either, I can’t help but see oppositional white and black, with an ambiguity as to which is which.

9. If there can be a standard picture of God, I suspect that for most people it is a variation on the bearded, old man in the sky trope, sort of a more avuncular version of Michelangelo’s rendering from the Sistine Chapel ceiling. Such is an embodied deity, of dimensions in length, breadth, and width, and also such is the Lord as defined through that modern heresy of literalism. The ancients were often more sophisticated than both our fundamentalists and our atheists (as connected as black and white). Older methods of speaking about something as intractable as God were often to pass over it in silence, with an awareness that to limit God to mere existence was to limit too much.

In that silence there was the ever-heavy blossom of blackness, the all-encompassing field of darkness that contains every mystery to which there aren’t even any questions. Solzhenitsyn observed that “even blackness [can]… partake of the heavens.” Not even blackness, but especially blackness, for dark is the night. Theologians call this way of speaking about God “apophasis.” For those who embrace apophatic language, there is an acknowledgement that a clear definition of the divine is impossible, so that it is better to dwell in sacred uncertainties. This experience of God can often be a blackness in itself, what St. John of the Cross spoke of in his 1577 Spanish poem “The Dark Night of the Soul.” Content with how an absence can often be more holy than an image, the saint emphasized that such a dark night is “lovelier than the dawn.” There is a profound equality in undifferentiated blackness, in that darkness where features, even of God, are obscured. Maybe the question of whether or not God is real is as nonsensical as those issues of non-existence and death; maybe the question itself doesn’t make any sense, understanding rather that God isn’t just black. God is blackness.

10. On an ivory wall within the National Gallery, in Andrew Mellon’s palace constructed within this gleaming white city, there is a painting made late in life by Mark Rothko entitled Black on Grey. Measuring some 80 inches by 69.1 inches, the canvas is much taller than the average man, and true to its informal title it is given over to only two colors—a dark black on top fading into a dusty lunar grey below. Few among Rothko’s contemporaries in his abstract expressionist circle, the movement that shifted the capital of the art world from Paris to New York, had quite the sublimity of color that he did. Jackson Pollock certainly had the kinetic frenzy of the drip, Willem de Kooning the connection to something still figurative in his pastel swirl. But Rothko, he had a panoply of color, from his nuclear oranges and reds to those arctic blues and pacific greens, what he described to Selden Rodman in Conversations with Artists as a desire to express “basic human emotions—tragedy, ecstasy, doom.”

Black on Grey looks a bit like what I imagine it would be like to survey the infinity of space from the emptiness of the moon’s surface. These paintings towards the end of the artist’s life, made before he committed suicide by barbiturate and razor blade in his East 69th Street studio, took on an increasingly melancholic hue. Perhaps Rothko experienced what his friend the poet Frank O’Hara had written about as the “darkness I inhabit in the midst of sterile millions.” Rothko confirmed that his black paintings were, as with Goya, fundamentally about death.

In a coffee-table book, Rothko’s work can look like something from a paint sample catalog. The page does it no justice compared to standing before the images themselves, before what Rothko described as the phenomenon of how “people break down and cry when confronted with my pictures.” For Rothko, such reactions were a type of communion; these spectators were “having the same religious experience I had when I painted them.” When you stand before Black on Grey, when it’s taken out from the sterile confines of the art history book or the reductions of digital reproduction, you’re confronted with a blackness that dominates your vision, as seeing with your eyes closed, as experiencing death, as standing in the empty Holy of Holies and seeing God.

With a giant field of black, the most elemental abstraction that could be imagined, this Jewish mystic most fully practiced the stricture to not make any graven image. He paradoxically arrived at the most accurate portrayal of God ever committed to paint. For all of their oppositions, both Infinity and Nothing become identical, being the same shade of deep and beautiful black, so that any differences between them are rendered moot.

Image credit: Unsplash/David Jorre.