Ten Ways to Live Forever

1. Before ISIS toppled the minaret of the Great Mosque of al-Nuri in Mosul, or threaded that city’s tombs of Daniel and Jonah with incendiaries, Utnapishtim was somewhere in the desert. He was there before the Americans with their hubristic occupation, in some cave while soldiers in Kevlar patrolled the banks of the Tigris, M1 tanks of the Third Infantry rolling toward Baghdad and Apache helicopters of the 101st Airborne cutting across the skies of Karbala. Utnapishtim survived Saddam’s reign, with his mustard gas, torture chambers, and the invasion of Kuwait; he’d seen men burnt alive on Highway 80 by the Americans; he’d endured the brutal war with Iran, when tanks got stuck in the mud of Dezful and Khorramshahr was turned into a city of blood; he was there when the Ba’athists overthrew the Hashemite monarchy.

Utnapishtim lived through the British Mandate of Mesopotamia, when Sir Percy Cox drank G&Ts at the officers’ club; he’d lived when Iraq was a backwater of the Ottomans, and he saw Mamluks, Jalayirids, and Mongols steer their horses across the desert; he witnessed Genghis Khan in Khwarezm, more centaur than man. He snuck unseen into the Baghdad of the Abbasids, city of gardens and astrolabes, where he discussed Hadith with the humane Mu’tazila and parsed Aristotle with Ibn Sina. Prior to the Islamic Golden Age, Utnapishtim was in Ctesiphon when Yazdegerd III fled as the Arabs marched into the Sasanian Empire, the Zoroastrian mages unable to prevent the course of history (true of all of us). He was there when Trajan marched columns of iron-armored Roman legionaries into Parthia, and when Seleucus, Alexander the Great’s general, established Seleucia.

He was witness when Cyrus the Great freed the Jews of Babylon, and when Hammurabi’s scribes chiseled the law into stone. Utnapishtim had endured Chaldeans, Babylonians, Assyrians, Akkadians. Our progenitor, the oldest of men, born in Sumer; as old as cuneiform pressed into wet clay, as old as sunbaked cities and the farming of wheat on the Euphrates’s banks, as old as the words themselves. Enki of the stars and An of the sky, Enlil of the wind and Ninhursag of the mountains molded Sumer, and by the banks of Eden birthed humans like Utnapishtim. Our only refugee from that before-time, the only person to survive when the fickle gods conspired to destroy the world by flood shortly after having created it.

He dwelled when Iraq was Uruk, before civilization’s keystone was set, when the firmament was new. Utnapishtim survived leaders and conquerors, presidents and dictators. Breathing before Abu Bakr al-Baghdadi, Muqtada al-Sadr, George W. Bush, Saddam Hussein, and the Ayatollah Khomeini; talking before King George V and Kaiser Wilhelm II; walking before Mehmed II, the Abbasid caliphs, and Genghis Khan; older than the Prophet Muhammad; older even than Yazdegerd III, Alexander the Great, Cyrus the Great, Darius II, Sargon of Akkad, and Ashurbanipal. He witnessed the inundation of ziggurats, the collapse of towers, the immolation of temples. For the thousands of reigns he lived through, the kings innumerable and emperors forgotten, only one had ever sought his counsel: a king two-thirds divine who would nonetheless die like the rest of us, a fearsome ruler named Gilgamesh.

2. The tale of a righteous man visited by a god who warned him of rising waters—who in response builds an ark, venturing forth when a dove that he’s released confirms that dry land has reemerged—may strike you as a story that you’ve heard before. Norman Cohn parses the purpose of these stories, writing in Noah’s Flood: The Genesis Story in Western Thought that “large areas of what used to be Mesopotamia…were frequently devastated by flood. When torrential rain combined with the melting of the snows… the Tigris and Euphrates could burst their banks.” Cohn explains that in “ancient times this phenomenon gave rise to a powerful tradition: it was believed that there had once been a flood so overwhelming that nothing was ever the same again.” But if Genesis focuses on sin and punishment, the anonymously written Epic of Gilgamesh has no moral, save for a brief on why we must die at all (or at least why most of us must).

Unlike Noah, who even with the antediluvian extremes of 950 years did ultimately die, Utnapishtim was gifted (or cursed) by the gods with immortality. There are other differences from the Bible’s account, for Genesis records nothing of having to fight scorpion-monsters to reach Utnapishtim. Gilgamesh was stricken over the death of his best (and arguably only) friend, Enkidu, the wild man domesticated by the priestess Shamhat with sex and beer (as so many people are). For the ruler of Uruk, Utnapishtim promised something that can’t be purchased in gold: the possibility of Enkidu’s resurrection and Gilgamesh’s immortality. In Stephen Mitchell’s reimagining Gilgamesh: A New English Version, Utnapishtim queries the ruler: “who will assemble/the gods for your sake? Who will convince them/to grant you the eternal life that you seek?”

Utnapishtim challenges Gilgamesh, the man who defeated the mighty ogre Humbaba: immortality is his if he can simply stay awake indefinitely. Despite quasi-divinity, powers temporal and physical, authority and prestige, Gilgamesh can’t defeat slumber. Utnapishtim mocks the king: “Look at this fellow! He wanted to live / forever, but the very moment he sat down, / sleep swirled over him, like a fog.” All of human weakness and desires—our need to eat, our need to shit and piss, our need to fuck—signal that our lot is not that of Utnapishtim or of the gods who created him. Even if you can defeat Humbaba, the Epic of Gilgamesh reminds us, sooner or later you’ll nod off.

Finally, Gilgamesh is informed that the only means of living forever is to acquire a magic plant growing at the bottom of all rivers’ sources, which the ruler promptly finds, only to have the wily serpent (at the start of an auspicious career) snatch the plant away from him. Enkidu’s death has left the king raw and lonely, but Utnapishtim’s example is illusory and dangerous, for it “postpones Gilgamesh’s necessary acceptance until a time when he is more ready for it,” as Mitchell writes. Gilgamesh realizes that immortality is not literal; one does not live forever at the world’s eastern edge, but rather in deeds, memories, and words. We’re told by less mature voices to rage against the dying of the light, but the earliest story has Gilgamesh confront immortality’s mirage, understanding how “now that I stand / Before you, now that I see who you are, / I can’t fight.”

Ironically, Utnapishtim’s story was forgotten for millennia (if filtered through other myths). “Though it is one of the earliest explorations of these perennial themes,” writes David Damrosch in The Buried Book: The Loss and Rediscovery of the Great Epic of Gilgamesh, “this haunting poem isn’t a timeless classic.” Hidden just as surely as Utnapishtim in his orchard, the influence of The Epic of Gilgamesh is subliminal in our cultural memory. Preserved on a few broken kiln-burnt tablets strewn about the floor of the Assyrian king Ashurbanipal’s library, The Epic of Gilgamesh wasn’t rediscovered until the 19th century, by British archeologists. Of the epic, Utnapishtim’s discourse on eternity occupies only a few lines on the 11th tablet.

Literary historian Michael Schmidt in Gilgamesh: The Life of a Poem describes the epic as constituting “the first road novel, the first trip to hell, the first Deluge.” So much has come after what that nameless scribe wrote; it predates Homer and Virgil, Dante and John Milton, William Shakespeare and Jane Austen, Anne Bradstreet and Emily Dickinson. Even though ignorant of Gilgamesh, they worked and aspired to the same timelessness, for as Schmidt writes, it “prefigures almost every literary tone and trope and suggests all genres, from dramatic to epic, from lament to lyrics and chronicle, that have followed it.”

The Epic of Gilgamesh reminds us that there have been many floods, many apocalypses, many deaths, and virtually nobody has ever come out the other side alive. To live forever may be a myth, yet for our lack of eternity, even after all these millennia, we are still “Wandering, always eastward, in search / of Utnapishtim, whom the gods made immortal.” 

3. Sex evolved before death. Arguably the former was a prerequisite for the latter. Sexual reproduction, genetic material exchange resulting in a new individual, was first practiced among simple prokaryotes—unicellular organisms lacking membrane-bound nuclei—about two billion years ago. Birds do it, bees do it, educated fleas do it, and apparently even prokaryotes do it. Such hobbies introduce beneficial genetic variations that the date-night loneliness of asexual reproduction simply doesn’t allow for. When celibate organisms reproduce through binary fission or mitosis, they’re cloning themselves—the individual is the species. If you squish an asexual prokaryote, there are millions more just like it—death is meaningless; it is fundamentally immortal. But once sex exists, the loss of any one thing can be considered the irretrievable death of something completely unique, no matter how simple it may be. As the old joke at my alma mater has it, “Sex kills. If you want to live forever, go to Carnegie Mellon.”

Immunologist William R. Clark explains in Sex and the Origins of Death that “Obligatory death as a result of senescence—natural aging—may not have come into existence for more than a billion years after life…programmed death seems to have arisen at about the same time that cells began experimenting [with] sex…It may be the ultimate loss of innocence.” From the first orgasm came the first death gasp—at least proverbially. Human culture has subsequently been one long reaction to that reality, debating whether it was a fall or a felix culpa.

That sex precedes death isn’t just a biological fact; it has the gloss of theological truth about it as well. Such is the chronology as implied by Genesis; though there was debate as to whether Adam and Eve did have sex in Eden, there seemed little doubt that they could have (though St. Augustine said it could only be facilitated through pure rationality, and not fallen passion). That all changes once Utnapishtim’s wily serpent makes a reappearance, and God tells Eve that He will “greatly multiply thy sorrow and thy conception; in sorrow thou shalt bring forth children.” Only three verses later, and God tells Adam that “dust thou art, and unto dust shalt thou return.” Sent beyond Eden’s walls to live out their finite days in the desert, their only consolations are sex and death.

Eros and Thanatos endure in the human psyche. Since the 16th century, the French have referred to orgasm as la petite mort, the “little death.” When that phrase first appeared, Europe was in the midst of syphilitic panic. A disease of replacement: of silver noses pressed into the viscous putty of a rancid face, and of the tics and mutterings of those who’ve gone insane. Anthropologist Jared Diamond writes in Guns, Germs, and Steel: The Fates of Human Societies that when syphilis “was first definitely recorded…in 1495, its pustules often covered the body from the head to the knees, caused flesh to fall from people’s faces, and led to death within a few months.” If scolds were looking for the connection between sex and death, syphilis was a ready-made villain. Always a moralizing faith, Christianity was made a bit more so with the arrival of syphilis; in 15th-century Florence the fanatical Dominican Girolamo Savonarola taught that it was God’s punishment for decadent humanism; a generation later the Protestant Martin Luther would concur with his Catholic forebear.

In Naples they called it the “French disease,” and in Paris it was an Italian one, but epidemiologists have reckoned it American, noting its arrival shortly after Christopher Columbus’s return from the Caribbean. Syphilis was an export alongside potatoes and tomatoes; an unwitting revenge for the smallpox introduced into the Western Hemisphere. If modernity signals its own fall, then syphilis was perhaps an indiscriminate punishment, for as historian Roy Porter writes in The Greatest Benefit to Mankind: A Medical History of Humanity, syphilis “should be regarded as typical of the new plagues of an age of conquest and turbulence, one spread by international warfare, rising population density…[and] the migrations of soldiers and traders.” Sacrificed immortality was the price that living creatures paid for the possibility of connection, for if eternity was once a biological process, then its opposite was as well.

4. Ponce de Leon looked for immortality in Florida, it’s true. Somewhere near where tourists stroll eroding Miami Beach, soccer moms pick their children up from Broward County strip malls, or hearty adventurers visit Pensacola gator-parks, the conquistador obsessively searched for the Fountain of Youth. According to (apocryphal) legend, de Leon was fixated on stories told by Native Americans of a mythic spring whose curative properties would restore men to youth—indefinitely. Immortality by water fountain, if you will. “Ponce de Leon went down in history as a wishful graybeard seeking eternal youth, like so many Floridians today,” quips Tony Horwitz in A Voyage Long and Strange: On the Trail of Vikings, Conquistadors, Lost Colonists, and Other Adventures in Early America.

The Arawak and Taino spoke of a spring named “Bimini,” located everywhere from the Bahamas to the Yucatan. De Leon looked for it in the future Sunshine State, though you’ll note that he did not find it. If you’ve ever visited the Old Town of San Juan, Puerto Rico, with its charming, crooked stone streets that meander by colonial buildings painted in pinks and blues, you’ll find that, far from achieving immortality, de Leon is buried inside the white-walled Cathedral of San Juan Bautista. In 1521, somewhere between Florida’s hidden Fountain of Youth and immortality, de Leon found himself in the path of a manchineel-poisoned arrow loosed by a Calusa warrior.

When de Leon envisioned a bubbling creek that could restore him to lost youth (probably better to have just enjoyed the original more), by what mechanism did he see such a thing as working? Chemical or alchemical, natural or supernatural? Horwitz writes that a “Spanish historian later claimed that Ponce de Leon [searched]…as a cure for his impotence,” which gives us an emotional register for the conquistador’s obsession, if not the pharmaceutical specifics. Since no such fountain actually exists, the question of whether it’s magic or science is easier to answer—it’s neither. Or, perhaps, better to think of it as a magical belief about science; the idea that the fountain’s waters have mineralogical or medical properties is fundamentally just a mask for our supernatural inclinations, the fountain not so different from Gilgamesh’s restorative plant.

As early as the fifth century before the Common Era, Herodotus declaimed in The Histories that the mythic Macrobians living on the Horn of Africa could live as long as 120 years with the assistance of a particularly pure spring. He writes of a “fountain, wherein when they had washed, they found their flesh all glossy and sleek, as if they had bathed in oil—and a scent came from the spring like that of violets…their constant use of the water from it [is] what makes them so long-lived” (gerontologists agree that 120 years seems to be the upper-limit natural expiration date for humans, if free of disease and accident).

Alexander the Great and the imaginary Christian king Prester John, whose realm was supposedly somewhere deep in either pagan Asia or Africa, are associated with the myth. The 14th-century faux-exploration narrative The Travels of Sir John Mandeville, a postmodern picaresque or autofiction written before postmodernism and autofiction were things, claims that a similar spring exists in India, “a beautiful well, whose water has a sweet taste and smell, as if of different kinds of spices.” Some of the author’s accounts of Egypt and China conform to what we know about those places during the Middle Ages, and yet Mandeville’s claims about whole tribes of Cynocephali (they’re dog-headed) or of Epiphagi (people with heads in their chests) strain credulity. How are we to trust such a source about the Fountain of Youth?

What these examples should demonstrate is that de Leon had a ready-made script. Not a dissimilar process to how Spanish colonists saw the ancient Greek legend of the Amazons in South America, or the Portuguese fable of the Seven Cities of Cibola in the red-baked American southwest. Theirs was never a process of discovery, but of the endless use and reuse of their own dusty legends. Who knows what Bimini really was? The conquistadors had their imaginings from Herodotus and Mandeville, and they were going to impose such a concept onto America. Immortality, regardless of whether it’s to be found in India, the Horn of Africa, or south Florida, is an obsession. Yet what the obsessive obsesses over tells us more about the obsessed than the obsessee.

5. Virginia Woolf didn’t wish to acquire immortality, far from it. A disservice to Woolf, not to mention the millions of those who suffer from depression, to reduce her 1941 suicide to allegory or symbol. When, weighted down with rocks, she walked into the Ouse near her East Sussex home, Woolf was not providing us with a gloss on life and death. “I am doing what seems the best thing to do,” Woolf writes in the note left for her husband; “I don’t think two people could have been happier till this terrible disease came.” That, in its straightforwardness, says the most important thing about Woolf’s suicide—that it was the result of her disease. Depression is not allegory, it is a disease, and oftentimes it kills people. Woolf did, however, supply her thoughts on immortality some 13 years earlier, in one of the greatest meditations ever supplied on the subject, her novel Orlando: A Biography.

Woolf depicts an English courtier born during the reign of the Tudors, who, despite the limitations of having a finite body, is somehow able to will himself (and then herself) into immortality. Orlando may be a subject of Queen Elizabeth I, but by the end of the novel she’s able to fly over her manor house in an airplane. In the interim, the character experiences the Frost Faire of 1608 (when the Thames froze over and merchants plied candied apples and roasted walnuts on its surface), an affair with a beautiful Russian noblewoman, placement in an English embassy in Constantinople, adoption by a group of Roma, and marriage to an eccentric gender-nonconforming sea captain with the unimaginably great name of Marmaduke Bonthrop Shelmerdine. And then, around the age of 30, Orlando transforms from a man into a woman as she sleeps one night.

No magic plants or springs; rather a sense that Orlando’s personality is so overabundant that it can’t be constrained through either sexuality or time. “Orlando had become a woman,” Woolf writes simply, “there’s no denying it.” A masterpiece of both temporal and gender ambiguity, Orlando is a novel in which the author doesn’t desire immortality for herself but imagines it for her character; it is what her biographer Hermione Lee describes in Virginia Woolf as a “masterpiece of playful subterfuge.” Unlike Gilgamesh with his overweening bloodlust, or de Leon with his immature obsession, Woolf envisions immortality as a radical, subversive, creative state; as Lee puts it, a “magnificent, surrealist erection.” For Woolf, immortality is better understood as an aesthetic act, living one’s life so fully, with such pure, unmitigated, wondrous agency that the contours of normal years and decades simply must expand to fit our enormity. With relativistic insight Woolf observes that “An hour, once it lodges in the queer element of the human spirit, may be stretched to fifty or a hundred times its clock length; on the other hand, an hour may be accurately represented on the timepiece of the mind by one second,” a phenomenon that she cheekily claims “is less known than it should be and deserves fuller investigation.”

Orlando’s great negative capability is that Woolf describes depression without the novel losing sight of a certain wondrous enchantment. She writes that “At the age of thirty…this young nobleman had not only had every experience that life has to offer, but had seen the worthlessness of them all. Love and ambition, women and poets were all equally vain.” Such emptiness and listlessness—the sheer fact of being so tired—is the medium of depression. But the narrative itself demonstrates the illusoriness of such emotions, even if in the moment they feel to us inviolate. Orlando thinks that they have experienced everything, but have they been to Constantinople? Have they flown over Sussex in a plane? Not yet—and therein lies the rub. Life has a charged abundance, even if brain chemistry and circumstance sometimes deny us that.

Orlando was a roman à clef based on the great love of Woolf’s life—Vita Sackville-West. The character shares Sackville-West’s androgynous beauty, her poetic brilliance, and her aristocratic bearing. Most of all, Orlando and Sackville-West are united in having lived their lives with such jouissance, with such unbridled life, that death itself seems to pause indefinitely before taking them. An existence where we can observe in the Thames, “frozen to a depth of some twenty fathoms, a wrecked wherry boat… lying on the bed of the river where it had sunk last autumn, overladen with apples,” the frozen corpse of the saleswoman visible in her blue-lipped magnificence at the bottom. What a strange, terrible, and beautiful thing this life is, that if we were to fully inhabit every single blessed second of it, we’d be as eternal as it is ever possible to be, within the very universe of a moment. How fortunate we are.

6. On the road from Santiago de Compostela in 1378, the French alchemist Nicholas Flamel and his wife Perenelle met eternity. According to almost certainly fabricated accounts written in the 17th century, the wealthy Parisian bookseller’s study of magic helped him to (among other things) derive the philosopher’s stone, and thus generate the so-called “Elixir of Life,” which granted him immortality. Flamel had journeyed to Spain, that liminal place between east and west, Europe and Africa, Christian, Jewish, and Islamic, under the assumption that some mage could interpret a manuscript he purchased in Paris. Flamel wasn’t so fortunate in finding assistance while actually in Spain, but on the road home a Jewish converso recognized the occult text for what it was—an original copy of the powerful grimoire The Book of Abramelin.

As with all such guides, The Book of Abramelin makes big promises. Within there are “the actual rules to acquire this Divine and Sacred Magic…therein find certain examples and other matters which be none the less useful and profitable unto thee.” It parses the intricacies of summoning your guardian angel, binding demons, and walking underwater (which, impressive as it is, doesn’t really match the first two). There are a lot of magic squares scattered throughout. And, of course, there is the recipe for the philosopher’s stone. As with the stories about Flamel himself, The Book of Abramelin seems to be another 17th-century invention. The author is supposedly the German Kabbalist Abraham of Worms, who traveled to Egypt to confer with the reputed mystical master Abramelin himself. “And having written this with mine own hand,” writes the author (since we’re unsure of whose hand penned that line), “I have placed it within this casket, and locked it up, as a most precious treasure; in order that when thou hast arrived at a proper age thou mayest be able to admire, to consider, and to enjoy the marvels.”

No version of the text has been found that predates 1608, and the earliest copies are in German rather than Hebrew; all of which seems to indicate that, just as with Flamel, the reputation of The Book of Abramelin is a fantasy borrowing authority from medieval exoticism. A fashionable Hebraism developed during the Italian Renaissance and quickly spread throughout Europe, so that references to the Kabbalah could impart a degree of authenticity for Christian occultists. Gershom Scholem writes in Alchemy and Kabbalah that the “name of this arcane discipline became a popular catchword in Renaissance and Baroque theosophical and occult circles, having been declared and revered as the guardian of the oldest and highest mystical wisdom of mankind by its first Christian mediators.” All the greatest stories about Kabbalah may be set in the Middle Ages, but for the Christian occultists who appropriated it, the subject was very much a Renaissance affair.

For alchemists and occultists like Paracelsus, Johann Reuchlin, or John Dee, Flamel was an instructive example. Knowledge was supposed to be the road to eternity, as surely as a Parisian scribe could return from Compostela, and perhaps the bookseller was still wandering somewhere, like the converso who gave him the secret to never dying. Could Flamel be glimpsed in the court of the occult emperor Rudolf II in red-roofed Prague, discoursing on astronomy with Johannes Kepler and Kabbalah with Rabbi Judah Loew? Would he be found on those curving stones of dark Cambridge with Thomas Vaughan, or among the sun-dappled courtyards of Florence with Giordano Bruno?

In reality, Flamel was moldering underneath the nave of the Church of Saint-Jacques-de-la-Boucherie in the fourth arrondissement. His tombstone now sits in the Musée de Cluny; the year of Flamel’s expiration was 1418. By all actual accounts, he was a loving husband and a successful merchant. Flamel’s fate, in all of its beauty, was the same as everybody’s. The psychoanalyst Marie-Louise von Franz gives a charitable reading of the Renaissance theorists of immortality, explaining in Alchemy: An Introduction to the Symbolism and the Psychology that the desire for this knowledge “was actually the search for an incorruptible essence in man which would survive death, an essential part of the human being which could be preserved.” An accurate definition for poetry.

7. Enoch’s story is recounted across only four lines in Genesis. The King James Version of the Bible sets the cryptic tale of a man who ascended bodily to heaven, presumably having never died and still living immortally somewhere in the astral realm, in just 53 words. Father of Methuselah, who was himself so remarkably long-lived that his name has long been conflated with extreme seniority, Enoch simply never died. We’re told at Genesis 5:24 that “Enoch walked with God: and he was not; for God took him.” Such is the entire explanation of what happened to this man of the seventh generation. What an odd bit of poetry this is. For. God. Took. Him.

Even if the Bible is tight-lipped about Enoch, the copious fan-fic about him (which scholars call “apocrypha”) lacked a similar reticence. From the mystics of antiquity to the occultists of today, Enoch achieved not just immortality but actual apotheosis, seated next to a God who liked him so much that he transformed the mortal into a “lesser Yahweh.” Such is the description of his career change from a pseudepigraphical rabbinic text called 3 Enoch, which is dated to the fifth century of the Common Era. Scholem provides a gloss on this unusual book in his Major Trends in Jewish Mysticism, writing that “[his] flesh was turned to flame, his veins to fire, his eye-lashes to flashes of lightning, his eye-balls to flaming torches, and whom God has placed on a throne next to the throne of glory, received after this heavenly transformation the name Metatron.” There is a venerable occult tradition that holds that Enoch became immortal, was elevated above even the archangels, became the very voice of the Lord, and was given a name that sounds like that of a Transformer.

Enochian literature can be traced back to three apocryphal texts from the first centuries of the Common Era that all elaborated on the terse passage from Genesis. 3 Enoch (also amazingly called The Revelation of Metatron) is joined by the Book of Enoch, preserved in Ge’ez and still held canonical by the Orthodox Tewahedo Church in Ethiopia, and the Second Book of Enoch, which only survives in Old Bulgarian (I’m not making that up). The last book’s alternate title is actually even better than The Revelation of Metatron: it is often referred to as The Book of Secrets. In translator Willis Barnstone’s version of The Book of Secrets, as included in his incredible anthology The Other Bible, Enoch speaks in the first person, telling us that “I know all things and have written them into books concerning the heavens and their end, their plentitude, their armies, and their marching. I have measured and described the stars, their great and countless multitude. What man has seen their revolutions and entrances?”

Metatron was the amanuensis of God’s thoughts, the librarian of reality who noted all that had, would, or could be done. A scribe as surely as Flamel was—Metatron was a writer. Enoch was the first figure in scripture to ascend to heaven, though he was not the last. Midrash actually records eight people as having achieved immortality this way, including the prophet Elijah, who is taken up entirely into a whirlwind; Serah bat Asher, who is blessed by her grandfather Jacob with “May you live forever and never die”; and Ebed-Melech the Ethiopian, who saved the prophet Jeremiah’s life during the Siege of Jerusalem. Catholics believe that the Virgin Mary ascended bodily, though after her death on Earth (a minority claim she was taken while still alive). Christianity more generally teaches that Christ rose to heaven, though he also died first, of course; while Muslims teach that both Muhammad and Jesus ascended.

Immortality, these accounts remind us, is a miracle. Perhaps no more so than with Enoch, for those other examples concern the ascension of prophets and the messiah, but the lowly man of the seventh generation was just some guy. A quiet beauty to the account, for why did Enoch walk with God? What about Enoch was so agreeable to the Lord that He would take him? What cracked beauty is there in a human gifted the ability to see the universe in its resplendence, so that as concerns the stars, Enoch speaks in a voice of awe from the Book of Secrets that “Not even the angels see their number, yet I have recorded all their names.”

8. While writing theater reviews for the Dublin Evening Mail, a 28-year-old graduate of Trinity College Dublin named Abraham Stoker would become the unlikely author of a gushing fan letter sent on Valentine’s Day 1876 to an American poet with an address in the distinctly unglamorous locale of Camden, New Jersey. That wasn’t Stoker’s first attempt at writing to Walt Whitman; he’d penned an effusive, borderline-erotic missive some four years earlier but kept the epistle in his desk out of embarrassment, before finally sending the original with a new note of introduction.

“Do not think me cheeky for writing this,” Stoker, who now went by “Bram,” wrote in the new letter, but “I believe you will like it,” he said regarding his original message. Whitman is a “man who can write, as you have written, the most candid words that ever fell from the lips of a mortal man.” For Stoker, only a child when Leaves of Grass was first printed (and as of then completely unknown in Britain or Ireland), Whitman had “shaken off the shackles and your wings are free.” With a tragic pathos still clear more than a century later, Stoker confesses (which has afforded no shortage of literary gossip) that “I have the shackles on my shoulders still—but I have no wings.”

As obsequious as Renfield, Stoker tells Whitman that “You are a true man, and I would like to be one myself, and so I would be towards you as a brother and as a pupil to his master.” Perhaps he was, as there is something almost vampiric in Whitman’s 1891 revision of his poem “Trickle Drops,” done a year before his death and six before his protégé would publish his most famous novel. “Trickle drops! my blue veins leaving!…drip bleeding drops, / From wounds made to free you whence you were prison’d, / From my face, from my forehead and lips, / my breast…Stain every page, stain every song I sing, every word I say, bloody drops,” Whitman enthuses. Stoker’s titular character in Dracula concurs with the American bard: “The blood is the life!”

Strange to think of the consummate rugged individualist with his broad shoulders and his Old Testament beard as influencing Stoker, but as an unconventional bohemian, Whitman may have shared more with Dracula than has been supposed. Biographer Barbara Belford notes in Bram Stoker and the Man Who Was Dracula that “the vampire at times resembles Whitman. Each has long white hair, a heavy moustache, great height and strength, and a leonine bearing.” Perhaps less superficially, “Whitman’s poetry celebrates the voluptuousness of death and the deathlike quality of love.” Whitman, with the gleam of the vampire in his eyes, promises in his preface to Leaves of Grass that the “greatest poet…drags the dead out of their coffins and stands them again on their feet.”

Leaves of Grass is a work that enthuses about immortality, albeit more in the transcendentalist sense than in the vampiric one. “The smallest sprout shows there is really no death,” Whitman writes, “And if ever there was it led forward life, and does not wait at the end to arrest it…All goes onward and outward, nothing collapses, / And to die is different from what anyone supposed.” Whitman fully expected a metaphysical immortality whereby his very atoms mingle into the streams and stones, the rocks and the rambles. Admittedly a different type of immortality than that surmised by Stoker, yet the Irishman borrowed from Whitman the poet’s charged, fully realized, erotic, bohemian persona. Stoker called Whitman the “quintessential male,” and it’s hard not to see some of that projected onto Dracula.

The immediate historical influence for Dracula was the 15th-century Wallachian prince Vlad Tepes, more popularly known as the “Impaler” after his favored pastime. Eros and Thanatos again, a bit of sex and death in that nickname. Radu Florescu and Raymond T. McNally note in their classic In Search of Dracula that the “ruler notorious for mass impalements of his enemies…was in fact called Dracula in the fifteenth century, and we found that he even signed his name that way.” From Whitman, Stoker took the transcendent nature of immortality, and from Vlad the blessed violence, bound together in the transgressive, bohemian personality of the aesthete. Literary scholars Joanna Levin and Edward Whitley write in Whitman Among the Bohemians that from the “bohemians to contemporary hipsters, Whitman still commands center stage, providing an ever-magnetic focal point for countercultural self-fashionings,” something that any goth can tell you is true of Dracula as well. As a reader, Stoker is able to comprehend that Whitman’s celebration of immortality must by necessity also have its drawbacks, that the vampiric can’t help but pulse through any conception of life beyond the grave.

With the smallest sprout in mind, Stoker writes that it’s a “strange world, a sad world, a world full of miseries, and woes, and troubles.” Yet we can “all dance to the tune…Bleeding hearts, and dry bones of the churchyard, and tears that burn as they fall—all dance together to the music.” Immortality kindled in the space of human connection, our lives able to exist indefinitely through others. Dracula does this literally, sucking upon the blood of innocents, but we ideally all do it when we ingest the words of others, and respond in kind. Whitman wrote back to Stoker: “I am up and dress’d, and get out every day a little, live here quite lonesome, but hearty, and good spirits.” He concluded the letter with, “Write to me again.”

9. Many lines are on the CV of the biomedical gerontologist Aubrey de Grey: graduate of Trinity College, Cambridge, with a Ph.D. awarded for his dissertation The Mitochondrial Free Radical Theory of Aging; fellow of the Gerontological Society of America, the Institute for Ethics and Emerging Technologies, and the American Aging Association; adjunct professor at the Moscow Institute of Physics and Technology; and, most famously, chief science officer of the California-based SENS Research Foundation (the acronym stands for Strategies for Engineered Negligible Senescence). Added to that, under “Skills,” could be “Still Alive.” Don’t knock it as an entry; the vast majority of people who have ever lived can no longer claim the same. De Grey, whose name is almost ridiculously on the nose, would argue that “Still Alive” could be easily translated into “(Effectively) Immortal,” for the researcher claims that death is a terminal illness that will one day be preventable, and that any dismissiveness toward that claim is a case of sublimated religious nostalgia.

He looks the part of an immortal, more prophet than scientist. With a long, tangled, greying auburn beard that is positively druidic, de Grey appears as if he were Merlin or Galahad, some Arthurian immortal. If anything, that epic beard calls to mind those who’ve joined us already—the good, grey-bearded, ruddy-complexioned poet Whitman; de Leon with his face burnt from the Florida sun and unshorn hair poking out from his metal cap; Enoch’s cascading white mane (or so one imagines); and Utnapishtim’s curled black beard hanging in plaits from his gaunt, severe face.

De Grey has an advantage over all of these men: he is still alive (and indeed exists in the first place). That may, however, be his ultimate disadvantage, for unreality has a type of immortality that biology can’t approach. This is of no concern to de Grey, who, writing alongside Michael Rae in Ending Aging: The Rejuvenation Breakthroughs that Could Reverse Human Aging in Our Lifetime, argues that his field was long “inhibited by the deeply ingrained belief that aging was ‘natural’ and ‘inevitable,’” with biogerontologists having “set themselves apart from the rest of the biomedical community by allowing themselves to be overawed by the complexity of the phenomenon that they were observing.” Not without some justification, de Grey argues that aging and death are biological problems and thus have biological solutions.

Utnapishtim had his magic plant and de Leon his spring of rejuvenation, but for de Grey immortality is an issue of his “own idea for eliminating intercellular garbage like lipofuscin [combined with]…making mitochondrial mutations harmless…for addressing glycation, amyloid accumulation, cell loss, senescent cells and cancer.” When it comes to the hodgepodge of techno-utopians who fall under the broad term of “transhumanism,” de Grey is positively a traditionalist, in that he’s still focused on these meat bags filled with blood, piss, shit, and phlegm. More radical transhumanists have gone digital, arguing that consciousness could be uploaded to computers, the eternal soul an issue of making sure that your files are backed up.

Engineer Ray Kurzweil is one such evangelist for the coming of robot-Jesus, when Artificial Intelligence will be able to assist in the uploading of your mind, and the resurrection of those who’ve already passed before us (through purely material, scientific, technological means, of course). He writes in The Singularity Is Near: When Humans Transcend Biology that when that eschaton arrives (always in just a few decades), it will “allow us to transcend these limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.” Apparently, such a project is easier than halting climate change, or at least the hyper-libertarian funders of such transhumanist schemes, from Elon Musk to Peter Thiel, would have you believe. The desire for immortality is a deeply human one, but with the irony that its achievement would serve to eliminate the human entirely. Ask not for whom the computer chimes; simply upload your soul to the cloud.

10. About two weeks ago from the time of my writing, Speaker of the House Nancy Pelosi announced that the legislature was “moving forward with an official impeachment inquiry” of Donald J. Trump. Pelosi’s announcement was broadcast on all major networks, on PBS and MSNBC, CNN and (even) FOX. In a vacuum, electromagnetic radiation travels at 186,000 miles per second; Albert Einstein’s theory of special relativity tells us that nothing may go faster. That means that this happy bit of news can be heard as far as 225,327,187,473.16 miles away, and counting, though unfortunately what’s in that space is mostly dust and rock. The closest star system to us is Alpha Centauri, which is a positively minuscule 4.37 light years away, meaning that for any lucky extraterrestrials there Barack Obama is still president.
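The arithmetic here is simple enough to sketch. A minimal Python calculation, using the rounded figure for c given above (the helper names are illustrative, not from any source), reproduces both the two-week distance and the Alpha Centauri lookback:

```python
# Sketch of the arithmetic above: how far a broadcast has traveled,
# and how old our news is when it finally reaches a given star.

C_MILES_PER_SEC = 186_000   # speed of light, rounded as in the text
SECONDS_PER_DAY = 86_400

def broadcast_distance_miles(days_since_broadcast: float) -> float:
    """Distance a radio signal has traveled after the given number of days."""
    return C_MILES_PER_SEC * SECONDS_PER_DAY * days_since_broadcast

def news_year_at_star(current_year: float, distance_light_years: float) -> float:
    """The year whose broadcasts are only now arriving at a star that far away."""
    return current_year - distance_light_years

print(f"{broadcast_distance_miles(14):.4g} miles")  # ~2.25e+11, matching the figure above
print(f"{news_year_at_star(2019.75, 4.37):.1f}")    # ~2015.4: Obama is still president
```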

In EZ Aquarii, they just heard President Obama’s acceptance address in Grant Park; any planets near Procyon will have just been informed of the 2008 financial collapse, and at LP 944-020 they’re learning of the invasion of Iraq. At Mu Arae they’ve discovered that humans made the puny jump to the Moon (as well as listening to Abbey Road for the first time), HR 4864 just heard Walter Cronkite deliver the sad news about President John F. Kennedy’s assassination, and Zeta Virginis is now aware that the Second World War is over. In just a little less than a decade, assuming that such weak electromagnetic waves haven’t been absorbed by the dust and rock that reign supreme in interstellar space, Guglielmo Marconi’s first transatlantic radio broadcast of the letter “s” repeatedly tapped out in Morse code will be arriving at K2-18b, a massive “super-earth” exoplanet some 120 light years away.

Earth is surrounded by an electromagnetic halo, our missives in radio and light that grow ever weaker with distance, but which send our thoughts ever further into interstellar space with every passing year. Music, entertainment, news, communication, all of it sent out like so many dandelion spores into the reaches of the black cosmos. The continual thrum of that pulsating meaning—what Whitman could have described as “Out of the cradle endlessly rocking, / Out of the mocking-bird’s throat, the musical shuttle”—a record of our having been here that can never be erased, though it’s ambiguous if there is anyone out there to listen. “O rising stars!” Whitman wrote, “Perhaps… [I] will rise with some of you,” an offering made at some frequency between 88 and 108 MHz. There is an immortality, disembodied and ethereal, that does turn out to be in the heavens—just in a form that may have been difficult for Enoch to imagine.

A harder chunk of our finality exists out there as well: the Golden Record, included on both of the Voyager 1 and Voyager 2 spacecraft launched by NASA, which, having passed into interstellar space beyond our solar system in 2012 and 2018 respectively, are the furthest things that have ever been touched by human hands. Conceived of by the astrophysicist Carl Sagan, the Golden Record is a phonographic LP encoded with both images and a little under six and a half hours of sounds, meant to express our sheer enormity. For any extraterrestrials that should happen to find the record—Voyager 1 is about 40,000 years out from Gliese 445—Sagan and his committee’s record may serve as the only tangible example of our eternity, our only vehicle for immortality. In being able to select the contents of such a canon, Sagan is arguably the most influential human to ever live.

Any future listeners will be able to hear pianist Glenn Gould’s transcendent interpretation of Johann Sebastian Bach’s mathematically perfect Well-Tempered Clavier, the mournful cry of blues singer Blind Willie Johnson’s “Dark Was the Night, Cold Was the Ground,” the Bavarian State Orchestra playing Wolfgang Amadeus Mozart’s The Magic Flute, and Chuck Berry absolutely shredding it on “Johnny B. Goode.” Sagan reminisces in Pale Blue Dot: A Vision of the Human Future in Space that “any alien ship that finds it will have another standard by which to judge us.” Astronomer Jim Bell writes in The Interstellar Age: The Story of the NASA Men and Women Who Flew the Forty-Year Voyager Mission that “If the messages aboard the Voyagers ended up being the last surviving artifacts of our world, they would signify the brighter side of human nature…[we] wanted to send out a sign of our hopes, not our regrets.” Unless Voyager 1 or 2 should slam into some random star or fall into a hidden black hole, unless some bit of flotsam should smash it up or some wayward creature should use it for target practice, both probes will continue unimpeded for a very long time. Space being mostly empty, their lifespans will be such that they’re effectively immortal.

Fifty thousand years from now, after climate change renders us extinct, the interglacial period will end and a new ice age will descend on Earth. In two million years, the coral reefs of the world will have had time to recover from ocean acidification. Sixty million years from now, the Canadian Rockies will have eroded away. Geologists predict that all of the Earth’s continents will coalesce into a supercontinent 250 million years from now. Five hundred million years in the future they’ll have separated again. A little more than a billion years from now, stellar fluctuations will increase temperatures so that the oceans will be boiled away. In 1.6 billion years the last of our friends the prokaryotes will most likely be extinct. In 7.59 billion years, the sun will reach its red giant phase, and what remains of the Earth will most likely fall into our star. Through all of that, slowly moving along in blackness, will be the Golden Record. “Better was it to go unknown and leave behind you an arch, a potting shed, a wall where peaches ripen, than to burn like a meteor and leave no dust,” Woolf wrote. Voyager, however, leaves dust, no matter how scant.

Our solar system will be dead, but somewhere you’ll still be able to hear Ludwig van Beethoven’s Symphony No. 5 in C minor. Sagan’s record is as if Ashurbanipal’s library were buried in the desert of space. As these things have moved us, perhaps somehow, someway they will move minds that have yet to exist. In a lonely universe, the only immortality is in each other, whomever we may be. Included within the Golden Record is a series of sentences in various languages, including Akkadian. Only 19 seconds into the record, and for billions of years, listeners may hear a human speaking in the language of Utnapishtim, delivering the benediction that “May all be very well.”


The Greeks Aren’t Done with Us: Simon Critchley on Tragedy

We know that ghosts cannot speak until they have drunk blood; and the spirits which we evoke demand the blood of our hearts.—Ulrich von Wilamowitz-Moellendorff, Greek Historical Writing, and Apollo (1908)

Thirteen years ago, when I lived briefly in Glasgow, I made it a habit to regularly attend the theater. An unheralded cultural mecca in its own right, overshadowed by charming, medieval Edinburgh to the east, the post-industrial Scottish metropolis was never lacking in good drama. Also, they let you drink beer during performances. Chief among those plays was a production of Sophocles’s Antigone, the final part of his tragic Theban Cycle, and one of the most theorized and staged of dramas from that Athenian golden age four centuries before the Common Era, now presented in the repurposed 16th-century Tron Church. Director David Levin took the Attic Greek of Sophocles and translated it into the guttural brogue of Lowland Scots, and in a strategy now deployed almost universally for any production of a play older than a century, the chitons of the ancient world were replaced with business suits, and the decrees of Creon were presented on television screens, as the action was reimagined not in 441 BCE but in 2007.

Enough to remind me of that headline from The Onion which snarked: “Unconventional Director Sets Shakespeare Play in Time, Place that Shakespeare Intended.” The satirical newspaper implicitly mocks adaptations like Richard Loncraine’s Richard III, which imagined the titular character (devilishly performed by Ian McKellen) as a sort of Oswald Mosley-like fascist, and Derek Jarman’s masterful version of Christopher Marlowe’s Edward II, which makes a play about the Plantagenet line of succession into a parable about gay rights and the ACT UP movement. By contrast, The Onion quips that its imagined “unconventional” staging of The Merchant of Venice is one in which “Swords will replace guns, ducats will be used instead of the American dollar or Japanese yen, and costumes, such as…[the] customary pinstripe suit, general’s uniform, or nudity, will be replaced by garb of the kind worn” in the Renaissance. The dramaturgical perspective behind Levin’s Antigone was definitely what the article parodied; there was nary a contorted dramatic mask to be found, no Greek chorus chanting in dithyrambs, and, as I recall, lots of video projection. The Onion aside, British philosopher Simon Critchley would see no problem with Levin’s artistic decisions, writing in his new book Tragedy, the Greeks, and Us that “each generation has an obligation to reinvent the classics. The ancients need our blood to revise and live among us. By definition, such an act of donation constructs the ancients in our image.”

Antigone, coming from as foreign a culture as it does, still holds our attention for some reason. The story of the titular character—punished by her uncle Creon for daring to defy his command that her brother Polynices’s corpse be left to fester as carrion for the buzzards and worms in the field where he died, because he had raised arms against Thebes—would seem to have little to do with Tony Blair’s United Kingdom. When a Glaswegian audience hears Sophocles’s words, however, that “I have nothing but contempt for the kind of governor who is afraid, for whatever reason, to follow the course that he knows is best for the State; and as for the man who sets private friendship above the public welfare—I have no use for him either,” a bit more resonance may be heard. Critchley argues that at the core of Greek tragedy is a sublime ambivalence, an engagement with contradiction that classical philosophy can’t abide; as distant as Antigone’s origins may be, its exploration of the conflict between the individual and the state, terrorism and liberation, surveillance and freedom seemed very of the millennium’s first decade. Creon’s countenance of the unthinkable punishment of his niece, to be bricked up behind a wall, was delivered in front of a camera, as if he were George W. Bush announcing the bombing of Iraq from the Oval Office on primetime television. “Evil sometimes seems good / To a man whose mind / A god leads to destruction,” Sophocles wrote. This was a staging for the era of the Iraq War and FOX News, of the Patriot Act and NSA surveillance, and of the coming financial collapse. Less than a year later, I’d be back in my apartment stateside watching Barack Obama deliver his Grant Park acceptance speech. It was enough to make one think of Antigone’s line: “Our ship of fate, which recent storms have threatened to destroy, has come to harbor at last.” I’m a bad student of the Greeks; I should have known better than to embrace that narcotic hope that pretends tragedy is not the omnipresent condition of humanity.

What could Sophocles, Euripides, and Aeschylus possibly have to say in our current, troubled moment? Tragedy, the Greeks, and Us is Critchley’s attempt to grapple with those disquieting 32 extant plays that whisper to us from an often-fantasized collective past. What survives of Greek tragedy is four fewer plays than all of those written by Shakespeare; an entire genre of performance for which we have titles referenced by philosophers like Plato and Aristotle, with only those three playwrights’ words enduring, and where often the most we can hope for are a few fragments preserved on some surviving papyri. Critchley emphasizes how little we know about plays like Antigone, or Aeschylus’s Oresteia, or Euripides’s Medea; classicists have often hypothesized that they were born from Dionysian rituals, or that they grew from satyr songs, the “song of the goats,” giving tragedy the whiff of the demonic, of the demon Azazel to whom sacrifices of the scapegoat must be made in the Levantine desert.

Beyond even tragedy’s origin, which ancient Greek writers themselves disagreed about, we’re unsure exactly how productions were staged or who attended. What we do have are those surviving 32 plays themselves and the horrific narratives they recount—Oedipus blinded in grief over the patricide and incest that he unknowingly committed but prophetically ensured because of his hubris; Medea slaughtering her children as revenge for the unfaithfulness of her husband; Pentheus ripped apart by frenzied Maenads in ecstatic thrall to Dionysus because the Theban ruler couldn’t countenance the power of irrationality. “There are at least thirteen nouns in Attic Greek for words describing grief, lamentation, and mourning,” Critchley writes about the ancients; our “lack of vocabulary when it comes to the phenomenon of death speaks volumes about who we are.” Tragedy, the Greeks, and Us is Critchley’s attempt to give us a bit of their vocabulary of excessive lamentation so as to better approach our predicament.

Readers shouldn’t mistake Tragedy, the Greeks, and Us for a conservative defense of the canon; this is no paean to the superior understanding of the ancients, nor is it highfalutin self-help. Critchley’s book isn’t Better Living Through Euripides. Easy to misread the (admittedly not great) title as an advertisement for a book selling the snake oil of traditionalist cultural literacy, that exercise in habitus that mistakes familiarity with the “Great Books” for a type of wisdom. Rather, Critchley explores the Greek tragedies in all of their strange glory, as an exercise in aesthetic rupture, where the works of Sophocles, Aeschylus, and Euripides configure a different type of space that renders a potent critique against oppressive logic. His task is thus the “very opposite of any and all kinds of cultural conservatism.” Critchley sees the plays not as museum pieces, or as simple means of demonstrating that you went to a college with diplomas written in Latin, but rather as a “subversive traditionalism” that helps us to critique “ever more egregious forms of cultural stupefaction that arise from being blinded by the myopia of the present.” This is all much larger than either celebrating or denouncing the syllabi of St. John’s College; Critchley has no concern for boring questions about “Western Civilization” or “Defending the Canon”; rather, he rightly sees the tragedies as an occasion to deconstruct those idols of our current age—of the market, of society, of law, of religion, of state. He convincingly argues that any honest radical can’t afford to ignore the past, and something primal and chthonic calls to us from those 32 extant plays, for “We might think we are through with the past, but the past isn’t through with us.”

Critchley explains that the contemporary world, perhaps even more so than when I watched Antigone in Glasgow, is a “confusing, noisy place, defined by endless war, rage, grief, ever-growing inequality. We undergo a gnawing moral and political uncertainty in a world of ambiguity.” Our moment, the philosopher claims, is a “tragicomedy defined by war, corruption, vanity, and greed,” for if my Antigone was of its moment, then Tragedy, the Greeks, and Us could only have been written after 2016. That year, and the characters it ushered into our national consciousness, can seem a particular type of American tragedy, but Critchley’s view (even while haunted by a certain hubristic figure with a predilection for the misspelled tweet) is more expansive than that. In his capable analysis, Critchley argues that tragedy exists as a mode of representing this chaos; a type of thinking at home with inconsistency, ambiguity, contradiction, and complexity. It’s those qualities that have made the form suspicious to philosophers.

Plato considered literature in several of his dialogues, and the sophist Gorgias claimed (in the Encomium of Helen) that the “effect of speech upon the structure of the soul / Is as the structure of drugs over the nature of bodies” (he wasn’t wrong); Plato famously has his puppet Socrates argue in The Republic that the just city-state would ban poets and poetry from its affairs for precisely that reason. Plato’s disgruntled student Aristotle was more generous to tragedy, content rather to categorize and explain its effects in Poetics, explaining that performance is the “imitation of an action that is serious, and also, as having magnitude, complete in itself…with incidents arousing pity and fear, wherewith to accomplish its catharsis of such emotions.” Aristotle’s view has historically been interpreted as a defense of literature in opposition to Plato, whereby that which the latter found so dangerous—the passions and emotions roiled by drama—was now justified as a sort of emotional pressure gauge that helped audiences purge their otherwise potentially destructive emotions. By the 19th century a philosopher like Friedrich Nietzsche would anticipate Critchley (though the latter might chafe at that claim) when he exonerated tragedy as more than mere moral instruction, coming closer to Plato’s claim about literature’s dangers while ecstatically embracing that reality. According to Nietzsche, tragedy existed in the tension between “Apollonian” and “Dionysian” poles; the first implies rationality, order, beauty, logic, and truth; the second signifies the realm of chaos, irrationality, ecstasy, and intoxication. Nietzsche writes in The Birth of Tragedy that the form “sits in sublime rapture amidst this abundance of life, suffering and delight, listening to a far-off, melancholy song…whose names are Delusion, Will, Woe.” For the German philologist that’s a recommendation, to “join me in my faith in this Dionysiac life and the rebirth of tragedy.”

As a thinker, Critchley Agonistes is well equipped to join these predecessors in systematizing what he argues is the unsystematizable. Faculty at the New School for Social Research, and coeditor of The New York Times philosophy column “The Stone” (to which I have contributed), Critchley has proven himself an apt scholar who engages the wider conversation. Not a popularizer per se, for Critchley’s goal isn’t the composition of listicles enumerating wacky facts about Hegel, but a philosopher in the truest sense of being one who goes into the Agora and grapples with the circumstances of meaning as they manifest in the punk rock venue, at the soccer stadium, and in the movie theater. Unlike most of his countrymen who recline in the discipline, Critchley is a British scholar who embraces what’s called “continental philosophy,” rejecting the arid, logical formulations of analytical thought in favor of the Parisian profundities of thinkers like Jacques Derrida, Emmanuel Levinas, and Martin Heidegger. Critchley has written tomes with titles like The Ethics of Deconstruction: Derrida and Levinas and Ethics-Politics-Subjectivity: Essays on Derrida, Levinas, & Contemporary French Thought, but he’s also examined soccer in What We Think About When We Think About Football (he’s a Liverpool fan) and in Bowie he analyzed, well, Bowie. Add to that his provocative take on religion in Faith of the Faithless: Experiments in Political Theology and on death in The Book of Dead Philosophers (which consists of short entries enumerating the sometimes bizarre ways in which philosophers died, from jumping into a volcano to love potion poisoning), and Critchley has announced himself as one of the most psychedelically mind-expanding of people to earn their lucre by explaining Schopenhauer and Wittgenstein to undergraduates.

What makes Critchley such an engaging thinker about the subjects he examines is both his grounding in continental philosophy (which asks questions about being, love, death, and eternity, as opposed to its analytical cousin, content to enumerate all the definitions of the word “is”) and his unpretentious roots in working-class Hertfordshire, studying at the glass-and-concrete University of Essex as opposed to tony Oxbridge. Thus, when Critchley writes that “there is an ancient quarrel between philosophy and poetry,” it seems pretty clear that he’s a secret agent working for the latter against the former. He rejects syllogism for stanza and embraces poetics in all of its multitudinous and glorious contradictions. The central argument of Tragedy, the Greeks, and Us is that the form “invites its audience to look at such disjunctions between two or more claims to truth, justice, or whatever without immediately seeking a unifying ground or reconciling the phenomena into a higher unity.” What makes Antigone so devastating is that the title character’s familial obligation justifies the burial of her brother, but the interests of the state validate Creon’s prohibition of that same burial. The tragedy arises in the irreconcilable conflict of two right things, with Critchley explaining that Greek drama “presents a conflictually constituted world defined by ambiguity, duplicity, uncertainty, and unknowability, a world that cannot be rendered rationally fully intelligible through some metaphysical first principles or set of principles, axioms, tables of categories, or whatever.”

This is the central argument: that the “experience of tragedy poses a most serious objection to that invention we call philosophy.” More accurately, Critchley argues that tragedy’s comfort with discomfort, its consistent embrace of inconsistency, its ordered representation of disorder, position the genre as a type of radical critique of philosophy, a genre that expresses the anarchic rhetoric of the sophists rather than that of their killjoy critic Socrates and his dour student Plato. As a refresher, the sophists were the itinerant and sometimes fantastically successful rhetoricians who taught Greek politicians a type of disorganized philosophy that, according to Socrates, had no concern with the truth, but only with what was convincing. Socrates supposedly placed “Truth” at the core of his dialectical method, and, ever since, the discipline has taken up the mantle of “a psychic and political existence at one with itself, which can be linked to ideas of self-mastery, self-legislation, autonomy, and autarchy, and which inform the modern jargon of authenticity.” Tragedy is defined by none of those things; where philosophy strives for order and harmony, tragedy dwells in chaos and division; where syllogism strives to eliminate all contradiction as irrational, poetry understands that it’s in the complexity of inconsistency, confusion, and even hypocrisy that we all dwell. Sophistry and tragedy, to the recommendation of both, are intimately connected, both being methods commensurate with the dark realities of what it means to be alive. Critchley claims that “tragedy articulates a philosophical view that challenges the authority of philosophy by giving voice to what is contradictory about us, what is constricted about us, what is precarious about us, and what is limited about us.”

Philosophy is all arid formulations, dry syllogisms, contrived Gedankenexperiments; tragedy is the knowledge that nothing of the enormity of what it means to be alive can be circumscribed by mere seminar argument. “Tragedy slows things down by confronting us with what we do not know about ourselves,” Critchley writes. If metaphysics is contained by the formulations of the classroom, then the bloody stage provides a more accurate intimation of death and life. By being in opposition to philosophy, tragedy is against systems. It becomes both opposite and antidote to the narcotic fantasy that everything will be alright. Perhaps coming to terms with his own discipline, Critchley argues that “it is necessary to try and think theatrically and not just philosophically.” Tragedy, he suggests, provides an opportunity to transcend myths of progress and comforts of order, to rather ecstatically enter a different space, an often dark, brutal, and subterranean place, but one which demonstrates the artifice of our self-regard.

A word conspicuous in its absence from Tragedy, the Greeks, and Us is “sacred.” If there is any critical drawback to Critchley’s argument, it is his hesitancy to admit, or his outright denial, that what he claims in his book has anything to do with something quite so woolly as the noumenal. Critchley gives ample space to argue that, “Tragedy is not some Dionysian celebration of the power of ritual and the triumph of myth over reason,” yet a full grappling with his argument seems to imply the opposite. The argument that tragedy stages contradiction is convincing, but those sublime contradictions are very much under the Empire of Irrationality’s jurisdiction. Critchley is critical of those who look at ancient tragedy and “imagine that the spectators…were in some sort of prerational, ritualistic stupor, some intoxicated, drunken dumbfounded state,” but I suppose much of our interpretation depends on how we understand ritual, religion, stupor, and intoxication.

His claims are invested in an understanding of the Greeks as not being fundamentally that different from us, writing that “there is a lamentable tendency to exoticize Attic tragedy,” but maybe what’s actually called for is a defamiliarization of our own culture, an embrace of the irrational weirdness at the core of what it means to be alive in 2019, where everything that is solid melts into air (to paraphrase Marx). Aeschylus knew the score well: “Hades, ruler of the nether sphere, / Exactest auditor of human kind, / Graved on the tablet of his mind,” as he describes the prince of this world in Eumenides. Critchley, I’d venture, is of Dionysus’s party but doesn’t know it. All that is argued in Tragedy, the Greeks, and Us points towards an awareness, however sublimated, of the dark beating heart within the undead cadaver’s chest. “To resist Dionysus is to repress the elemental in one’s own nature,” writes the classicist E.R. Dodds in his seminal The Greeks and the Irrational, “the punishment is the sudden complete collapse of the inward dykes when the elemental breaks through…and civilization vanishes.”

Critchley is absolutely correct that tragedy is in opposition to philosophy; where the latter offers assurances that reason can see us through, the former knows that it’s never that simple. The abyss is patient and deep, and no amount of analysis, of interpretation, of calculation, of polling can totally account for the hateful tragic pulse of our fellow humans. Nietzsche writes, “what changes come upon the weary desert of our culture, so darkly described, when it is touched by…Dionysus! A storm seizes everything decrepit, rotten, broken, stunted; shrouds it in a whirling red cloud of dust and carries it into the air like a vulture.” If any play best exemplifies that experience, and this moment, it’s Euripides’s The Bacchae, to which Critchley devotes precious little attention. That play depicts the arrival of the ambiguous god Dionysus in Thebes, as his followers thrill to the divine and irrational ecstasies that he promises. It ends with a crowd of those followers, the Maenads, mistaking the ruler Pentheus for a sacrificial goat and pulling him apart, his bones from their sockets, his organs from their cavities. Until his murder, Pentheus simultaneously manifested a repressed thrill at the Dionysian fervor and a failure to take the threat of such uncontained emotion seriously. “Cleverness is not wisdom,” Euripides writes, “And not to think mortal thoughts is to see few days.” If any didactic import comes from The Bacchae, it’s to give the devil as an adversary his due, for irrationality has more power than the clever among us might think.

Circling around the claims of Critchley’s book is our current political situation, alluded to but never engaged outright. In one sense, that’s for the best; those demons’ names are uttered endlessly all day anyhow, and it’s desirable to have at least one place where you need not read about them. But in another manner, fully intuiting the Dionysian import of tragedy becomes all the more crucial when we think about what that dark god portends in our season of rising authoritarianism. “Tragedy is democracy turning itself into a spectacle,” and anyone with Twitter will concur with that observation of Critchley’s. Even more important is Critchley’s argument about those mystic chords of memory connecting us to a past that we continually reinvent; the brilliance of his claim about why the Greeks matter to us now, removing the stuffiness of anything as prosaic as canonicity, is that tragedy encapsulates the way in which bloody trauma can vibrate through the millennia and control us as surely as the ancients believed fate controlled humans. Critchley writes that “Tragedy is full of ghosts, ancient and modern, and the line separating the living from the dead is continually blurred. This means that in tragedy the dead don’t stay dead and the living are not fully alive.” We can’t ignore the Greeks, because the Greeks aren’t done with us. If there is anything that hampers us as we attempt to exorcise the Dionysian revelers in our midst, it’s that many don’t acknowledge the base, chthonic power of such irrationality, and they refuse to see how violence, hate, and blood define our history in the most horrific of ways. To believe that progress, justice, and rationality are guaranteed, that they don’t require a fight commensurate with their worthiness, is to let hubris fester in our souls and to court further tragedy among our citizens.

What Medea or The Persians do is allow us to safely access the Luciferian powers of irrationality. They present a more accurate portrayal of humanity, based as we are in bloodiness and barbarism, than the palliatives offered by Plato in The Republic with his philosopher kings. Within the space of the theater, Critchley claims, tragedy at its best “somehow allows us to become ecstatically stretched out into another time and space, another way of experiencing things and the world.” Far from the anemic moralizing of Aristotelian catharsis—a word whose ambiguity Critchley emphasizes, and one too often interpreted as referring to a regurgitative didacticism—tragedy actually makes a new world by demolishing and replacing our own, if only briefly. “If one allows oneself to be completely involved in what is happening onstage,” Critchley writes, “one enters a unique space that provides an unparalleled experience of sensory and cognitive intensity that is impossible to express purely in concepts.” I recall seeing a production of Shakespeare’s Othello at London’s National Theatre in 2013, directed by Nicholas Hytner and starring Adrian Lester as the cursed Moor and Rory Kinnear as a reptilian Iago. Dr. Johnson wrote that Othello’s murder of Desdemona was the single most horrifying scene in drama, and I concur; the play remains the equal of anything by Aeschylus or Euripides in its tragic import.

When I watched Lester play the role, lingering over the dying body of his faithful wife, whispering “What noise is this? Not dead—not yet quite dead?” I thought of many things. I thought about how Shakespeare’s play reflects the hideous things that men do to women, and the hideous things that the majority do to the marginalized. I thought about how jealousy noxiously fills every corner, no matter how small, like some sort of poison gas. And I thought about how unchecked malignancy can shatter our souls. But mostly what I thought wasn’t in any words; it was better expressed by Lester’s anguished cry as he confronted the evil he’d done. If tragedy allows an audience to occasionally leave our normal space and time, then I certainly felt joined with those thousand other spectators on that summer night at the South Bank’s Olivier Theatre. The audience’s silence after Othello’s keening subsided was as still as the space between atoms, as empty as the gap between people.

King, God, Smart-Ass Author: Reconsidering Metafiction

“Whoever is in charge here?” —Daffy Duck, Merrie Melodies (1953)

Like all of us, Daffy Duck was perennially put upon by his Creator. The sputtering, stuttering, rageful waterfowl’s life was a morass of indignity, embarrassment, anxiety, and existential horror. Despite all of the humiliation Daffy had to contend with, the aquatic bird was perfectly willing to shake his wings at the unfair universe. As expertly delivered by voice artist Mel Blanc, Daffy could honk “Who is responsible for this? I demand that you show yourself!” In his brilliant and classic 1953 Merrie Melodies episode “Duck Amuck,” animator Chuck Jones presents Daffy as a veritable Everyduck, a sinner in the hands of a smart-assed illustrator. “Duck Amuck” has remained a canonical episode in the Warner Brothers cartoon catalog, its postmodern, metafictional experimentation heralded for its daring and cheekiness. Any account of what critics very loosely term “postmodern literature”—with its playfulness, its self-referentiality, and its breaking of the fourth wall—that considers Italo Calvino, Jorge Luis Borges, Vladimir Nabokov, and Paul Auster but not Jones is only telling part of the metafictional story. Not for nothing, but two decades ago “Duck Amuck” was added to the National Film Registry by the Library of Congress as an enduring piece of American culture.

Throughout the episode, Jones depicts increasingly absurd metafictional scenarios involving Daffy’s sublime suffering. Jones first imagines Daffy as a swordsman in a Three Musketeers parody, only to have him wander into a shining, white abyss as the French Renaissance background fades away. “Look Mac,” Daffy asks, never one to let ontological terror impinge on his sense of personal justice, “what’s going on here?” Jones wrenches the poor bird from the musketeer scenery to the blinding whiteness of the nothing-place, then to a bucolic pastoral, and finally to a paradisiacal Hawaiian beach. Daffy’s admirable sense of his own integrity remains intact, even throughout his torture. Pushed through multiple parallel universes, wrenched, torn, and jostled through several different realities, Daffy shouts “All right, wise guy, where am I?”

But eventually not even his own sense of identity is allowed to continue unaffected, as the God-animator turns him into a country-western singer who can only produce jarring sound effects from his guitar, or as a transcendent paintbrush recolors Daffy blue. At one point the animator’s pencil intrudes into Daffy’s world, erasing him, negating him, making him nothing. Daffy’s very being, his continued existence, depends on the whims of a cruel and capricious God; his is the world of Shakespeare’s King Lear, where the Earl of Gloucester utters his plaintive cry, “As flies to wanton boys are we to th’ gods; / They kill us for their sport.” Or at least they erase us. Finally, like Job before the whirlwind, Daffy implores, “Who is responsible for this? I demand that you show yourself!” As the view pans upward, into that transcendent realm of paper and ink where the animator-God dwells, the deity is revealed to be none other than the trickster par excellence, Bugs Bunny. “Ain’t I a stinker?” the Lord saith.

Creation, it should be said, is not accomplished without a certain amount of violence. According to one perspective, we can think of Daffy’s tussling with Bugs as a variation on that venerable old narrative conflict of “Man against God.” If older literature was focused on the agon (as the Greeks put it) between a human and a deity, and modernist literature concerned itself with the conflict that resulted as people had to confront the reality of no God, then the wisdom goes that our postmodern moment is fascinated with the idea of a fictional character searching out his or her creator. According to narrative theory, that branch of literary study which concerns itself with the structure and organization of story and plot (not synonyms, incidentally), such metafictional affectations are technically called metalepsis. H. Porter Abbott in his invaluable The Cambridge Introduction to Narrative explains that such tales involve a “violation of narrative levels” when a “storyworld is invaded by an entity or entities from another narrative level.”

Metalepsis can be radical in its execution, as when an “extradiegetic narrator” (that means somebody from outside the story entirely) enters into the narrative, as in those narratives where an “author appears and starts quarreling with one of the characters,” Abbott writes. We’ll see that there are precedents for that sort of thing, but whether interpreted as gimmick or deep reflection on the idea of literature, the conceit that has a narrator enter into the narrative as if by theophany is most often associated with something called, not always helpfully, “postmodernism.” Whatever that much-maligned term might mean, in popular parlance it has an association with self-referentiality, recursiveness, and metafictional playfulness (even if readers might find cleverness such as that exhausting). The term might as well be thought of as referring to our historical preponderance of literature that knows that it is literature.

With just a bit of British disdain in his critique, The New Yorker literary critic James Wood writes in his pithy and helpful How Fiction Works that “postmodern novelists… like to remind us of the metafictionality of all things.” Think of the crop of experimental novelists and short story writers from the ’60s, such as John Barth in his Lost in the Funhouse, where one story is to be cut out and turned into an actual Moebius strip; Robert Coover in the classic and disturbing short story “The Babysitter,” in which a variety of potential realities and parallel histories exist simultaneously in the most mundane of suburban contexts; and John Fowles in The French Lieutenant’s Woman, in which the author also supplies multiple “forking paths” to the story and where the omniscient narrator occasionally appears as a character in the book. Added to this could be works in which the author becomes a first-person character, such as Auster’s New York Trilogy, or Philip Roth’s Operation Shylock (among other works where Roth appears as a character). Not always just as a character, but as the Creator, for if the French philosopher Roland Barthes killed off the idea of such a figure in his seminal 1967 essay “The Death of the Author,” then much of the period’s literature resurrected Him. Wood notes, perhaps in response to Barthes, that “A certain kind of postmodern novelist…is always lecturing us: ‘Remember, this character is just a character. I invented him.’” Metafiction is when fiction thinks about itself.

Confirming Wood’s observation, Fowles’s narrator writes in The French Lieutenant’s Woman, “This story I am telling is all imagination. These characters I create never existed outside my own mind…the novelist stands next to God. He may not know all, yet he tries to pretend that he does.” Metafictional literature like this is supposed to interrogate the idea of the author, the idea of the reader, the very idea of narrative. When the first line of Calvino’s If on a Winter’s Night a Traveler is “You are about to begin reading Italo Calvino’s new novel, If on a Winter’s Night a Traveler,” it has been signaled that the narrative you’re entering is supposed to be different from those weighty tomes of realism that supposedly dominated in previous centuries. If metalepsis is a favored gambit of our experimental novelists, then it’s certainly omnipresent in our pop culture as well, beyond just “Duck Amuck.”

A list of sitcoms that indulge the conceit includes 30 Rock, Community, Scrubs, and The Fresh Prince of Bel-Air. The final example, which after all was already an experimental narrative about a wholesome kid from West Philly named Will played by a wholesome rapper from West Philly named Will Smith, was a font of avant-garde fourth-wall breaking worthy of Luigi Pirandello or Bertolt Brecht. Prime instances would include the season five episodes “Will’s Misery,” which depicts Carlton running through the live studio audience, and “Same Game, Next Season,” in which Will asks “If we so rich, why we can’t afford no ceiling,” with the camera panning up to show the rafters and lights of the soundstage. Abbott writes that metafiction asks “to what extent do narrative conventions come between us and the world?”, which in its playfulness is exactly what The Fresh Prince of Bel-Air is doing, forcing its audience to consider how “they act as invisible constructors of what we think is true, shaping the world to match our assumptions.”

Sitcoms like these are doing what Barth, Fowles, and Coover are doing—they’re asking us to examine the strange artificiality of fiction, this illusion in which we’re asked by a hidden author to hallucinate and enter a reality that isn’t really there. Both audience and narrator are strange, abstracted constructs; their literal counterparts, reader and writer, aren’t much more comprehensible. When we read a third-person omniscient narrator, it would be natural to ask “Who exactly is supposed to be recounting this story?” Metafiction is that which does ask that question. It’s the same question that the writers of The Office confront us with when we wonder, “Who exactly is collecting all of that documentary footage over those nine seasons?”

Far from being simply a postmodern trick, metalepsis as a conceit and the metafiction that results have centuries’ worth of examples. Interactions between creator and created, and certainly author and audience, have a far more extensive history than a handful of tony novelists from the middle of the 20th century and the back catalog of Nick at Nite would suggest. For those whose definition of the novel doesn’t consider anything written before 1945, it might come as a shock that all of the tricks we associate with metafiction thread so deep into history that realist literature can seem the exception rather than the rule. This is obvious in drama; the aforementioned theater term “breaking the fourth wall” attests to the endurance of metalepsis in literature. As a phrase, it goes back to Molière in the 17th century, referring to when characters in a drama acknowledge their audience, when they “break” the invisible wall that separates the action of the stage from that of the observers in their seats. If Molière coined the term, the practice is certainly older than even him. In all of those asides in Shakespeare—such as that opening monologue of Richard III when the title villain informs all of us who are joining him on his descent into perdition that “Now is the winter of our discontent”—we’re, in some sense, to understand ourselves as being characters in the action of the play itself.

As unnatural as Shakespearean asides may seem, they don’t have the same sheer metaleptic import as metafictional drama from the avant-garde theater of the 20th century. Pirandello’s classic experimental play Six Characters in Search of an Author is illustrative here, a high-concept work in which unfinished and unnamed characters arrive at a Pirandello production asking their creator to more fully flesh them out. As a character named the Father explains, the “author who created us alive no longer wished…materially to put us into a work of art. And this was a real crime.” A real crime because to be a fictional character means that you cannot die, even though “The man, the writer, the instrument of the creation will die, but his creation does not die.” An immaculate creation outliving its creator, more blessed than the world that is forever cursed to be ruled over by its God. But first Pirandello’s unfortunates must compel their God to grant them existence; they need a “fecundating matrix, a fantasy which could rise and nourish them: make them live forever!” If this seems abstract, you should know that such metaleptic tricks were staged long before Pirandello, and Shakespeare for that matter. Henry Medwall’s 1497 Fulgens and Lucrece, the first secular play in the entire English canon, has two characters initially named “A” and “B” who argue about a play, only for it to be revealed that the work in question is actually Medwall’s, which the audience is currently watching. More than a century later, metafictional poses were still being explored by dramatists, a prime and delightful example being The Knight of the Burning Pestle, by Shakespeare’s younger contemporary and sometimes-collaborator Francis Beaumont. In that Jacobean play of 1607, deploying a conceit worthy of Thomas Pynchon’s The Crying of Lot 49, Beaumont imagines the production of a play-within-a-play entitled The London Merchant. In the first act, two characters climb the stage from the audience, one simply called “Citizen” and the other “Wife,” and begin to heckle and critique The London Merchant and its perceived unfairness to the rapidly ascending commercial class. The Knight of the Burning Pestle allows the audience to strike back, the Citizen cheekily telling the actor reading the prologue, “Boy, let my wife and I have a couple of stools / and then begin; and let the grocer do rare / things.”

Historical metalepsis can also be seen in what are called “frame tales,” that is, stories-within-stories that nestle narratives together like Russian dolls. Think of the overarching narrative of Geoffrey Chaucer’s 14th-century The Canterbury Tales with its pilgrims telling each other their stories as they make their way to the shrine of Thomas Becket, or of Scheherazade recounting her life-saving anthology to her murderous husband in One Thousand and One Nights, as compiled from folktales during the Islamic Golden Age from the eighth to 14th centuries. Abbott describes frame tales by explaining that “As you move to the outer edges of a narrative, you may find that it is embedded in another narrative.” Popular in medieval Europe, and finding their structure in Arabic and Indian sources that go back much further, frame tales are basically unified anthologies where an overarching narrative supplies its own meta-story. Think of Giovanni Boccaccio’s 14th-century Decameron, in which seven women and three men each tell 10 stories to pass the time while they’re holed up in a villa outside of Florence to await the passage of the Black Death through the city. The 100 resulting stories are ribald, earthy, and sexy, but present through all of their telling is an awareness of the tellers, this narrative about a group of young Florentines in claustrophobic, if elegant, quarantine. “The power of the pen,” one of Boccaccio’s characters says on their eighth day in exile, “is far greater than those people suppose who have not proved it by experience.” Great enough, it would seem, to create a massive sprawling world with so many stories in it. “In my father’s book,” the character would seem to be saying of his creator Boccaccio, “there are many mansions.”

As metaleptic as frame tales might be, a reader will note that Chaucer doesn’t hitch up for that long slog into Canterbury himself, nor does Boccaccio find himself eating melon and prosciutto while quaffing chianti with his aristocrats in The Decameron. But it would be a mistake to assume that older literature lacks examples of the “harder” forms of metalepsis, that writing before the 20th century is devoid of the Author-God appearing to her characters like God on Sinai. So-called “pre-modern” literature is replete with whimsical experimentation that would seem at home in Nabokov or Calvino: audiences directly addressed on stage, books speaking as themselves to their readers, authors appearing in narratives as creators, and fictions announcing their fictionality.

Miguel de Cervantes’s 17th-century Don Quixote plays with issues of representation and artificiality when the titular character and his trusty squire, Sancho Panza, visit a print shop that is producing copies of the very book you are reading, the errant knight and his sidekick then endeavoring to prove that it is an inferior plagiarism of the real thing. At an earlier point in the novel, Cervantes’s narrator reflects on the novel itself, enthusing that “we now enjoy in this age of ours, so poor in light entertainment, not only the charm of his veracious history, but also of the tales and episodes contained in it which are, in a measure, no less pleasing, ingenious, and truthful, than the history itself.” Thus Cervantes, in what is often considered the first novel, can lay claim to being the primogeniture of both realism and metafictionality.

Following Don Quixote’s example could be added other metafictional works that long precede “postmodernism,” including Laurence Sterne’s 18th-century The Life and Opinions of Tristram Shandy, Gentleman, where the physical book takes time to mourn the death of a central character (when an all-black page is printed); the Polish count Jan Potocki’s underread late-18th-century The Manuscript Found in Saragossa, with not just its fantastic cast of Iberian necromancers, kabbalists, and occultists, but its intricate frame structure and forking paths (not least of which include references to the book that you’re reading); James Hogg’s Satanic masterpiece The Private Memoirs and Confessions of a Justified Sinner, in which the author himself makes an appearance; and Jane Austen’s Northanger Abbey, in which the characters remark on how it feels as if they’re in a gothic novel (or perhaps a parody of one). Long before Barthes killed the Author, writers were conflating themselves as creator with the Creator. As Sterne notes, “The thing is this. That of all the several ways of beginning a book which are now in practice throughout the known world, I am confident my own way of doing it is the best—I’m sure it is the most religious—for I begin with writing the first sentence—and trusting to Almighty God for the second.”

Sterne’s sentiment provides evidence as to why metafiction is so alluring and enduring, despite its minimization by critics who dismiss it as mere trick while obscuring its long history. What makes metalepsis such an intellectually attractive conceit goes beyond the way it makes us question how literature and reality interact to what it implies about the Author whom Sterne gestures toward—“Almighty God.” The author of Tristram Shandy understood, as all adept priests of metafiction do (whether explicitly or implicitly), that at its core, metalepsis is theological. In questioning and confusing issues of characters and writers, narrators and readers, actors and audience, metafiction experiments with the very idea of creation. Some metafiction privileges the author as the supreme God of the fiction, as in The French Lieutenant’s Woman, and some casts its lot with the characters, as in The Knight of the Burning Pestle. Some metafiction is “softer” in its deployment, allowing the characters within a narrative to give us stories-within-stories; other metafiction is “harder” in how emphatic it is about the artifice and illusion of fiction, as in Jones’s sublime cartoon. What all of them share, however, is an understanding that fiction is a strange thing, an illusion whereby, whether we’re gods or penitents, we’re all privy to a world spun from something as ephemeral as letters and breath. Wood asks, “Is there a way in which all of us are fictional characters, parented by life and written by ourselves?” And the metaphysicians of metafiction answer in the affirmative.

As a final axiom, to join my claims that metafiction is when literature thinks about itself and that metalepsis has a far longer history than is often surmised, I’d argue that because all fiction—all literature—is artifice, all of it is in some sense metafiction. What defines fiction, what makes it different from other forms of language, is that quality of metalepsis. Even if not explicitly stated, the differing realms of reality implied by the very existence of fiction imply something of the meta. Abbott writes that “World-making is so much a part of most narratives that some narrative scholars have begun to include it as a defining feature of narrative,” and with that I heartily concur. Even our scripture is metafictional, for what else are we to call the Bible, in which Moses is both author and character, and where his death itself is depicted? In metafiction perspective is confused, writer turns to reader, narrator to character, creator to creation. There is no apter description of metafiction, of fiction, of life than that offered by Prospero at the conclusion of The Tempest: “Our revels now are ended. These our actors, / As I foretold you, were all spirits and / Are melted into air, into thin air.” For Prospero, the “great globe itself…all which it inherit, shall dissolve / And, like this insubstantial pageant faded…We are such stuff / As dreams are made on, and our little life / Is rounded with a sleep.” Nothingness before and nothingness after, but with everything in between, just like the universe circumscribed by the cover of a book. Metafiction has always defined literature; we’ve always been characters in a novel that somebody else is writing.

Judging God: Rereading Job for the First Time

“And can you then impute a sinful deed/To babes who on their mothers’ bosoms bleed?” —Voltaire, “Poem on the Lisbon Disaster” (1755)

“God is a concept/By which we measure our pain.” —John Lennon, “God” (1970)

The monks of the Convent of Our Lady of Mount Carmel would have only just finished Terce, their morning prayers of the canonical hours, when an earthquake struck Lisbon, Portugal, on All Saints’ Day in 1755. A fantastic library was housed at the convent, more than 5,000 volumes of St. Augustine and St. Thomas Aquinas, Bonaventure and Albertus Magnus, all destroyed as the red-tiled roof of the building collapsed. From those authors there had been endless words produced addressing theodicy, the question of evil, and specifically how and why an omniscient and omnibenevolent God would allow such malevolent things to flourish. Both Augustine and Aquinas affirmed that evil had no positive existence in its own right, that it was merely the absence of good in the way that darkness is the absence of light. The ancient Church Father Irenaeus posited that evil is the result of human free will, and so even natural disaster was due to the affairs of women and men. By the 18th century, a philosopher like Gottfried Leibniz (too Lutheran and too secular for the monks of Carmo) would optimistically claim that evil is an illusion, for everything that happens is in furtherance of a divine plan whereby ours is the “best of all possible worlds,” even in Lisbon on November 1, 1755. On that autumn day in the Portuguese capital, the supposedly pious of the Carmo Convent were faced with visceral manifestations of that question of theodicy in a city destroyed by tremor, water, and flame.

Theodicy was no longer an issue of scholastic abstraction, of syllogistic aridness, for in Lisbon perhaps 100,000 of the monks’ fellow subjects would die in one of the most violent earthquakes ever recorded. Death came in the initial collapse of the houses, palaces, basilicas, and cathedrals; in the tsunami that pushed in from the Atlantic and up the Tagus River; in the fires that ironically resulted from the preponderance of votive candles lit to honor the holy day; and in the pestilence that broke out among the debris and filth of the once proud capital. Lisbon was the seat of a mighty Catholic empire, which had spread the faith as far as Goa, India, and Rio de Janeiro, Brazil; its inhabitants were adherents to stern Iberian Catholicism, and the clergy brooked no heresy in the kingdom. Yet all of that faith and piety appeared to make no difference to the Lord; for the monks of the Carmo Convent who survived their home’s destruction, their plaintive cry might as well have been that of Christ’s final words upon the cross in the Book of Matthew: “My God, my God, why hast thou forsaken me?”

Christ may have been the Son of God, but with his dying words he was also a master of intertextual allusion, for his concluding remarks are a quotation of another man, the righteous gentile from the land of Uz named Job. If theodicy is the one insoluble problem of monotheism, the viscerally felt and empirically verifiable reality of pain and suffering in a universe supposedly divine, then Job remains the great brief for those of us who feel like God has some explaining to do. Along with other biblical wisdom literature like Ecclesiastes or Song of Songs, Job is one of those scriptural books that can sometimes appear as if some divine renegade snuck it into the canon. What Job takes as its central concern is the reality described by journalist Peter Trachtenberg in his devastating The Book of Calamities: Five Questions about Suffering, when he writes that “Everybody suffers: War, sickness, poverty, hunger, oppression, prison, exile, bigotry, loss, madness, rape, addiction, age, loneliness.” Job is what happens when we ask, with an honest but broken heart, why those things are our lot.

I’ve taught Job to undergraduates before, and I’ve sometimes been surprised by their lack of shock when it comes to what’s disturbing about the narrative. By way of synopsis, the book tells the story of a man who, the poem makes clear, is unassailably righteous, and how Satan, in his first biblical appearance (and counted as a son of God to boot), challenges the Creator, maintaining that Job’s piety is only a result of his material and familial well-being. The deity answers the devil’s charge by letting the latter physically and psychologically torture blameless Job, so as to demonstrate that the Lord’s human servant would never abjure Him. In Bar-Ilan University Bible professor Edward Greenstein’s masterful Job: A New Translation, the central character is soberly informed that “Your sons and your daughters were eating and drinking wine in/the house of their brother, the firstborn,/When here: a great wind came from across the desert;/it affected the four corners of the house;/it fell on the young people and they died.”—and the final eight words of the last line convey in their simplicity the defeated and personal nature of the tragedy. Despite the decimation of Job’s livestock, the death of his children, the rejection of his wife, and finally the contraction of an unsightly and painful skin ailment (perhaps boils), “Job did not commit-a-sin – /he did not speak insult” against God.

Job didn’t hesitate to speak against his own life, however. He bemoans his own birth, wishing that the very circumstances of his existence could be erased, declaring “Let the day disappear, the day I was born, /And the night that announced: A man’s been conceived!” Sublimely rendered in both their hypocrisy and idiocy are three “friends” (a later interpolation that is the basis of the canonical book adds a fourth) who come to console Job, but in the process they demonstrate the inadequacy of any traditional theodicy in confronting the question of why awful things frequently happen to good people. Eliphaz informs Job that everything, even the seemingly evil, takes part in God’s greater and fully good plan, that the Lord “performs great things too deep to probe, /wondrous things, beyond number.” The sufferer’s interlocutor argues, as Job picks at his itchy boils with a bit of pottery, perhaps remembering the faces of his dead children when they were still infants, that God places “the lowly up high, /So the stooped attain relief.” Eliphaz, of whom we know nothing other than that he speaks like a man who has never experienced loss, is the friend who counsels us that everything works out in the end even when we’re at our child’s funeral.

Bildad, on the other hand, takes a different tack, arguing that if Job’s sons “committed a sin against him, /He has dispatched them for their offense,” the victim-blaming logic that from time immemorial has preferred to ask what the raped was wearing rather than why the rapist rapes. Zophar angrily supports the other two, and the latecomer Elihu emphasizes God’s mystery and Job’s impudence in questioning it. To all of these defenses of the Lord, Job responds that even “were I in the right, his mouth would condemn me. /(Even) were I innocent, he would wrong me… It is all the same. /And so, I declare:/The innocent and the guilty he brings to (the same) end.” In an ancient world where it’s taken as a matter of simple retributive ethics that God will punish the wicked and reward the just, Job’s realism is both more world-weary and more humane than the clichés offered by his fair-weather friends. “Why do the wicked live on and live well,/Grow old and gain in power and wealth?” asks Job, and from 500 BCE unto 2019 that remains the central question of ethical theology. As concerns any legitimate, helpful, or moving answer from his supposed comforters, Greenstein informs us that “They have nothing to say.”

Finally, in the most dramatic and figuratively adept portion of the book, God Himself arrives from a whirlwind to answer the charges of Job’s “lawsuit” (as Greenstein renders the disputation). The Lord never answers Job’s demand to know why he has been forced to suffer, answering instead in non sequiturs about His own awesomeness, preferring to respond to the pain of a father who has buried his children by rhetorically asking “Where were you when I laid earth’s foundations?/Tell me – if you truly know wisdom!/Who set its dimensions? Do you know? /Who stretched the measuring line?… Have you ever reached the sources of Sea, /And walked on the bottom of Ocean?” God communicates in the rhetoric of sarcastic grandeur, mocking Job by saying “You must know… Your number of days is so many!” Of course, Job had never done any of those things, but an unempathetic God also can’t imagine what it would be like to have a Son die—at least not yet. That gets ahead of our story, though, and a reader can’t help but notice that for all of His poetry from the whirlwind, and all of His frustration at the intransigent questioning of Job, the Lord never actually answers why such misfortune has befallen this man. Rather, God continually emphasizes His greatness to Job’s insignificance, His power to Job’s feebleness, His eternity to Job’s transience.

The anonymous author’s brilliance is deployed in the book’s dramatic irony, for even if Job doesn’t know why he suffers, we know. Greenstein explains that readers “know from the prologue that Job’s afflictions derive from the deity’s pride, not from some moral calculus.” Eliphaz, Bildad, Zophar, and Elihu can pontificate about the unknowability of God’s reasons while Job is uncertain as to whether he’s done anything wrong that merits such treatment, but the two omniscient figures in the tale—God and the reader—know why the former did what He did. Because the Devil told him to. Finally, as if acknowledging His own culpability, God “added double to what Job had had,” which means double the livestock, and double the children. There is a cruelty in that: the grieving father is expected simply to replace his dead family with a newer, shinier, fresher, and most of all alive brood. And so, both Job and God remain silent for the rest of the book. In the ordering of the Hebrew scriptures, God remains silent for the rest of the Bible, so that when “Job died old and sated with days” we might wonder if it isn’t the deity Himself who has expired, perhaps from shame. The wisdom offered in the book of Job is the knowledge that justice is sacred, but not divine; justice must be ours, meaning that our responsibility to each other is all the more important.

This is admittedly an idiosyncratic take on the work, and one that I don’t wish to project onto Greenstein. But the scholar does argue that a careful philological engagement with the original Hebrew renders the story far more radical than it has normally been presented. Readers of Job have normally been on the side of his sanctimonious friends, and especially the Defendant who arrives out of His gassy whirlwind, but the author of Job is firmly on the side of the plaintiff. If everyone from medieval monks to the authors of Midrash, from Sunday school teachers to televangelists, has interpreted Job as being about the inscrutability of God’s plans and the requirement that we passively take our undeserved pain as part of providence, then Greenstein writes that “there is a very weak foundation in biblical parlance for the common rendering.” He argues that “much of what has passed as translation of Job is facile and fudged,” having been built upon the accumulated exegetical detritus of centuries rather than returning to the Aleph, Bet, and Gimmel of Job itself.

Readers of a more independent bent have perhaps detected sarcasm in Job’s response, or a dark irony in God’s restitution of new and better replacement children for the ones that He let Satan murder. For my fellow jaundiced and cynical heretics who have long maintained that Job still has some fight in him even after God emerges from the whirlwind to chastise him for daring to question the divine plan, Greenstein has good news. Greenstein writes that the “work is not mainly about what you thought it was; it is more subversive than you imagined; and it ends in a manner that glorifies the best in human values.” He compares a modern translation of Job 42:6, where Job speaks as a penitent before the Lord, exclaiming “Therefore I despise myself/and repent in dust and ashes.” Such a message seems straightforward—because he questions the divine plan, Job hates himself and is appropriately humbled. Yet Greenstein contrasts the modern English with a more accurate version based in an Aramaic text found in the Dead Sea Scrolls, where the man of Uz more ambiguously says “Therefore I am poured out and boiled up, and I will become dust.” This isn’t a declaration of surrender; this is an acknowledgement of loss. Greenstein explains that while both the rabbis and the Church wished to see Job repentant and in surrender, that’s not what the original presupposes. Rather, “Job is parodying God, not showing him respect. If God is all about power and not morality and justice, Job will not condone it through acceptance.” Thus, when we write that the book’s subject is judging God, we should read “God” as the object and not the subject of that phrase.

The moral universe of Job is complex; when compared to older ancient Near Eastern works, even exemplary ones like Gilgamesh, its ambiguities mark it as a work of poetic sophistication. Traditionally dated to the period about a half-millennium before the Common Era, Job is identified with the literature of the Babylonian Exile, when the Jews had been conquered and forcibly removed to the land of the Euphrates. Such historical context is crucial in understanding Job’s significance, for the story of a righteous man who suffered for no understandable reason mirrored the experience of the Jews themselves, while Job’s status as a gentile underscored a dawning understanding that God was not merely a deity for the Israelites, but that indeed his status was singular and universal. When the other gods are completely banished from heaven, however, the problem of evil rears its horned head, for when the Lord is One, who then must be responsible for our unearned pain?

Either the most subversive or the most truthful of scriptural books (maybe both), Job has had the power to move atheist and faithful alike, evidence for those who hate God and anxious warning for those who love Him. Former Jesuit Jack Miles enthuses in God: A Biography that “exegetes have often seen the Book of Job as the self-refutation of the entire Judeo-Christian tradition.” Nothing in the canon of Western literature, be it Sophocles’s Oedipus Rex or William Shakespeare’s Hamlet, quite achieves the dramatic pathos of Job. Consider its terrors, its ambiguities, its sense of injustice, and its impartation that “our days on earth are a shadow.” Nothing written has ever achieved such a sense of universal tragedy. After all, the radicalism of the narrative announces itself, for Job concerns the time that God proverbially sold his soul to Satan in the service of torturing not just an innocent man, but a righteous one. And when questioned on His justification for doing such a thing, the Lord was only able to respond by reiterating His own power in admittedly sublime but ultimately empty poetry. For God’s answer to Job is an explanation of how, but not an explanation of why—and when you’re left to scrape boils off yourself with a pottery shard after your 10 children have died in a collapsing house, that’s no explanation at all.

With Greenstein’s translation, we’re able to hear Job’s cry not in his native Babylonian, or the Hebrew of the anonymous poet who wrote his tale, or the Aramaic of Christ crucified on Golgotha, or even the stately if antiquated early modern English of the King James Version, but rather in a fresh, contemporary, immediate vernacular that makes the title character’s tribulations our own. Our Job is one who can declare “I am fed up,” and something about the immediacy of that declaration makes him our contemporary in ways that underscore the fact that billions are his proverbial sisters and brothers. Greenstein’s accomplishment makes clear a contention that is literary, philosophical, and religious: that the Book of Job is the most sublime masterpiece of monotheistic faith, because what its author says is so exquisitely rendered and so painfully true. Central to Greenstein’s mission is a sense of restoration, for Job is too often taught and preached as simply being about humanity’s required humility before the divine, and the need to prostrate ourselves before a magnificent God whose reasons are inscrutable.

By restoring to Job its status as a subversive classic, Greenstein does service to the worshiper and not the worshiped, to humanity and not our oppressors. Any work of translation exists in an uneasy stasis between the original and the adaptation, a one-sided negotiation across millennia in which the original author has no say. My knowledge of biblical Hebrew is middling at best, so I’m not suited to speak to the transgressive power of whoever the anonymous poet of Job was, but whatever words she or he chose, I can speak to Greenstein’s exemplary poetic sense. At its core, part of what makes this version of Job so powerful is how it exists in contrast to those English versions we’ve encountered before, from the sublime plain style of King James to the bubblegum of the Good News Bible. Unlike in those traditional translations, the words “God” and “Lord,” with their associations of goodness, appear nowhere in this version. Rather, Greenstein keeps the original “Elohim” (which I should point out is plural), or the unpronounceable, vowelless tetragrammaton, rendered as “YHWH.” Job is made new through the deft use of the literary technique known as defamiliarization, making that which is familiar once again strange (and thus all the more radical and powerful).

Resurrecting the lightning bolt that is Job’s poetry does due service to the original. Job’s subject is not just theodicy, but the failures of poetry itself. When Job defends himself against the castigation of Eliphaz, Bildad, Zophar, and Elihu, it’s not just a theological issue, but a literary-critical one as well. The suffering man condemns their comforts and callousness, but also their clichés. That so much of what his supposed comforters say is drawn from biblical books like Psalms and Proverbs testifies to Job’s transgressive knack for scriptural self-reflection. As a poet, the author is able to carefully display exactly what she or he wishes to depict, an interplay between the presented and the hidden that has an uncanny magic. When the poet writes that despite all of Job’s tribulations, “Job did not commit-a-sin with his lips,” it forces us to ask if he committed sin in his heart and his head, if his soul understandably screamed out at God’s injustice while his countenance remained pious. This conflict between inner and outer appearance is the logic of the novelist, a type of interiority that we associate more with Gustave Flaubert or Henry James than we do with the Bible.

When it comes to physical detail, the book is characteristically minimalist. Like most biblical writing, the author doesn’t present much of a description of either God or Satan, though that makes their presentation all the more eerie and disturbing. When God asks Satan where he has been, the arch-adversary responds “From roving the earth and going all about it.” This is Satan’s first introduction in all of Western literature (never before had that word been used for the singular character), and it brings with it those connotations of ranging, stalking, and creeping that have so often accrued to the reptilian creature who is always barely visible out of the corner of our eyes. Had we been given more serpentine exposition on the character, cloven hooves and forked tails, it would lack the unsettling nature of what’s actually presented. But when the author wants to be visceral, she or he certainly can be. Few images are as enduring in their immediacy as that of Job, who “took a potsherd with which to scratch himself as he sits in/the ash-heap.” His degradation, his tribulation, his shame still resonate 2,500 years later.

The trio (and later the quartet) of Job’s judgmental colleagues renders theodicy as a poetry of triteness, while Job’s poetics of misery is commensurate with the enormity of his fate. “So why did you take me out of the womb?” he demands of God, “Would I had died, with no eye seeing me!/Would I had been as though I had been;/Would I had been carried from womb to tomb”—here Greenstein borrows a favored rhyming congruence from the arsenal of English’s Renaissance metaphysical poets. Eliphaz and Elihu offer maudlin bromides, but Job can describe those final things with a proficiency that shows how superficial his friends are. Job fantasizes about death as a place “I go and do not return, /To a land of darkness and deep-shade;/A land whose brightness is like pitch-black, /Deep-shade and disorder;/That shines like pitch-black.” That contradictory image, of something shining in pitch-black, is an apt definition of God Himself, who, while He may be master of an ordered, fair, and just universe in most of the Bible, in Job is creator of a “fundamentally amoral world,” as Greenstein writes.

If God from the whirlwind provides better poetry than His defenders could, His theology is just as empty and callous as theirs. Greenstein writes that “God barely touches on anything connected to justice or the providential care of humanity,” and it’s precisely the case that for all of His literary power, the Lord dodges the main question. God offers a description of the universe’s creation and of the maintenance of all things that order reality; He conjures the enormities of size and time, and even provides strangely loving descriptions of His personal pets, the fearsome hippopotamus Behemoth and the terrifying whale/alligator Leviathan. Yet for all of that detail, exquisitely rendered, God never actually answers Job. Bildad or Elihu would say that God has no duty to explain Himself to a mere mortal like Job, that the man of Uz deserves no justification for his punishment in a life that none of us ever chose to begin. That, however, obscures the reality that even if Job doesn’t know the reasons behind what happened, we certainly know.

Greenstein’s greatest contribution is making clear that not only does God not answer Job’s pained query, but that the victim knows that fact. And he rejects it. Job answers God with “I have spoken once and I will not repeat…I will (speak) no more.” If God answers with hot air from the whirlwind, the soon-to-be-restored Job understands that silent witness is a more capable defense against injustice, that quiet is the answer to the whims of a capricious and cruel Creator. God justifies Himself by bragging about His powers, but Job tells him “I have known you are able to do all;/That you cannot be blocked from any scheme.” Job has never doubted that God has it within His power to do the things that have been done; rather, he wishes to understand why they’ve been done, and all of the beautiful poetry marshaled from that whirlwind can’t do that. The Creator spins lines and stanzas as completely as He once separated the night from the day, but Job, covered in his boils, couldn’t care less. “As a hearing by ear I have heard you,” the angry penitent says, “And now my eye has seen you. That is why I am fed up.”

Traditional translations render the last bit so as to imply that a repentant Job has taken up dust and ashes in a pose of contrition, but the language isn’t true to the original. More correct, Greenstein writes, to see Job as saying “I take pity on ‘dust and ashes,’” that Job’s sympathy lay with humanity, that Job stands not with God but with us. Job hated not God, but rather His injustice; such a position is a form of love for everybody else. Miles places great significance on the fact that the last time the deity speaks (at least according to the ordering of books in the Hebrew Scriptures) is from the whirlwind. According to Miles, when the reality of Job’s suffering is confronted by the empty beauty of the Lord’s poetry, a conclusion can be drawn: “Job has won. The Lord has lost.” That God never speaks to man again (at least in the ordering of the Hebrew Scriptures) is a type of embarrassed silence, and for Miles the entirety of the Bible is the story of how humanity was able to overcome God, to overcome our need of God. Job is, in part, about the failure of beautiful poetry when confronted with loss, with pain, with horror, with terror, with evil. After Job there can be no poetry. But if Job implies that there can be no poetry, it paradoxically still allows for prayer. Job in his defiance of God teaches us something potent: that dissent is the highest form of prayer, for what could possibly take the deity more seriously? In challenging, castigating, and criticizing the Creator, Job fulfills the highest form of reverence that there possibly can be.

Image credit: Unsplash/Aaron Burden.

The Universe in a Sentence: On Aphorisms

“A fragment ought to be entirely isolated from the surrounding world like a little work of art and complete in itself like a hedgehog.” —Friedrich Schlegel, Athenaeum Fragments (1798)

“I dream of immense cosmologies, sagas, and epics all reduced to the dimensions of an epigram.” —Italo Calvino, Six Memos for the Next Millennium (1988)

From its first capital letter to the final period, an aphorism is not a string of words but rather a manifesto, a treatise, a monograph, a jeremiad, a sermon, a disputation, a symposium. An aphorism is not a sentence, but rather a microcosm unto itself; an entrance through which a reader may walk into a room the dimensions of which even the author may not know. Our most economical and poetic of prose forms, the aphorism does not feign argumentative completism like the philosophical tome, nor does it compel certainty as does the commandment—the form is cagey, playful, and mysterious. To either find an aphorism in the wild, or to peruse examples in a collection that mounts them like butterflies nimbly held in place with push-pins on Styrofoam, is to have a literary-naturalist’s eye for the remarkable, for the marvelous, for the wondrous. And yet there has been, at least until recently, a strange critical lacuna as concerns aphoristic significance. Scholar Gary Morson writes in The Long and Short of It: From Aphorism to Novel that though they “constitute the shortest [of] literary genres, they rarely attract serious study. Universities give courses on the novel, epic, and lyric…But I know of no course on…proverbs, wise sayings, witticisms and maxims.”

An example of literary malpractice, for to consider an aphorism is to imbibe the purest distillation of a mind contemplating itself. In an aphorism every letter and word counts; every comma and semicolon is an invitation for the reader to discover the sacred contours of her own thought. Perhaps answering Morson’s observation, critic Andrew Hui writes in his new study A Theory of the Aphorism: From Confucius to Twitter that “Opposed to the babble of the foolish, the redundancy of bureaucrats, the silence of mystics, in the aphorism nothing is superfluous, every word bears weight.” An aphorism isn’t a sentence—it’s an earthquake captured in a bottle. It isn’t merely a proverb, a quotation, an epigraph, or an epitaph; it’s fire and lightning circumscribed by the rules of syntax and grammar, where rhetoric itself becomes the very stuff of thought. “An aphorism,” Friedrich Nietzsche aphoristically wrote, “is an audacity.”

If brevity and surprising disjunction are the elements of the aphorism, then in some ways we’re living in a veritable renaissance of the form, as all of the flotsam of our fragmented digital existence, from texting to Twitter, compels us toward the ambiguous and laconic (even while much of what’s produced is sheer detritus). Hui notes the strangely under-theorized nature of the aphorism, observing that at a “time when a presidency can be won and social revolutions ignited by 140-character posts…an analysis of the short saying seems to be [as] crucial as ever” (not that we’d put “covfefe” in the same category as Blaise Pascal).

Despite the subtitle to Hui’s book, A Theory of the Aphorism thankfully offers neither plodding history nor compendium of famed maxims. Rather, Hui presents a critical schema to understand what exactly the form does, with its author positing that the genre is defined by “a dialectical play between fragments and systems.” Such a perspective on the aphorism sees it not as the abandoned stepchild of literature, but rather the very thing itself, a discursive and transgressive form of critique that calls into question all received knowledge. Aphorism is thus both the substance of philosophy and the joker that calls the very idea of literature and metaphysics into question. The adage, the maxim, and the aphorism mock completism. Jesus’s sayings denounce theology; Parmenides and Heraclitus deconstruct philosophy. Aphoristic thought is balm and succor against the rigors of systemization, the tyranny of logic. Hui writes that “aphorisms are before, against, and after philosophy.” Before, against, and after scripture too, I’ll add. And maybe everything else as well. An aphorism may feign certainty, but the very brevity of the form ensures it’s an introduction, even if its rhetoric wears the costume of conclusion.

Such is why the genre is so associated with gnomic philosophy, for any accounting of aphoristic origins must waylay itself in the fragments of the pre-Socratic metaphysicians, the wisdom literature of the ancient Near East, and the Confucian and Taoist scholars of China. All of these disparate phenomena can loosely be placed during what the German philosopher Karl Jaspers called the “Axial Age,” a half-millennium before the Common Era when human thought began to move towards the universal and abstract. From that era (as very broadly constituted) we have Heraclitus’s “Nature loves to hide,” Lao-Tzu’s “The spoken Tao is not the real Tao,” and Jesus Christ’s “The kingdom of God is within you.” Perhaps the aphorism was an engine for that intellectual transformation, the sublimities of paradox breaking women and men out of the parochialism that marked earlier ages. Regardless of why the aphorism’s birth coincides with that pivotal moment in history, that era was the incubator for some of our earliest (and greatest) examples.

For the Greek philosophers who pre-date Socrates, and more importantly his systematic advocate Plato, metaphysics was best done in the form of cryptic utterance. It shouldn’t be discounted that the gnomic quality of thinkers like Parmenides, Heraclitus, Democritus, and Zeno might be due to the disappearance of their full corpus over the past 2,500 years. Perhaps erosion of stone and fraying of papyrus have generated such aphorisms. Entropy is, after all, our final and most important teacher. Nevertheless, aphorism is rife in the pre-Socratic philosophy that remains, from Heraclitus’s celebrated observation that “You can’t step into the same river twice” to Parmenides’s exactly opposite contention that “It is indifferent to me where I am to begin, for there shall I return again.” Thus is identified one of the most difficult qualities of the form—that it’s possible to say conflicting things and that by virtue of how you say them you’ll still sound wise. A dangerous form, the aphorism, for it can confuse rhetoric for knowledge. Yet perhaps that’s too limiting a perspective, and maybe it’s better to think of the chain of aphorisms as a great and confusing conversation; a game in which both truth and its opposite can still be true.

A similar phenomenon is found in wisdom literature, a mainstay of Jewish and Christian writing from the turn of the Common Era, as embodied both in canonical scripture such as Job, Ecclesiastes, Proverbs, and Song of Songs, and in more exotic apocryphal books known for their sayings like The Gospel of Thomas. Wisdom literature often manifested in the form of a listing of sayings or maxims, and since the 19th century, biblical scholars who advocate the so-called “two-source hypothesis” have argued that the earliest form of the synoptic New Testament gospels was an (as yet undiscovered) document known simply as “Q,” which consisted of nothing but aphorisms spoken by Christ. This conjectured collection of aphorisms theoretically became the basis for Matthew and Luke, who (borrowing from Mark) constructed a narrative around the bare-bones sentences. Similarly, the eccentric, hermetic, and surreal Gospel of Thomas, which the book of John was possibly written in riposte to, is an example of wisdom literature at its purest, an assemblage of aphorisms somehow both opaque and enlightening. “Split a piece of wood; I am there,” cryptically says Christ in the hidden gospel only rediscovered at Nag Hammadi, Egypt, in 1945, “Lift up the stone, and you will find me there.”

From its Axial-Age origins, the aphorism has a venerable history. Examples were transcribed in the self-compiled Commonplace Books of the Middle Ages and the Renaissance, in which students were encouraged to record their favorite fortifying maxims for later consultation. The logic behind such exercises was, as anthologizer John Gross writes in The Oxford Book of Aphorisms, that the form does “tease and prod the lazy assumptions lodged in the reader’s mind; they warn us how insidiously our vices can pass themselves off as virtues; they harp shamelessly on the imperfections and contradictions which we would rather ignore.” Though past generations were instructed on proverbs and maxims, today “Aphorisms are often derided as trivial,” as Aaron Haspel writes in the introduction to Everything: A Book of Aphorisms, despite the fact that “most people rule their lives with four or five of them.”

The last five centuries have seen no shortage of aphorists who are the originators of those four or five sayings that you might live your life by, gnomic authors who speak in the prophetic utterance of the form like Desiderius Erasmus, François de La Rochefoucauld, Pascal, Benjamin Franklin, Voltaire, William Blake, Nietzsche, Ambrose Bierce, Oscar Wilde, Gustave Flaubert, Franz Kafka, Ludwig Wittgenstein, Dorothy Parker, Theodor Adorno, and Walter Benjamin. Hui explains that “Aphorisms are transhistorical and transcultural, a resistant strain of thinking that has evolved and adapted to its environment for millennia. Across deep time, they are vessels that travel everywhere, laden with fraught yet buoyant cargo.” In many ways, modernity has proven an even more prodigious environment for the digressive and incomplete form, standing in opposition to the systemization of knowledge that has defined the last half-millennium while also embodying the aesthetic of fragmented bricolage that sometimes seems as if it were our birthright. Contemporary master of the form Yahia Lababidi writes in Where Epics Fail: Meditations to Live By that “An aphorist is not one who writes in aphorisms but one who thinks in aphorisms.” In our fractured, fragmented, disjointed time, where we take wisdom where we find it, we’re perhaps all aphorists, because we can’t think in any other way.

Anthologizer James Lough writes in his introduction to Short Flights: Thirty-Two Modern Writers Share Aphorisms of Insight, Inspiration, and Wit that “Our time is imperiled by tidal waves of trivia, hectored by a hundred emails a day, colored by this ad slogan or that list of things to do, each one sidetracking our steady, focused awareness.” What might seem to be one of the most worrying detriments of living in post-modernity—our digitally slackening attention spans and inattention to detail—is the exact same quality that allows contemporary aphorists the opportunity to dispense arguments, insight, enlightenment, and wisdom in a succinct package, what Lough describes as a “quickly-digested little word morsel, delightful and instructive, that condenses thought, insight, and wordplay.”

Our century has produced brilliant aphorists who have updated the form while making use of its enduring and universal qualities of brevity, metaphor, arresting detail, and the mystery that can be implied by a few short words that seem to gesture towards something slightly beyond our field of sight, and who embody Gross’s description of the genre as one which exemplifies a “concentrated perfection of phrasing which can sometimes approach poetry in its intensity.” Authors like Haspel, Lababidi, Don Paterson in The Fall at Home: New and Selected Aphorisms, and Sarah Manguso in 300 Arguments may take as their subject matter issues of current concern, from the Internet to climate change, but they do it in a form that wouldn’t seem out of place on a bit of frayed pre-Socratic papyrus.

Consider the power and poignancy of Manguso’s maxim “Inner beauty can fade, too.” In only five words, and one strategically placed comma that sounds almost like a reserved sigh, Manguso demonstrates one of the uncanniest powers of the form: that it can remind you of something that you’re already innately aware of, something that relies on the nature of the aphorism to illuminate that which we’d rather obscure. Reading 300 Arguments is like this. Manguso bottles epiphany, the strange acknowledgment of encountering that which you always knew but could never quite put into words yourself, like discovering that the gods share your face. Lababidi does something similar in Where Epics Fail, giving succinct voice to the genre’s self-definition, writing that “Aphorisms respect the wisdom of silence by disturbing it, but briefly.” Indeed, self-referentiality is partially at the core of modern aphorisms; many of Paterson’s attempts are maxims considering themselves, like an ouroboros biting its tail. In the poet’s not unfair estimation, an aphorism is “Hindsight with murderous purpose.”

Lest I be accused of uncomplicated enthusiasms in offering an encomium for the aphorism, let it be said that the form can be dangerous, that it can confuse brevity and wit with wisdom and knowledge, that rhetoric (as has been its nature for millennia) can pantomime understanding as much as express it. Philosopher Julian Baggini makes the point in Should You Judge This Book by Its Cover: 100 Takes on Familiar Sayings and Quotations, writing that aphorisms “can be too beguiling. They trick us into thinking we’ve grasped a deep thought by their wit and brevity. Poke them, however, and you find they ride roughshod over all sorts of complexities and subtleties.” The only literary form where rhetoric and content are as fully unified as the aphorism is arguably poetry proper, but ever fighting Plato’s battle against the versifiers, Baggini is correct that there’s no reason why an expression in anaphora or chiasmus is more correct simply because the prosody of its rhetoric pleases our ears.

Risk analyst and essayist Nassim Nicholas Taleb provides ample representative examples in The Bed of Procrustes: Philosophical and Practical Aphorisms of how a pleasing adage need not be an accurate statement. There’s an aesthetically harmonious tricolon in his contention that the “three most harmful addictions are heroin, carbohydrates, and a monthly salary,” and though of those I’ve only ever been addicted to bread, pizza, and pasta, I’ll say from experience that I could add a few more harmful vices to Taleb’s list. Or, when he opines that “Writers are remembered for their best work, politicians for their worst mistakes, and businessmen are almost never remembered,” I have to object. I’d counter Taleb’s pithy aphorism by pointing out that both John Updike and Philip Roth are remembered for their most average writing, that an absurd preponderance of things in our country are named for Ronald Reagan, whose entire political career was nothing but mistakes, and that, contra the claim that businessmen are never remembered, my entire Pittsburgh upbringing, where everything is named after a Carnegie, Frick, Mellon, or Heinz, demonstrates otherwise. The Bed of Procrustes is like this, lots of sentences that read as if they came from Delphi, but when you spend a second to contemplate Taleb’s claim that “If you find any reason why you and someone are friends, you are not friends” you’ll come to the conclusion that, far from an experience-tested maxim, it’s simply inaccurate.

Critic Susan Sontag made one of the best arguments against aphoristic thinking in her private writings, as published in As Consciousness Is Harnessed to Flesh: Journals and Notebooks, 1964-1980, claiming that “An aphorism is not an argument; it is too well-bred for that.” She makes an important point—that the declarative confidence of the aphorism can serve to announce any number of inanities and inaccuracies as if they were true by simple fiat. Sontag writes that “Aphorism is aristocratic thinking: this is all the aristocrat is willing to tell you; he thinks you should get it fast, without spelling out all the details.” Providing a crucial counter-position to the uncomplicated celebration of all things aphoristic, Sontag rightly observes that “To write aphorisms is to assume a mask—a mask of scorn, of superiority,” which would certainly seem apropos when encountering the claim that if you can enumerate why you enjoy spending time with a friend they’re not really your friend. “We know at the end,” Sontag writes, that “the aphorist’s amoral, light point-of-view self-destructs.”

An irony in Sontag’s critique of aphorisms, for embedded within her prose like any number of shining stones found in a muddy creek are sentences that themselves would make great prophetic adages. Aphorisms are like that though; even with Sontag’s and Baggini’s legitimate criticisms of the form’s excesses, we can’t help but approach, consider, think, and understand in that genre for which brevity is the essence of contemplation. In a pose of self-castigation, Paterson may have said that the “aphorism is already a shadow of itself,” but I can’t reject that microcosm, that inscribed reality within a few words, that small universe made cunningly. Even with all that I know about the risks of rhetoric, I cannot pass sentence on the sentence. Because an aphorism is open-ended, it is disruptive; as such, it doesn’t foreclose, but rather opens; the adage both establishes and abolishes its subject, simultaneously. Hui writes that the true subject of the best examples of the form is “The infinite,” for “either the aphorism’s meaning is inexhaustible or its subject of inquiry—be it God or nature or the self—is boundless.”

In The Aphorism and Other Short Forms, author Ben Grant writes that in the genre “our short human life and eternity come together, for the timelessness of the truth which the aphorism encapsulates can only be measured against our own ephemerality, of which the brevity of the aphorism serves as an apt expression.” I agree with Grant’s contention, but I would amend one thing—the word “truth.” Perhaps that’s what’s problematic about Baggini’s and Sontag’s criticism, for we commit a category mistake when we assume that aphorisms exist only to convey some timeless verity. Rather, I wonder if what Hui describes as the “most elemental of literary forms,” those “scattered lines of intuition…[moving by] arrhythmic leaps and bounds” underscored by “an atomic quality—compact yet explosive” aren’t defined by the truth, but rather by play.

Writing and reading aphorisms is the play of contemplation, the joy of improvisation; it’s the very nature of aphorism. We read such sayings not for the finality of truth, but for the possibilities of maybe. For a short investment of time and an economy of words we can explore the potential of ideas that even if inaccurate, would sink far longer and more self-serious works. That is the form’s contribution. An aphorism is all wheat and no chaff, all sweet and no gaffe.

Image credit: Unsplash/Prateek Katyal.

The Lion and the Eagle: On Being Fluent in “American”

“How tame will his language sound, who would describe Niagara in language fitted for the falls at London bridge, or attempt the majesty of the Mississippi in that which was made for the Thames?” —North American Review (1815)

“In the four quarters of the globe, who reads an American book?” —Edinburgh Review (1820)

Turning from an eastern dusk and towards a western dawn, Benjamin Franklin miraculously saw a rising sun on the horizon after having done some sober demographic calculations in 1751. Not quite yet the aged, paunchy, gouty, balding raconteur of the American Revolution, but rather only the slightly paunchy, slightly gouty, slightly balding raconteur of middle-age, Franklin examined the data concerning the births, immigration, and quality of life in the English colonies by contrast to Great Britain. In his Observations Concerning the Increase of Mankind, Franklin noted that a century hence “the greatest Number of Englishmen will be on this Side of the Water” (while also taking time to make a number of patently racist observations about a minority group in Pennsylvania—the Germans). For the scientist and statesman, such magnitude implied inevitable conclusions about empire.


Whereas London and Manchester were fetid, crowded, stinking, chaotic, and over-populated, Philadelphia and Boston were expansive, fresh, and had room to breathe. In Britain land was at a premium, but in America there was the seemingly limitless expanse stretching towards an unimaginable West (which was, of course, already populated by people). In the verdant fecundity of the New World, Franklin imagined (as many other colonists did) that a certain “Merry Old England” that had been supplanted in Europe could once again be resurrected, a land defined by leisure and plenty for the largest number of people. Such thoughts occurred, explains Henry Nash Smith in his classic study Virgin Land: The American West as Symbol and Myth, because “the American West was nevertheless there, a physical fact of great if unknown magnitude.”

As Britain expanded into that West, Franklin argued that the empire’s ambitions should shift from the nautical to the agricultural, from the oceanic to the continental, from sea to land. Smith describes Franklin as a “far-seeing theorist who understood what a portentous role North America” might play in a future British Empire. Not yet an American, but still an Englishman—and the gun smoke of Lexington and Concord still 24 years away—Franklin enthusiastically prophesied in his pamphlet: “What an Accession of Power to the British Empire by Sea as well as Land!”

A decade later, he’d write in a 1760 missive to one Lord Kames that “I have long been of opinion, that the foundations of the future grandeur and stability of the British empire lie in America.” And so, with some patriotism as a trueborn Englishman, Benjamin Franklin could perhaps imagine the Court of St. James transplanted to the environs of the Boston Common, the Houses of Parliament south of Manhattan’s old Dutch Wall Street, the residence of George II of Great Britain moved from Westminster to Philadelphia’s Southwest Square. After all, it was the Anglo-Irish philosopher and poet George Berkeley, writing his lyric “Verses on the Prospect of Planting Arts and Learning in America” in his manse near Newport, Rhode Island, who could gush that “Westward the course of empire takes its way.”

But even if aristocrats could perhaps share Franklin’s ambitions for a British Empire that stretched from the white cliffs of Dover to San Francisco Bay (after all, christened “New Albion” by Francis Drake), the idea of moving the capital to Boston or Philadelphia seemed anathema. Smith explained that it was “asking too much of an Englishman to look forward with pleasure to the time when London might become a provincial capital taking orders from an imperial metropolis somewhere in the interior of North America.” Besides, it was a moot point, since history would intervene. Franklin’s pamphlet may have been written a quarter century before, but the gun smoke of Lexington and Concord was wafting, and soon the idea of a British capital on American shores seemed the stuff of alternative history, a historical absurdity.

Something both evocative and informative resides, however, in this counterfactual: imagining a retinue of the Queen’s Guard processing down the neon skyscraper canyon of Broadway, or the dusk splintering off of the gold dome of the House of Lords at the crest of Beacon Hill overlooking Boston Common. Don’t mistake my enthusiasms for this line of speculation as evidence of a reactionary monarchism; I’m very happy that such a divorce happened, even if less than amicably. What does fascinate me is the way in which the cleaving of America from Britain affected how we understand each other, the ways in which we became “two nations divided by a common language,” as variously Mark Twain or George Bernard Shaw have been reported as having waggishly once uttered.

Even more than the relatively uninteresting issue that the British spell their words with too many u’s, or say weird things like “lorry” when they mean truck, or “biscuit” when they mean cookie, is the more theoretical but crucial issue of definitions of national literature. Why are American and British literature two different things if they’re mostly written in English, and how exactly do we delineate those differences? It can seem arbitrary that the supreme Anglophiles Henry James and T.S. Eliot are (technically) Americans, and their British counterparts W.H. Auden and Dylan Thomas can seem so fervently Yankee. Then there is what we do with the early folks: is the “tenth muse,” colonial poet Anne Bradstreet, British because she was born in Northampton, England, or was she American because she died in Ipswich, Mass.? Was Thomas More “American” because Utopia is in the Western Hemisphere, was Shakespeare a native because he dreamt of The Tempest? Such divisions make us question how language relates to literature, and how literature interacts with nationality, and what that says vis-à-vis the differences between Britain and America, the lion and the eagle.

A difficulty emerges in separating two national literatures that share a common tongue, and that’s because traditionally literary historians equated “nation with a tongue,” as critic William C. Spengemann writes in A New World of Words: Redefining Early American Literature, explaining how what gave French literature or German literature a semblance of “identity, coherence, and historical continuity” was that they were defined by language and not by nationality. By such logic, if Dante was an Italian writer, Jean-Jacques Rousseau a French one, and Franz Kafka a German author, then it was because those were the languages in which they wrote, despite them respectively being Florentine, Swiss, and Czech.

Americans, however, largely speak in the tongue of the country that once held dominion over the thin sliver of land that stretched from Maine to Georgia, and as far west as the Alleghenies. Thus, an unsettling conclusion has to be drawn: perhaps the nationality of Nathaniel Hawthorne, Herman Melville, and Emily Dickinson was American, but the literature they produced would be English, so that with horror we realize that Walt Whitman’s transcendent “barbaric yawp” would be growled in the Queen’s tongue. Before he brilliantly complicates that logic, Spengemann sheepishly concludes, “writings in English by Americans belong, by definition, to English literature.”

Sometimes misattributed to linguist Noam Chomsky, it was actually the scholar of Yiddish Max Weinreich who quipped that a “language is a dialect with an army and a navy.” If that’s the case, then a national literature is the bit of territory that that army and navy police. But what happens when that language is shared by two separate armies and navies? To what nation does the resultant literature then belong? No doubt there are other countries where English is the lingua franca; all of this anxiety over the difference between British and American literature doesn’t seem quite so involved as regards British literature and its relationship to poetry and prose from Canada, Australia, or New Zealand, for that matter.

Sometimes those national literatures, and especially English writing from former colonies like India and South Africa, are still folded into “British Literature.” So Alice Munro, Les Murray, Clive James, J.M. Coetzee, and Salman Rushdie could conceivably be included in anthologies or survey courses focused on “British Literature”—though they’re Canadian, Australian, South African, and Indian—while it would seem absurd to similarly include William Faulkner and Toni Morrison in the same course. Some anthologizers who are seemingly unaware that Ireland isn’t Great Britain will even include James Joyce and W.B. Yeats alongside Gerard Manley Hopkins and D.H. Lawrence as somehow transcendentally “English,” but the similar (and honestly less offensive) audacity of including Robert Lowell or Sylvia Plath as “English” writers is unthinkable.

Perhaps it’s simply a matter of shared pronunciation, the superficial similarity of accent that makes us lump Commonwealth countries (and Ireland) together as “English” literature, but something about that strict division between American and British literature seems more deliberate to me, especially since they ostensibly do share a language. An irony in that, though, for as a sad and newly independent American said in 1787, “a national language answers the purpose of distinction: but we have the misfortune of speaking the same language with a nation, who, of all people in Europe, have given, and continue to give us the fewest proofs of love.”

Before embracing English, or pretending that “American” was something different from English, both the Federal Congress and several state legislatures considered making variously French, German, Greek, and Hebrew our national tongue, before wisely rejecting the idea of one preferred language. So unprecedented and so violent was the breaking of America from Britain (it did, after all, signal the end of the first British Empire), and so conscious was the construction of a new national identity in the following century, that it seems inevitable that these new Federalists would also declare an independence for American literature.

One way this was done, shortly after the independence of the Republic, is exemplified by the fantastically named New York Philological Society, whose 1788 charter proclaimed its founding “for the purpose of ascertaining and improving the American Tongue.” If national literatures were defined by language, and America’s language was already the inheritance of the English, well then, these brave philologists would just have to transform English into American. Historian Jill Lepore writes in A Is for American: Letters and Other Characters in the Newly United States that the society was created in the “aftermath of the bloody War for Independence” in the hope that “peacetime America would embrace language and literature and adopt…a federal, national language.”

Lepore explains the arbitrariness of national borders; their contingency in the time period that the United States was born; and the manner in which, though language is tied to nationality, such an axiom is thrust through with ambiguity, for countries are diverse of speech and the majority of a major language’s speakers historically don’t reside in the birthplace of that tongue. For the New York Philological Society, it was imperative to come up with the solution of an American tongue, to “spell and speak the same as one another, but differently from people in England.” So variable were (and are) the dialects of the United States, from the non-rhotic dropped “r” of Boston and Charleston, which in the 18th century was just developing in imitation of English merchants, to the guttural, piratical swagger of Western accents in the Appalachians, that this lexicographer hoped to systematize such diversity into a unique American language. As one of those assembled scholars wrote, “Language, as well as government should be national…America should have her own distinct from all the world.” His name was Noah Webster and he intended his American Dictionary to birth an entirely new language.

It didn’t, of course, even if “u” fell out of “favour.” Such was what Lepore describes as an orthographic declaration of independence, for “there iz no alternativ” as Webster would write in his reformed spelling. But if Webster’s goal was the creation of a genuinely new language, he inevitably fell short, for the fiction that English and “American” are two separate tongues is as political as is the claim that Bosnian and Serbian, or Swedish and Norwegian, are different languages (or that English and Scots are the same, for that matter). British linguist David Crystal writes in The Stories of English that “The English language bird was not freed by the American manoeuvre,” his Anglophilic spelling a direct rebuke to Webster, concluding that the American speech “hopped out of one cage into another.”

Rather, the intellectual class turned towards literature itself to distinguish America from Britain, seeing in the establishment of writers who surpassed Geoffrey Chaucer or Shakespeare a national salvation. Melville, in his consideration of Hawthorne and American literature more generally, predicted in 1850 that “men not very much inferior to Shakespeare are this day born on the banks of the Ohio.” Three decades before that, the critic Sydney Smith had a different evaluation of the former colonies and their literary merit, snarking interrogatively in the Edinburgh Review that “In the four quarters of the globe, who reads an American book?” In the estimation of the Englishmen (and Scotsmen) who defined the parameters of the tongue, Joel Barlow, Philip Freneau, Hugh Henry Brackenridge, Washington Irving, Charles Brockden Brown, and James Fenimore Cooper couldn’t compete with the sublimity of British literature.

Yet, by 2018, British critics weren’t quite as smug as Smith had been, as more than 30 authors protested the decision four years earlier to allow Americans to be considered for the prestigious Man Booker Prize. In response to the winning of the prize by George Saunders and Paul Beatty, English author Tessa Hadley told The New York Times “it’s as though we’re perceived…as only a subset of U.S. fiction, lost in its margins and eventually, this dilution of the community of writers plays out in the writing.” Who in the four corners of the globe reads an American novel? Apparently, the Man Booker committee. But Hadley wasn’t, in a manner, wrong in her appraisal. By the 21st century, British literature had become just a branch of American literature. The question is how we got here.

Only a few decades after Smith wrote his dismissal, writers would start to prove Melville’s contention, including of course the author of Benito Cereno and Billy Budd himself. In the decade before the Civil War there was a Cambrian Explosion of American letters, as suddenly Romanticism had its last and most glorious flowering in the form of Ralph Waldo Emerson, Henry David Thoreau, Hawthorne, Melville, Emily Dickinson, and of course Walt Whitman. Such was the movement whose parameters were defined by the seminal scholar F.O. Matthiessen in his 1941 classic American Renaissance: Art and Expression in the Age of Emerson and Whitman, which included studies of all of those figures save (unfortunately) Dickinson.

The Pasadena-born-but-Harvard-trained Matthiessen remains one of the greatest professors to ever ask his students to close-read a passage of Emerson. American Renaissance was crucial in discovering American literature as something distinct from British literature and great in its own right. Strange to think now, but for the early part of the 20th century the teaching of American literature was left to either history classes or the nascent (State Department-funded) discipline of American Studies. English departments, true to the very name of the field, tended to see American poetry and novels as beneath consideration in the study of “serious” literature. The general attitude of American literary scholars about their own national literature can be summed up by Victorian critic Charles F. Richardson, who, in his 1886 American Literature, opined that “If we think of Shakespeare, Bunyan, Milton, the seventeenth-century choir of lyrists, Sir Thomas Browne, Jeremy Taylor, Addison, Swift, Dryden, Gray, Goldsmith, and the eighteenth-century novelists…what shall we say of the intrinsic worth of most of the books written on American soil?” And that was a book by an American about American literature!

On the eve of World War II, Matthiessen approached his subject in a markedly different way, and while scholars had granted the field a measure of respectability, American Renaissance was chief in radically altering the manner in which we thought about writing in English (or “American”). Matthiessen surveyed the diversity of the 1850s, from the repressed Puritanism of Hawthorne’s The Scarlet Letter to the Pagan-Calvinist cosmopolitan nihilism of Moby-Dick and the pantheistic manual that is Thoreau’s Walden, in light of the exuberant homoerotic mysticism of Whitman’s Leaves of Grass. Despite such diversity, Matthiessen concluded that the “one common denominator of my five writers…was their devotion to the possibilities of democracy.” Matthiessen was thanked for his discovery of American literature by being hounded by the House Un-American Activities Committee over his left-wing politics, until the scholar jumped from the 12th floor of Boston’s Hotel Manger in 1950. The critic’s commitment to democracy was all the more poignant in light of his death, for Matthiessen understood that it’s in negotiation, diversity, and collaboration that American literature truly distinguished itself. Here was an important truth, what Spengemann explains when he argues that American literature is defined not by the language in which it is written (as with British literature), but rather “American literature had to be defined politically.”

When considering the American Renaissance, an observer might be tempted to whisper “Westward the course of empire,” indeed. Franklin’s literal dream of an American capital to the British Empire never happened. Yet by the decade when the United States would wrench itself apart in an apocalyptic civil war, America would ironically become the figurative capital of the English language. From London the energy, vitality, and creativity of the English language would move westward to Massachusetts in the twilight of Romanticism, and after a short sojourn in Harvard and Copley Squares, Concord and Amherst, it would migrate towards New York City with Whitman, which, if not the capital of the English, became the capital of the English language. From the 1850s onward, British literature became a regional variation of American literature, a branch on the latter’s tree, a mere phylum in its kingdom.

English literary critics of the middle part of the 19th century didn’t note this transition; it’s arguable whether British reviewers of the mid-20th century did either, but if they didn’t, it’s an act of critical malpractice. For who would trade the epiphanies in ballad-meter that are the lyrics of Dickinson in favor of the arid flatulence of Alfred Lord Tennyson? Who would reject the maximalist experimentations of Melville for the reactionary nostalgia of chivalry in Walter Scott? Something ineffable crossed the Atlantic during the American Renaissance, and the old problem of how we could call American literature distinct from British since both are written in English was solved—the latter is just a subset of the former.

Thus, Hadley’s fear is a reality, and has been for a while. Decades before Smith would mock the pretensions of American genius, the English gothic novelist (and son of the first prime minister) Horace Walpole wrote, in a 1774 letter to Horace Mann, that the “next Augustan age will dawn on the other side of the Atlantic. There will, perhaps, be a Thucydides at Boston, a Xenophon at New York, and, in time, a Virgil at Mexico, and an Isaac Newton at Peru. At last, some curious traveler from Lima will visit England and give a description of the ruins of St. Paul’s.” Or maybe Walpole’s curious traveler was from California, as our language’s literature has ever moved west, with Spengemann observing that if by the American Renaissance the “English-speaking world had removed from London to the eastern seaboard of the United States, carrying the stylistic capital of the language along with it…[then] Today, that center of linguistic fashion appears to reside in the vicinity of Los Angeles.” Franklin would seem to have gotten his capital, staring out onto a burnt ochre dusk over the Pacific Palisades, as westward the course of empire has deposited history in that final location of Hollywood.

John Leland writes in Hip: The History that “Three generations after Whitman and Thoreau had called for a unique national language, that language communicated through jazz, the Lost Generation and the Harlem Renaissance…American cool was being reproduced, identically, in living rooms from Paducah to Paris.” Leland concludes it was “With this voice, [that] America began to produce the popular culture that would stamp the 20th century as profoundly as the great wars.” What’s been offered by the American vernacular, by American literature as broadly constituted and including not just our letters but popular music and film as well, is a rapacious, energetic, endlessly regenerative tongue whose power derives not from circumscription but from its porous and absorbent ability to draw from the variety of languages that have been spoken in this land. Not just English, but languages from Algonquin to Zuni.

No one less than the great grey poet Whitman himself wrote, “We Americans have yet to really learn our own antecedents, and sort them, to unify them. They will be found ampler than has been supposed, and in widely different sources.” Addressed to an assembly gathered to celebrate the 333rd anniversary of Santa Fe, Whitman surmised that “we tacitly abandon ourselves to the notion that our United States have been fashion’d from the British Islands only, and essentially form a second England only—which is a very great mistake.” He of the multitudinous crowd, the expansive one, the incarnation of America itself, understood better than most that the English tongue alone can never define American literature. Our literature is Huck and Jim heading out into the territories, and James Baldwin drinking coffee by the Seine; Dickinson scribbling on the backs of envelopes and Miguel Piñero at the Nuyorican Poets Café. Do I contradict myself? Very well then. Large?—of course. Contain multitudes?—absolutely.

Spengemann glories in the fact that “if American literature is literature written by Americans, then it can presumably appear in whatever language an American writer happens to use,” be it English or Choctaw, Ibo or Spanish. Rather than debating what the capital of the English language is, I advocate that there shall be no capitals. Rather than making borders between national literatures we must rip down those arbitrary and unnecessary walls. There is a national speech unique to everyone who puts pen to paper; millions of literatures for millions of speakers. How do you speak American? By speaking English. And Dutch. And French. And Yiddish. And Italian. And Hebrew. And Arabic. And Ibo. And Wolof. And Iroquois. And Navajo. And Mandarin. And Japanese. And of course, Spanish. Everything can be American literature because nothing is American literature. Its promise is not just a language, but a covenant.

Image credit: Freestock/Joanna Malinowska.

From Father Divine to Jim Jones: On the Phenomenon of American Messiahs

“And you can call me an egomaniac, megalomaniac, or whatever you wish, with a messianic complex. I don’t have any complex, honey. I happen to know I’m the messiah.” —Rev. Jim Jones, FBI Tape Q1059-1

“There’s a Starman, waiting in the sky/He’d like to come and meet us/But he thinks he’d blow our minds.” —David Bowie

Philadelphia, once the second-largest city in the British Empire, was now the capital of these newly united states when the French diplomat and marquis François Barbé-Marbois attended a curious event held at the Fourth Street Methodist Church in 1782. Freshly arrived in the capital was a person born to middling stock in Cumberland, R.I., and christened as Jemima Wilkinson, but who, since becoming possessed with the spirit of Jesus Christ in October of that revolutionary year of 1776, had been known as “The Comforter,” “Friend of Sinners,” “The All-Friend,” and most commonly as “Public Universal Friend,” subsequently wearing only masculine garments and answering only to male pronouns. Such is the description of the All-Friend as given by Adam Morris in American Messiahs: False Prophets of a Damned Nation, where the “dark beauty of the Comforter’s androgynous countenance” appeared as a “well-apportioned female body cloaked in black robes along with a white or purple cravat, topped by a wide-brimmed hat made of gray beaver fur.” Reflecting back on the theophany that was responsible for his adoption of the spirit of Christ, the Public Universal Friend wrote about how the “heavens were open’d And She saw too [sic] Archangels descending from the east, with golden crowns upon there heads, clothed in long white Robes, down to their feet.”

Even though the Quakers, with their reliance on revelation imparted from an “Inner Light,” were already a progressive (and suspect) denomination, heresy such as the All-Friend’s earned him rebuke and excommunication from the church of his childhood. But that was of no consequence, as the Public Universal Friend would begin a remarkably successful campaign of evangelization across war-torn New England. Preaching a radical gospel that emphasized gender equality and communalism, the All-Friend was the leader of an emerging religious movement and a citizen of the new Republic when he arrived at the very heart of Quakerism that was Philadelphia, where he hoped to convert multitudes to this new messianic faith.

Barbé-Marbois was excited to see the new transgendered messiah, writing that “Jemima Wilkinson has just arrived here…Some religious denomination awaited her with apprehension, others with extreme impatience. Her story is so odd, her dogmas so new, that she has not failed to attract general attention.” All-Friend was not impressed by rank or pedigree, however, and having been tipped off to the presence of the marquis among the congregation, he proceeded to castigate Barbé-Marbois, declaring, “Do these strangers believe that their presence in the house of the Lord flatters me? I disdain their honors, I scorn greatness and good fortune.” For all of the All-Friend’s anger at the presence of this sinner in his temple, Barbé-Marbois was still charmed by the messiah, writing of how the All-Friend “has chosen a rather beautiful body for its dwelling…beautiful features, a fine mouth, and animated eyes,” adding that his “travels have tanned her a little.” Though he also notes that they “would have burned her in Spain and Portugal.”

Such is the sort of spectacular story that readers will find in Morris’s deeply researched, well-argued, exhaustive, and fascinating new book. A San Francisco-based translator and freelance journalist whose work has appeared in The Believer, The Los Angeles Review of Books, and Salon, Morris provides in his first book a counter-history of a shadow America whose importance and influence are no less real for its oddity. His narrative stretches from the All-Friend and his heterodox preaching as the last embers of the First Great Awakening died out, through case studies that include the 19th-century Spiritualist messiahs Thomas Lake Harris and Cyrus Teed (the latter known to his followers as “Koresh,” after the Persian king in the Hebrew Scriptures); the massive real-estate empire of Harlem-based Father Divine and his racially egalitarian Peace Mission; and finally the dashed promise and horror of Jim Jones and the massacre of the Peoples Temple Agricultural Project at Jonestown, Guyana, in 1978, which took more than 900 lives and was the most brutal religiously based murder in American history until 9/11. Morris doesn’t just collect these figures at random; he argues that “a direct lineage connects Ann Lee, the Shaker messiah who arrived to American shores in 1774, to Jim Jones, the holy-rolling Marxist,” making his claim based not just on similar ideological affinities but oftentimes on evidence of direct historical contact as well, a chain of messianic influences starting at the very origin of the nation and functioning as a subversive counter-melody to the twin American idols of the market and evangelicalism.

Often dismissively rejected as “cult leaders,” these disparate figures, Morris argues, are better understood as the founders of new religions, unusual though they may seem to us. In this contention, Morris draws on a generation of religious studies scholars who’ve long chafed at the analytical inexactitude of the claim that some groups are simply “cults” composed of easily brain-washed followers and paranoid charlatan leaders with baroque metaphysical claims; a sentiment that was understandable after the horror of Jonestown, but which neglects the full complexities and diversity of religion as a site for human meaning. America has had no shortage of groups, both malicious and benign, that sought to spiritually transform the world, and denounce the excesses and immoralities of American culture, while sometimes surpassing them and embracing convoluted theological claims.

In new religious movements there is a necessary radical critique of inequity and injustice; as Jonestown survivor Laura Kohl recalls, the utopian impulse that motivated the Peoples Temple was originally “at the very cutting edge of the way we wanted society to evolve. So we wanted to be totally integrated and we wanted to have people of every socio-economic level, every racial background, everything, all included under one roof.” When their experiment began, people like Kohl couldn’t anticipate that it would end with mass suicide instigated by an increasingly paranoid amphetamine addict, because in the beginning “we were trying to be role models of a society, of a culture that was totally inclusive and not discriminatory based on education or race or socio-economic level. We all joined with that in mind.”

Such is the inevitable tragedy of messianism—it requires a messiah. Whatever the idealistic import of their motivations, members of such groups turned their hopes, expectations, and faith towards a leader who inevitably would begin to fashion himself or herself as a savior. Whether you slur them as “cults,” or soberly refer to them as “new religions,” such groups stalk the American story, calling to mind not just industrious Shakers making their immaculate wooden furniture in the 19th century, but also Bhagwan Shree Rajneesh looking positively beatified in one of his follower-funded Rolls-Royces, Marshall Applewhite dead in his Nikes awaiting the Hale-Bopp comet to shepherd Heaven’s Gate to astral realms, and David Koresh immolated with 75 of his fellow Branch Davidians after a 51-day standoff with the FBI and the ATF.

While it would be a mistake to obscure the latent (and sometimes not-so-latent) darkness that can lie at the core of some of these groups—the exploitation, megalomania, and extremism—Morris convincingly argues that simply rejecting such groups as cults does no service to understanding them. This sentiment, that some religions are legitimate while others are irretrievably “cults,” is often mouthed by so-called “deprogrammers,” frequently representatives of evangelical Christian denominations or sham psychologists whose charlatanry could compete with that of the founders of these new faiths themselves. Morris claims that part of what’s so threatening about the figures he investigates, and not other equally controlling phenomena from Opus Dei to the Fellowship, is that unlike those groups, leaders like Ann Lee, Father Divine, and even Jones provided a “viable alternative to the alienation of secularized industrial urbanism, and a politico-spiritual antidote to the anodyne mainline Protestantism that increasingly served as a handmaiden to big business.”

For Morris, such figures and groups are genuinely countercultural, and for all of their oftentimes tragic failings, they’ve provided the only genuine resistance to the forward march of capitalism in American history. These groups are seen as dangerous in a way that Opus Dei and the Fellowship aren’t because they so fully interrogate the basic structures of our society in a manner that those more accepted cults don’t, in part because those mainstream groups exist precisely to uphold the ruling class. As Teed wrote, “Christianity… is but the dead carcass of a once vital and active structure. It will rest supine till the birds of prey, the vultures and cormorants of what is falsely called liberalism have picked its bones of its fleshly covering leaving them to dry to bleach and decompose.”

“Far more than their heretical beliefs,” Morris writes, it is the “communistic and anti-family leanings of American messianic movements [that] pose a threat to the prevailing socio-economic order.” Lee may have thought that she was the feminine vessel for the indwelling presence of Christ, but she also rejected the speculation-based industrialization wreaking havoc on the American countryside; Teed believed that the Earth was hollow and that we lived in its interior, but he also penned cogent denunciations of Gilded Age capitalism, and modeled ways of organizing cooperative communities that didn’t rely on big business; Father Divine implied that he had descended bodily from heaven, but he also built a communally owned empire that included some of the first integrated businesses and hotels in the United States; and Jones claimed that he was variously the reincarnation of the Pharaoh Akhenaten, the Buddha, Christ, and Vladimir Lenin, but he was also a staunch fighter against segregation, and he and his wife were the first Indiana couple to adopt an African-American child. Such inconsistencies don’t invalidate these figures’ legitimate beliefs, but they make the ultimate conclusions of many of these movements all the more tragic.

Though not expressly stated this way by Morris, there is a sense in American Messiahs that the titular figures are both the most and least American of us. The least because they embrace a utopian communism anathema to the national character, and the most because what could be more boot-strapping in its rugged individualism than declaring yourself to be a god? Such are the paradoxes inherent in a commune run by a king (or queen). These new religions steadfastly reject the cutthroat avarice that defines American business in favor of a subversive collectivism. At the center of the social theory and economics of every group, from the Universal Friends to the Peoples Temple, is a model of communal living, with Morris writing that such organization "represents the ultimate repudiation of the values and institutions that Americans historically hold dear: it rejects not only the sacrosanct individualism on which American culture thrives, but also the nuclear family unit that evolved alongside industrial capitalism."

In this way, whether in a Shaker commune or one of Father Divine's Peace Missions, the intentional community offers "an escape from the degrading alienation of capitalist society by returning—once more—to cooperative and associative modes of living modeled by the apostolic church." But at the same time, there is the inegalitarian contradiction of allowing a woman or man to be preeminent among the rest, to fashion themselves as a messiah, as a type of absolute CEO of the soul. Such is the marriage of the least American of collectivisms with a rugged individualism pushed to the breaking point. And so, America is the land of divine contradictions, for as Harris wrote in his manifesto Brotherhood of the New Life, "I have sought to fold the genius of Christianity, to fathom its divine import, and to embody its principles in the spirit and body of our own America."

Despite the antinomian excesses of these messiahs, Morris is correct in his argument that the social and economic collectivism promoted and explored by such groups is among the most successful instances of socialism in the United States. A vibrant radical left in America has always been under attack by both capital and the government, so that enduring instances of socialism more often find themselves in a religious context than in a secular one, as failed communes from Brook Farm to Fruitlands can attest. Additionally, because it was the reform-minded French sociologist "Charles Fourier, rather than Karl Marx, who set the agenda of socialist reform in antebellum America," as Morris writes, there was never the opportunity for a working-class, secular communist party to develop in the United States as it did in Europe. Into that void came sects and denominations that drew freely from America's religious diversity, marrying Great Awakening evangelism to early modern occultism and 19th-century spiritualism so as to develop heterodox new faiths that often cannily and successfully questioned the status quo; what Morris describes as a "nonsectarian, ecumenical Christian church rebuilt around Christian principles of social justice [that] could be an instrument of radical social reform."

And though their struggles for racial, gender, and class egalitarianism are often forgotten, Morris does the important job of resuscitating an understanding of how groups from the Shakers to Teed's Koreshan Unity functioned as vanguards for progressive ideals that altered the country for the better. In ways that are both surprising and crucial to remember, such groups were often decades, if not centuries, ahead of their fellow Americans in embracing the axiom that all people are created equal, goading the nation to live up to its highest ideals while demonstrating that religious movements can effect social change and that there are alternative ways to structure our communities. After all, it was in the 18th century that the Shakers taught a form of nascent feminism and LGBTQ equality, with their founder Mother Ann Lee publicly praying to "God, Our Mother and Father."

In groups like the Shakers and the Society of Universal Friends there was an "incipient feminism, which continued to develop," one that defined American messianism over the centuries in the attraction of a "predominantly female following often through promises of equal rights among the faithful." Women were able to find leadership positions that existed nowhere else in American society at the time, and they would also be emancipated from mandated domestic drudgery and abuse, allowing them an almost unparalleled freedom. As Morris notes, "For the tens of thousands of Americans who attended [these revivals], it was likely the first time any had ever seen a woman permitted to stand at a public lectern." To read the radicalism of such a spectacle as mere performance, or to ignore their subversive language as simple rhetoric, is to deny that which is transgressive, and perhaps even redemptive, in figures otherwise marginalized in our official histories. An 18th-century church acknowledged the evils of misogyny and made its rectification one of its chief goals. Nineteenth-century new religions denounced institutional racism from the pulpit long before emancipation. There is admittedly something odd in Spiritualist churches holding seances where the spirits of Thomas Jefferson and George Washington would possess mediums and speak to the assembled, but the fact that those founders then "tended to express regret for their participation in the slave economy, and advanced that a more temperate and egalitarian society would ease the heavenward path of American Christians" isn't just an attempt at rationalization or absolution, but an acknowledgement of deep historical malignancies in our society that people still have trouble understanding today.

Regarding America's shameful history of racial inequality, many of these so-called messiahs were often far ahead of conventional politics as well. Father Divine's Peace Missions are an illustrative example. Possibly born to former slaves in Rockville, Md., and christened George Baker Jr., the man who would come to be called Father Divine studied the emerging loose movement known as "New Thought" and developed his own metaphysics grounded in egalitarianism and the same adoptionist heresy that other pseudo-messiahs had embraced, namely that it was possible to be infused with the spirit of Christ in a way that made you equivalent to Christ. Furthermore, as radical as it was to see a woman preaching, it was just as subversive for Americans to imagine Christ as a black man, and Father Divine's mission was suffused with the belief that "if God first chose to dwell among a people dispossessed of their land and subject to the diktats of a sinful empire, it was logical for him to return as an African American in the twentieth century."

Morris explains that it was Father Divine’s contention, as well as that of the other messiahs, that “Such a church might rescue the nation from its spiritual stagnation and the corresponding failure to live up to its democratic ideals.” At the height of the Great Depression, Father Divine built a racially integrated fellowship in a series of communally owned Peace Missions that supplied employment and dignity to thousands of followers, composed of “black and white, rich and poor, illiterate and educated.” His political platform, which he agitated for in circles that included New York City Republican (and sometimes Socialist) mayor Fiorello LaGuardia, as well as President Franklin Delano Roosevelt, included laws requiring insurance, caps on union dues, the abolition of the death penalty, banning of assault weapons, and the repeal of racially discriminatory laws. Even while Morris’s claim that “Father Divine was the most well-known and influential civil rights leader on the national stage between the deportation of Marcus Garvey and the emergence of Martin Luther King Jr.” might need a bit more exposition to be convincing, he does make his case that this enigmatic and iconoclastic figure deserves far more attention and credit than he’s been given.

At times American Messiahs can suffer a bit from its own enthusiasm. It's always an engaging read, but Morris's claims can occasionally be a bit too sweeping. After enough quotations from pseudo-messiahs like Teed, who writes that "The universe, is an alchemico-organic dynamo… Its general form is that of the perfect egg or shell, with its central vitellus at or near the center of the sphere," such exhaustive litanies become exhausting, because strange cosmologies are, well, strange. When you read about the course of study at Teed's College of Life, where for $50 participants could learn "analogical biology and analogical physiology, disciplines that taught the 'laws of universal form,'" you can't help but wish that just a smidgen more cynicism were expressed on Morris's part. It is admirable that Morris takes such figures on their own terms, but it's hard not to approach them with some skepticism. When it comes to hucksters claiming to be possessed by the indwelling presence of God, it's difficult not to declare that sometimes a crank is a crank is a crank, and a nutcase by any other name would still smell of bullshit.

Still, Morris's argument that we have to understand the messiahs as instances of often stunningly successful countercultural critique of American greed and inequity is crucial, and he's right that we forget those lessons at our own peril. If we only read Teed for his bizarre anti-Copernican cosmology positing that we live within the Earth itself, and forget that he also said that the "question for the people of to-day to consider is that of bread and butter. It must henceforth be a battle to the death between organized labor and organized capital," then we forget the crucial left role that such groups have often played in American history. Even more important is the comprehension that such groups supplied a potent vocabulary, a sacred rhetoric that often spoke to people and that conceived of the political problems we face in a manner more astute and moving than simple secular analysis did. It's not incidental that from the Shakers to the Amish, when it comes to successful countercultures, it's the religious rather than the secular communes that endure. When Father Divine's follower who went by the name John Lamb said that "The philosophies of men…were inadequate to cope with humanity's problems," he spoke a truth just as accurate today as during the Great Depression, when "clouds of tyranny" were wafting in from Europe. Say what you will about Morris's messiahs, they were often women and men who understood the score, and who posed a solution to those problems regardless of how heterodox we may find their methods.

Morris writes that "The American messianic impulse is based on a fundamentally irrefutable truth first observed by the Puritans: the injustices of capitalist culture cannot be reformed from within." You can bracket out your theories of a hollow Earth and your spirit-medium seances, but that observation is the one worth remembering, a profound lesson to be learned from the example of the women and men who most steadfastly lived in opposition to those idols of American culture. And yet we can't forget where the excesses of such opposition can sometimes lead—it's hard to wash the taste of the poisoned Kool-Aid of Jonestown from our mouths. Faced with such a tragic impasse, what are those of us who wish there were utopias supposed to do? The great failure of these egalitarian experiments is that they paradoxically ended up enshrining a figure above the rest, that in rejecting American individualism they strangely embraced it in its purest and most noxious forms. If we could all look into those stark, foreboding mirrored sunglasses favored by Jim Jones to conceal his drug-addled and bloodshot eyes, I wonder if what we'd see isn't our own reflections staring back at us, for good and ill. Perhaps there is an answer in that, for in that reflection we can see not just one man, but rather a whole multitude. What's needed, if possible, is a messianism without a messiah.

Marks of Significance: On Punctuation’s Occult Power

“Prosody, and orthography, are not parts of grammar, but diffused like the blood and spirits throughout the whole.” —Ben Jonson, English Grammar (1617)

Erasmus, author of The Praise of Folly and the most erudite, learned, and scholarly humanist of the Renaissance, was enraptured by the experience, by the memory, by the very idea of Venice. For 10 months from 1507 to 1508, Erasmus was housed in a room of the Aldine Press, not far from the piazzas of St. Mark's Square with their red tiles burnt copper by the Adriatic sun, the glory and the stench of the Grand Canal wafting into the cell where the scholar would expand his collection of 3,260 proverbs entitled Thousands of Adages, his first major work. For Venice was home to a "library which has no other limits than the world itself;" a watery metropolis and an empire of dreams that was "building up a sacred and immortal thing."

Erasmus composed to the astringent smell of black ink rendered from the resin of gall nuts, the rhythmic click-click-click of lead-tin movable type being set, and the whoosh of paper feeding through the press. From that workshop would come more than 100 titles in Greek and Latin, all published with the indomitable Aldus Manutius's watermark, an image filched from an ancient Roman coin depicting a strangely skinny Mediterranean dolphin inching down an anchor. Reflecting on that watermark (which has since been filched again, by the modern publisher Doubleday), Erasmus wrote that it symbolized "all kinds of books in both languages, recognized, owned and praised by all to whom liberal studies are holy." Adept in humanistic philology, Erasmus made an entire career by understanding the importance of a paragraph, a phrase, a word. Of a single mark. As did his publisher.

Erasmus's printer was visionary. The Aldine Press was the first in Europe to produce books made not by folding the sheets of paper in a bound book once (as in a folio), or twice (as in a quarto), but three times, yielding eight leaves and volumes that could be as small as four to six inches, the so-called octavo. Such volumes could be put in a pocket, constituting the forerunner of the paperback, which Manutius advertised as "portable small books." Now volumes no longer had to be cumbersome tomes chained to the desk of a library; they could be squirreled away in a satchel, the classics made democratic. When laying the typeface for a 1501 edition of Virgil in the new octavo form, Manutius charged a Bolognese punchcutter named Francesco Griffo with designing a font that appeared calligraphic. Borrowing the poet Petrarch's handwriting, Griffo invented a slanted typeface that printers quickly learned could denote emphasis, and which came to be named after the place of its invention: italic.

However, it was an invention seven years earlier that restructured not just how language appears, but indeed the very rhythm of sentences; for, in 1496, Manutius introduced a novel bit of punctuation, a jaunty little man with leg splayed to the left as if he were pausing to hold open a door for the reader before they entered the next room, the odd mark at the caesura of this byzantine sentence that is known to posterity as the semicolon. Punctuation exists not in the wild; it is not a function of how we hear the word, but rather of how we write the Word; these are what the theorist Walter Ong described in his classic Orality and Literacy as marks "even farther from the oral world than letters of the alphabet are: though part of a text they are unpronounceable, nonphonemic." None of our notations are implied by mere speech; they are creatures of the page: comma, and semicolon; (as well as parenthesis and what Ben Jonson appropriately referred to as an "admiration," but what we call an exclamation mark!)—the pregnant pause of a dash and the grim finality of a period. Has anything been left out? Oh, the ellipses…

No doubt the prescriptivist critic of my flights of grammatical fancy in the previous few sentences would note my unorthodox usage, but I do so only to emphasize how contingent and mercurial our system of marking written language was until around four or five centuries ago. Manutius may have been the greatest of European printers, but from Johannes Gutenberg to William Caxton, the era's publishers oversaw the transition from manuscript to print with an equivalent metamorphosis of language from oral to written, from the ear to the eye. Paleographer Malcolm Parkes writes in his invaluable Pause and Effect: An Introduction to the History of Punctuation in the West that such a system is a "phenomenon of written language, and its history is bound up with that of the written medium." Since the invention of script, there has been a war of attrition between the spoken and the written; battle lines drawn between rhetoricians and grammarians, between sound and meaning. Such is the distinction explained by linguist David Crystal in Making a Point: The Persnickety Story of English Punctuation: "writing and speech are seen as distinct mediums of expression, with different communicative aims and using different processes of composition."

Obviously, the process of making this distinction has been going on for quite a long time. It began the moment the first wedge-tipped stylus pressed into wet Mesopotamian clay, continuing through ancient Greek diacritics and Hebrew pointing systems, up through when medieval scribes first began to separate words from endless scriptio continua, whichbrookednogapsbetweenwordsuntiltheendofthemiddleages. Reading, you see, was normally accomplished out loud, and the written word was less a thing-unto-itself and more a representation of a particular event—that is, the event of speaking. When this is the guiding metaphysic of writing, punctuation serves a simple purpose—to indicate how something is to be read aloud. Like the luftpause of musical notation, the nascent end stops and commas of antiquity didn't exist to clarify syntactical meaning, but only to let you know when to take a breath. Providing an overview of punctuation's genealogy, Alberto Manguel writes in A History of Reading how by the seventh century a "combination of points and dashes indicated a full stop, a raised or high point was equivalent to our comma," an innovation of Irish monks who "began isolating not only parts of speech but also the grammatical constituents within a sentence, and introduced many of the punctuation marks we use today."

No doubt many of you, uncertain of the technical rules of comma usage (as many of us are), were told in elementary school that a comma designates when a breath should be taken, only to discover by college that that axiom was incorrect. There are certain difficulties, with, that, way of writing, a sentence—for what if the author is Christopher Walken or William Shatner? Enthusiast of the baroque that I am, I'm sympathetic to writers who use commas as Hungarians use paprika. I'll adhere to the claim of David Steel, who in his 1785 Elements of Punctuation wrote that "punctuation is not, in my opinion, attainable by rules…but it may be procured by a kind of internal conviction." Steven Roger Fischer correctly notes in his A History of Reading (distinct from the Manguel book of the same title) that "Today, punctuation is linked mainly to meaning, not to sound." But as late as 1589 the rhetorician George Puttenham could, in his Art of English Poesie, as Crystal explains, define a comma as the "shortest pause," a colon as "twice as much time," and an end stop as a "full pause." Our grade school teachers weren't wrong in a historical sense, for that was the purpose of commas, colons, and semicolons: to indicate pauses of certain lengths when scripture was being read aloud. All of the written word would have been quietly murmured under the breath of monks in the buzz of a monastic scriptorium.

For grammarians, punctuation has long been claimed as a captured soldier in the war of attrition between sound and meaning, these weird little marks enlisted in the cause of language as a primarily written thing. Fischer explains that "universal, standardized punctuation, such as may be used throughout a text in consistent fashion, only became fashionable…after the introduction of printing." Examine medieval manuscripts and you'll find that the orthography, that is the spelling and punctuation (insomuch as it exists), is completely variable from author to author—in keeping with an understanding that writing exists mainly as a means to perform speaking. By the Renaissance, print necessitated a degree of standardization, though far from uniform. This can be attested to by the conspiratorially minded who are flummoxed by Shakespeare's name being spelled several different ways while he was alive, or by the anarchic rules of 18th-century punctuation, the veritable golden age of the comma and semicolon. When punctuation becomes not just a matter of telling a reader when to breathe, but a syntactical unit that conveys particular meanings, meanings that can be altered by the choice or placement of these funny little dots, then a degree of rigor becomes crucial. As Fischer writes, punctuation came to convey "almost exclusively meaning, not sound," and so the system had to become fixed in some sense.

If I may offer an additional conjecture, it would seem to me that there was a fortuitous confluence of the technology of printing and the emergence of certain intellectual movements within the Renaissance that together contributed to the elevation of punctuation. Johanna Drucker writes in The Alphabetic Labyrinth: The Letters in History and Imagination how Renaissance thought was gestated by "strains of Hermetic, Neo-Pythagorean, Neo-Platonic and kabbalistic traditions blended in their own peculiar hybrids of thought." Figures like the 15th-century Florentine philosophers Marsilio Ficino and Giovanni Pico della Mirandola reintroduced Plato into an intellectual environment that had sustained itself on Aristotle for centuries. Aristotle rejected the otherworldliness of his teacher Plato, preferring rather to muck about in the material world of appearances, and when medieval Christendom embraced the former, it modeled his empirical perspective. Arguably the transcendent nature of words is less important in such a context; what does the placement of a semicolon matter if it's not conveying something of the eternal realm of the Forms? But the Florentine Platonists like Ficino were concerned with such things, for as he writes in Five Questions Concerning the Mind (printed in 1495, one year before the first semicolon), the "rational soul…possesses the excellence of infinity and eternity…[for we] characteristically incline toward the infinite." In Renaissance Platonism, the correct ordering of words, and their corralling with punctuation, is a reflection not of speech, but of something larger, greater, higher. Something infinite and eternal; something transcendent. And so, we have the emergence of a dogma of correct punctuation, of standardized spelling—of a certain "orthographic Platonism."

Drucker explains that Renaissance scholars long searched "for a set of visual signs which would serve to embody the system of human knowledge (conceived of as the apprehension of a divine order)." In its most exotic form this involved the construction of divine languages, the parsing of Kabbalistic symbols, and the embrace of alchemical reasoning. I'd argue in a more prosaic manner that such orthographic Platonism is the wellspring for all prescriptivist approaches to language, where the manipulation of the odd symbols that we call letters and punctuation can lend itself to the discovery of greater truths, an invention that allows us "to converse even with the absent," as Parkes writes. In the workshops of the Renaissance, at the Aldine Press, immortal things were made of letters and eternity existed between them, with punctuation acting as the guideposts to a type of paradise. And so it can remain for us.

Linguistic prescriptivists will bemoan the loss of certain standards, as if text speak signaled an irreducible entropy of communication, or as if the abandonment of arbitrary grammatical rules were a sign out of Revelation. Yet such reactionaries are not the true guardians of orthographic Platonism, for we must take wisdom where we find it, in the appearance, texture, and flavor of punctuation. Rules may be arbitrary, but the choice of particular punctuation—be it the pregnant pause of the dash or the rapturous shouting of the exclamation mark—matters. Literary agent Noah Lukeman writes in A Dash of Style: The Art and Mastery of Punctuation that punctuation is normally understood as simply "a convenience, a way of facilitating what you want to say." Such a limited view, implicit whether one treats punctuation as an issue of sound or one of meaning, ignores the occult power of the question mark, the theurgy in a comma. The orthographic Platonists at the Aldine Press understood that so much depended on a semicolon; that it signified more than a longer-than-average pause or the demarcation of an independent clause. Lukeman argues that punctuation is rarely "pondered as a medium for artistic expression, as a means of impacting content," yet in the most "profound way…it achieves symbiosis with the narration, style, viewpoint, and even the plot itself."

Keith Houston in Shady Characters: The Secret Life of Punctuation, Symbols, and Other Typographical Marks claims that "Every character we type or write is a link to the past;" every period takes us back to the dots that Irish monks used to signal the end of a line; every semicolon back to Manutius's Venetian workshop. Punctuation, as with the letters it serves, has a deep genealogy; its use places us in a chain of connotation and influence that goes back centuries. More than that, each mark of punctuation has a unique terroir, doing things that give a sentence a waft, a wisdom, a rhythm particular to itself. Considering the periods of Ernest Hemingway, the semicolons of Edgar Allan Poe and Herman Melville, and Emily Dickinson's sublime dash, Lukeman writes that "Sentences crash and fall like the waves of the sea, and work unconsciously on the reader. Punctuation is the music of language."

To get overly hung up on punctuation as either an issue of putting marks in the right place, or letting the reader know when they can gulp some air, is to miss the point—a comma is a poem unto itself, an exclamation point is an epic! Cecelia Watson writes in her new book, Semicolon: The Past, Present, and Future of a Misunderstood Mark, that Manutius’s invention “is a place where our anxieties and our aspirations about language, class, and education are concentrated.” And she is, of course, correct, as evidenced by all of those partisans of aesthetic minimalism from Kurt Vonnegut to Cormac McCarthy who’ve impugned the Aldine mark’s honor. But what a semicolon can do that other marks can’t! How it can connect two complete ideas into a whole; a semicolon is capable of unifications that a comma is too weak to do alone. As Adam O’Fallon Price writes in The Millions, “semicolons…increase the range of tone and inflection at a writer’s disposal.” Or take the exclamation mark, a symbol that I’ve used roughly four times in my published writing, but which I deploy no less than 15 times per average email. A maligned mark due to its emotive enthusiasms, Nick Ripatrazone observes in The Millions that “exclamation marks call attention toward themselves in poems: they stand straight up.” Punctuation, in its own way, is conscious; it’s an algorithm, as much thought itself as a schematic showing the process of thought.

To take two poetic examples, what would Walt Whitman be without his exclamation mark; what would Dickinson be without her dash? They didn't simply use punctuation for the pause of breath, nor to logically differentiate things with some grammatical-mathematical precision. Rather, they did those things but also so much more, for the union of exhalation and thought gestures to that higher realm the Renaissance originators of punctuation imagined. What would Whitman's "Pioneers! O pioneers!," first published in 1865's Drum-Taps and later folded into Leaves of Grass, be without the exclamation point? What argument could be made if that ecstatic mark were abandoned? What of the solemn mysteries in the portal that is Dickinson's dash when she writes that "'Hope' is the thing with feathers –"? Orthographic Platonism instills in us a wisdom beyond the arguments of rhetoricians and grammarians; it reminds us that more than simple notation, each mark of punctuation is a personality, a character, a divinity in itself.

My favorite illustration of that principle is in dramatist Margaret Edson's sublime play W;t, the only theatrical work that I can think of that has New Critical close reading as one of its plot points. In painful detail, W;t depicts the final months of Dr. Vivian Bearing, a professor of 17th-century poetry at an unnamed, elite, eastern university, after she has been diagnosed with Stage IV cancer. While undergoing chemotherapy, Bearing often reminisces on her life of scholarship, frequently returning to memories of her beloved dissertation adviser, E.M. Ashford. In one flashback, Bearing remembers being castigated by Ashford for sloppy work, having provided an interpretation of John Donne's Holy Sonnet VI based on an incorrectly punctuated edition of the cycle. Ashford asks her student, "Do you think the punctuation of the last line of this sonnet is merely an insignificant detail?" In the version used by Bearing, the final line of Donne's immortal "Death be not proud" is end-stopped with a semicolon, but as Ashford explains, the proper punctuation, based on the earliest manuscripts of Donne, is simply a comma: "And death shall be no more, comma, Death thou shalt die."

Ashford imparts to Bearing that so much can depend on a comma. The professor tells her student that "Nothing but a breath—a comma—separates life from everlasting…With the original punctuation restored, death is no longer something to act out on a stage, with exclamation points…Not insuperable barriers, not semicolons, just a comma." Ashford declares that "This way, the uncompromising way, one learns something from this poem, wouldn't you say?" Such is the mark of significance, an understanding that punctuation is as intimate as breath, as exalted as thought, and as powerful as the union between them—infinite, eternal, divine.

Image credit: Wikimedia Commons/Sam Town.

Another Person’s Words: Poetry Is Always the Speaker

Blessedly, we are speakers of languages not of our own invention, and as such none of us are cursed in only a private tongue. Words are our common property; it would be a brave iconoclast to write entirely in some Adamic dialect of her own invention, her dictionary locked away (though from the Voynich Manuscript to Luigi Serafini's Codex Seraphinianus, some have tried). Almost every word you or I speak was first uttered by somebody else—the key is entirely in the rearrangement. Sublime to remember that every possible poem, every potential play, every single novel that could ever be written is hidden within the Oxford English Dictionary. The answer to every single question too, for that matter. The French philosophers Antoine Arnauld and Claude Lancelot enthuse in their 1660 Port-Royal Grammar that language is a "marvelous invention of composing out of 25 or 30 sounds that infinite variety of expressions which, whilst having in themselves no likeness to what is in our mind, allow us to… [make known] all the various stirrings of our soul." Dictionaries are oracles. It's simply an issue of putting those words in the correct order. Language is often spoken of in terms of inheritance, where regardless of our own origins speakers of English are the descendants of Walt Whitman's languid ecstasies, Emily Dickinson's psalmic utterances, the stately plain style of the King James Bible, the witty innovations of William Shakespeare, and the earthy vulgarities of Geoffrey Chaucer; not to forget the creative infusions of foreign tongues, from Norman French and Latin, to Ibo, Algonquin, Yiddish, Spanish, and Punjabi, among others. Linguist John McWhorter puts it succinctly in Our Magnificent Bastard Tongue: The Untold History of English, writing that "We speak a miscegenated grammar."

There is a glory to this, our words indicating people and places different from ourselves, our diction an echo of a potter in a Bronze Age East Anglian village, a canting rogue in London during the golden era of Jacobean theater, or a Five Points Bowery Boy in antebellum New York. Nicholas Ostler, with an eye toward its diversity of influence, its spread, and its seeming omnipresence, writes in Empires of the Word: A Language History of the World that "English deserves a special position among world languages" as it is a "language with a remarkably varied history." Such history perhaps gives the tongue a universal quality, making it a common inheritance of humanity. This is true of any language, but when you speak it would be a fallacy to assume that your phrases, your idioms, your sentences, especially your words, are your own. They've been passed down to you. Metaphors of inheritance can be either financial or genetic; the former has it that our lexicon is some treasure collectively willed to us, the latter posits that in the DNA of language our nouns are adenine, our verbs cytosine, our adjectives guanine, and our adverbs thymine. Either sense of inheritance has its uses as a metaphor, and yet both are lacking to me in some fundamental way—too crassly materialist, too eugenic. The proper metaphor isn't inheritance, but consciousness. I hold that a language is as if a living thing, or to be more specific, as if a thinking thing. Maybe this isn't a metaphor at all; perhaps we're simply conduits for the thoughts of something bigger than ourselves, the contemplations of the language which we speak.

Philosopher George Steiner, forever underrated, writes in his immaculate After Babel: Aspects of Language and Translation that “Language is the highest and everywhere the foremost of those assents which we human beings can never articulate solely out of our own means.” We’re neurons in the mind of language, and our communications are individual synapses in that massive brain that’s spread across the Earth’s eight billion inhabitants, and back generations innumerable. When that mind becomes self-aware of itself, when language knows that it’s language, we call those particular thoughts poetry. Argentinean critic (and confidant of Jorge Luis Borges) Alberto Manguel writes in A Reader on Reading that poetry is “proof of our innate confidence in the meaningfulness of wordplay;” it is that which demonstrates the eerie significance of language itself. Poetry is when language consciously thinks.

More than rhyme and meter, or any other formal aspect, what defines poetry is its self-awareness. Poetry is the language which knows that it's language, and that there is something strange about being such. Certainly, part of the purpose of all the rhetorical accoutrement which we associate with verse, from rhythm to rhyme scheme, is to make the artifice of language explicit. Guy Deutscher writes in The Unfolding of Language: An Evolutionary Tour of Mankind's Greatest Invention that the "wheels of language run so smoothly" that we rarely bother to "stop and think about all the resourcefulness that must have gone into making it tick." Language is pragmatic; most communication doesn't need to self-reflect on, well, how weird the very idea of language is. How strange it is that we can construct entire realities from variations in the breath that comes out of our mouths, or the manipulation of ink stains on dead trees (or of liquid crystals on a screen). "Language conceals its art," Deutscher writes, and he's correct. When language decides to stop concealing, that's when we call it poetry.

Verse accomplishes that unveiling in several different ways, chief among them the rhetorical and prosodic tricks, from alliteration to terza rima, which we associate with poetry. One of the most elemental and beautiful aspects of language to which poetry draws attention is the pair of axioms implied earlier in this essay: that the phrases and words we speak are never our own, and that truth is found not in invention but in rearrangement. In Problems of Dostoevsky's Poetics, the Russian literary theorist Mikhail Bakhtin wrote that we receive "the word from another's voice and filled with that other voice." Our language is not our own, nor is our literature. We communicate in a tongue not of our own creation; we don't have conversations, we are the conversation. Bakhtin reasons that our "own thought finds the world already inhabited." Just as the organization of words into enjambed lines and those lines into stanzas demonstrates the beautiful unnaturalness of language, so too do allusion, bricolage, and what theorists call intertextuality make clear to us that we're not individual speakers of words, but that words are speakers of us. Steiner writes in Grammars of Creation that "the poet says: 'I never invent.'" This is true: the poet never invents; none of us do. We only rearrange—and that is beautiful.

This is true of all language, but few poetic forms are as honest about it as a forgotten Latin genre from late antiquity known as the cento. Rather than inheritance and consciousness, the originators of the cento preferred the metaphor of textiles. For them, all of poetry is like a massive patchwork garment, squares of fabric borrowed from disparate places and sewn together to create a new whole. Such a metaphor is an apt explanation of what exactly a cento is: a novel poem assembled entirely from rearranged lines written by other poets. Centos were written perhaps as early as the first century, but the fourth-century Roman poet Decimus Magnus Ausonius was the first to theorize about their significance and to give rules for their composition. In the prologue to Cento Nuptialis, where he composed a poem about marital consummation from fragments of Virgil derived from the Aeneid, Georgics, and Eclogues, Ausonius explained that he had created something new which is "like a puzzle," fashioned "but out of a variety of passages and different meanings."

The editors of The Penguin Dictionary of Literary Terms and Literary Theory explain that while forgotten today, the cento was "common in later antiquity." Anthologizer and poet David Lehman writes in The American Scholar that "Historically, the intent was often homage, but it could and can be lampoon," with critic Edward Hirsch writing in A Poet's Glossary that they "may have begun as school exercises." Though it's true that they were written for educational reasons, to honor or mock other poets, or as a showy performance of lyrical erudition (the author exhibiting their intimacy with Homer and Virgil), none of these explanations does service to the cento's significance. To return to my admittedly inchoate axioms of earlier: one function of poetry is to plunge "us into a network of textual relations," as the theorist Graham Allen writes in Intertextuality. Language is not the province of any of us, but rather a common treasury; its other purpose being what Steiner describes as the "re-compositions of reality, of articulate dreams, which are known to us as myths, as poetry, as metaphysical conjecture." That's to say that the cento remixes poetry and recombines reality, so as to illuminate some fundamental truth hitherto hidden. Steiner claims that a "language contains within itself the boundless potential of discovery," and the cento is a reminder that fertile are the recombinations of the poetry that has existed before, that literature is a rich, many-varied compost from which beautiful new specimens can grow toward the sun.

Among authors of centos, this is particularly true of the fourth-century Roman poet Faltonia Betitia Proba. Hirsch explains that one of the purposes of the cento, beyond the pedagogical or the parodic, was to "create Christian narratives out of pagan text," as was the case with Proba's Cento virgilianus, the first major Christian epic by a woman poet. Allen explains that "Works of literature, after all, are built from systems, codes and traditions established by previous works of literature;" what Proba's cento did was a more literal expression of that fundamental fact. The classical past posed a difficulty for proud Roman Christians, for how were the faithful to grapple with the paganism of Plato, the Sibyls, and Virgil? One solution was typological: the assumption that if Christianity was true, and yet pagan poets like Virgil still spoke the truth, then that truth must be encoded within the verse itself, part of the process of Interpretatio Christiana whereby pagan culture was reinterpreted along Christian lines.

Daughter of Rome that she was, Proba would not abandon Virgil, but Christian convert that she also was, it became her task to internalize that which she loved about her forerunner and to repurpose him, to place old wine into new skins. Steiner writes that an aspect of authorship is that the "poet's consciousness strives to achieve perfect union with that of the predecessor," and though those lyrics are "historically autonomous," as reimagined by the younger poet they are "reborn from within." This is perhaps true of how all influence works, but the cento literalizes that process in the clearest manner. And so Proba's solution was to rearrange, remix, and recombine the poetry of Virgil so that the Christianity within could emerge, like a sculptor chipping away all of the excess marble in a slab to reveal the statue hidden within.

Inverting the traditional pagan invocation of the muse, Proba begins her epic (the proem being the only original portion) with both conversion narrative and poetic exhortation, writing that she is "baptized, like the blest, in the Castalian font – / I, who in my thirst have drunk libations of the Light – / now begin my song: be at my side, Lord, set my thoughts/straight, as I tell how Virgil sang the offices of Christ." Thus she imagines prophecy in Rome's Augustan poet, who died two decades before the Nazarene was born. Drawing from a tradition which claimed Virgil's Eclogues predicted Christ's birth, Proba rearranged 694 lines of the poet to retell stories from Genesis, Exodus, and the Gospels, the lack of Hebrew names in the Roman original forcing her to use general terms which appear in Virgil, like "son" and "mother," when writing of Jesus and Mary. Proba's paradoxically unoriginal originality (or is it original unoriginality?) made her popular in the fourth and fifth centuries, the Cento virgilianus taught to catechists alongside Augustine, often surpassing Confessions and City of God in popularity. Yet criticism of Proba's aesthetic quality from figures like Jerome and Pope Gelasius I ensured a millennium-long eclipse of her poem, which was forgotten until its rediscovery in the Renaissance.

Rearranging the pastoral Eclogues, Proba envisions Genesis in another poet's Latin words, writing that there is a "tree in full view with fruitful branches;/divine law forbids you to level with fire or iron,/by holy religious scruple it is never allowed to be disturbed./And whoever steals the holy fruit from this tree,/will pay the penalty of death deservedly;/no argument has changed my mind." There is something uncanny about the way that such a familiar myth is reimagined in the arrangement of a different myth; the way in which Proba is a redactor of Virgil's words, shaping them into (or pulling out of them) this other, different, unintended narrative. Scholars have derided her poem as juvenilia since Jerome (jealously) castigated her talent by calling her "old chatterbox," but to be able to organize, shift, and shape another poet's corpus into orthodox scripture is an unassailable accomplishment. Writers of the Renaissance certainly thought so, for a millennium after Proba's disparagement, a new generation of humanists resuscitated her.

Cento virgilianus was possibly the first work by a woman to be printed, in 1474; a century before that, the father of Renaissance poetry, Petrarch, had extolled her virtues in a letter to the wife of the Holy Roman Emperor, and she was one of the subjects of Giovanni Boccaccio's 1374 On Famous Women, his 106-entry consideration of female genius from Eve to Joanna, the crusader Queen of Jerusalem and Sicily. Boccaccio explains that Proba collected lines of Virgil with such "great skill, aptly placing the entire lines, joining the fragments, observing the metrical rules, and preserving the dignity of the verses, that no one except an expert could detect the connections." As a result of her genius, a reader might think that "Virgil had been a prophet as well as an apostle," the cento suturing together the classical and the Hebraic, Athens and Jerusalem.

Ever the product of his time, Boccaccio could still only appreciate Proba's accomplishment through the lens of his own misogyny, writing that the "distaff, the needle, and weaving would have been sufficient for her had she wanted to lead a sluggish life like the majority of women." Boccaccio's myopia prevented him from seeing that this was the precise nature of Proba's genius: she was a weaver. The miniatures which illustrate a 15th-century edition of Boccaccio give truth to this, for despite the chauvinism of the text, Proba is depicted in gold-threaded red with white habit upon her head, holding aloft with a wand a beautiful, expanding blue sphere studded with stars, a strangely scientifically accurate picture of the universe, as the poet sings the song of Genesis in the tongue of Virgil. Whatever anonymous artist saw fit to depict Proba as a mage understood her well; for that matter they understood creation well, for perhaps God can generate ex nihilo, but artists must always gather their material from fragments shored against their ruin.

In our own era of allusion, reference, quotation, pastiche, parody, and sampling, you'd think that the cento would have new practitioners and new readers. There is something of the disk jockeys Danger Mouse, Fatboy Slim, and Girl Talk in the idea of remixing a tremendous number of independent lines into some synthesized newness; something oddly of the Beastie Boys' Paul's Boutique in the very form of the thing. But centos proper are actually fairly rare in contemporary verse, despite T.S. Eliot's admission that "mature poets steal." Perhaps with more charity, Allen argues that reading is a "process of moving between texts. Meaning becomes something which exists between a text and all the other texts to which it refers and relates." But while theorists have an awareness of the ways in which allusion dominates the modernist and post-modernist sensibility—what theorists who use the word "text" too often call "intertextuality"—the cento remains as obscure as other abandoned poetic forms from the Anacreontic to the Zajal (look them up). Lehman argues that modern instances of the form are "based on the idea that in some sense all poems are collages made up of other people's words; that the collage is a valid method of composition, and an eloquent one."

Contemporary poets who've tried their hand include John Ashbery, who wove together Gerard Manley Hopkins, Lord Byron, and Eliot; as well as Peter Gizzi, who in "Ode: Salute to the New York School" made a cento from poets like Frank O'Hara, James Schuyler, and Ashbery. Lehman has tried his own hand at the form, to great success. In commemoration of his Oxford Book of American Poetry, he wrote a cento for The New York Times that's a fabulous chimera, its anatomy drawn from a diversity indicative of the sweep and complexity of four centuries of verse, including among others Robert Frost, Hart Crane, W.H. Auden, Gertrude Stein, Elizabeth Bishop, Edward Taylor, Jean Toomer, Anne Bradstreet, Henry Wadsworth Longfellow, Edna St. Vincent Millay, Wallace Stevens, Robert Pinsky, Marianne Moore, and, this being a solidly American poem, our grandparents Walt Whitman and Emily Dickinson.

Lehman contributed an ingenious cento sonnet in The New Yorker assembled from various Romantic and modernist poets, his final stanza reading “And whom I love, I love indeed,/And all I loved, I loved alone,/Ignorant and wanton as the dawn,” the lines so beautifully and seamlessly flowing into one another that you’d never notice that they’re respectively from Samuel Taylor Coleridge, Edgar Allan Poe, and William Butler Yeats. Equally moving was a cento written by editors at the Academy of American Poets, which in its entirety reads:

In the Kingdom of the Past, the Brown-Eyed Man is King
Brute. Spy. I trusted you. Now you reel & brawl.
After great pain, a formal feeling comes–
A vulturous boredom pinned me in this tree
Day after day, I become of less use to myself,
The hours after you are gone are so leaden.
Take this rather remarkable little poem on its own terms. Its ambiguity is striking, and the lyric is all the more powerful for it. To whom is the narrator speaking, who has been trusted and apparently violated that loyalty? Note how the implied narrative of the poem breaks after the dash that end-stops the third line. In the first part of the poem, we have declarations of betrayal; somebody is impugned as "Brute. Spy." But from that betrayal, that "great pain," there is some sort of transformation of feeling; neither acceptance nor forgiveness, but almost a tacit defeat, the "vulturous boredom." The narrator psychologically, possibly physically, withers. They become less present to themselves, "of less use to myself." And yet there is something to be said for the complexity of emotions we often have toward people, for though it seems that this poem expresses the heartbreak of betrayal, the absence of its subject is still that which affects the narrator, so that the "hours…are so leaden." Does the meaning of the poem change when I tell you that it was stitched together by those editors, drawn from a diversity of poets in different countries living at different times? That the "true" authors are Charles Wright, Marie Ponsot, Dickinson, Sylvia Plath, and Samuel Beckett?

Steiner claims that "There is, stricto sensu, no finished poem. The poem made available to us contains preliminary versions of itself. Drafts [and] cancelled versions." I'd go further than Steiner even, and state that there is no individual poet. Just as all drafts are ultimately abandoned rather than completed, so is the task of completion ever deferred to subsequent generations. All poems, all writing, and all language for that matter, are related to something else written or said by someone else at some point in time, a great chain of being going back to beginnings immemorial. We are, in the most profound sense, always finishing each other's sentences. Far from something to despair at, this truth is something that binds us together in an untearable patchwork garment, where our lines and words have been loaned from somewhere else, given with the solemn promise that we must pay it forward. We're all just lines in a conversation that began long ago, and thankfully shall never end. If you listen carefully, even if it requires a bit of decipherment or decoding, you'll discover the answer to any query you may have. Since all literature is a conversation, all poems are centos. And all poems are prophecies whose meanings have yet to be interpreted.

Image credit: Unsplash/Alexandra.

Ten Ways to Change Your God

"Well, it may be the devil or it may be the Lord / But you're gonna have to serve somebody." —Bob Dylan (1979)

1.Walking Cambridge's Trinity Lane in 1894, Bertrand Russell had an epiphany concerning the ontological proof for God's existence, becoming an unlikely convert effected by logical argumentation. In Russell's essay "Why I Became a Philosopher," included in Amelie Oksenberg Rorty's anthology The Many Faces of Philosophy: Reflections from Plato to Arendt, the logician explains how his ruminations turned to fervor, writing that "I had gone out to buy a tin of tobacco; on my way back, I suddenly threw it up in the air, and exclaimed as I caught it: 'Great Scott, the ontological argument is sound.'" An atheist had a brief conversion—of a sort.

Not exactly Saul being confronted with the light that (quoting Euripides's The Bacchae) told him "It is hard for thee to kick against the pricks," or Augustine in his Confessions recounting that after a ghostly young voice told him to "Take up and read!", he turned to Paul's epistles. Russell's conversion was a bit more abstract—of the head rather than the heart. In his flat-cap, tweed jacket, and herring-bone bowtie, he was converted not by the Holy Spirit, but by a deductive syllogism. Envision the co-author of Principia Mathematica, which attempted to rigorously reduce all of mathematics to logic, suddenly being moved by the spirit.

Derived by the medieval monk Anselm of Canterbury in his 1078 Proslogion, the ontological argument holds that since existence must be a property of perfection, and God is a priori defined as a perfect being, then, quod erat demonstrandum, God must exist. Russell explains this metaphysical trick in his Nobel Prize-winning History of Western Philosophy: a "Being who possesses all other perfections is better if He exists than if He does not, from which it follows that if he does not He is not the best possible Being."
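Laid bare, the trick is easier to see. Here is one common reconstruction of the syllogism (a sketch only, not Anselm's own Latin, and only one of several formalizations philosophers have offered), where G(x) reads "x is God," R(x) reads "x exists in reality," and Gr(y, x) reads "y is greater than x":

\begin{align*}
\text{P1 (definition):} \quad & \forall x\,\big(G(x) \rightarrow \neg\exists y\,\mathrm{Gr}(y,x)\big) \\
\text{P2 (existence as perfection):} \quad & \forall x\,\big(\neg R(x) \rightarrow \exists y\,(R(y) \wedge \mathrm{Gr}(y,x))\big) \\
\text{C (conclusion):} \quad & \forall x\,\big(G(x) \rightarrow R(x)\big)
\end{align*}

The deduction itself is valid: were some God-candidate not real, P2 would supply a greater being, contradicting P1. Everything therefore hangs on P2, the premise that treats existence as a property capable of making one thing "greater" than another.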

From Aquinas to Rene Descartes, there is a venerable history of attempting to prove the existence of an omniscient, omnipotent, omnipresent deity, though as Nathan Schneider writes in God in Proof: The Story of a Search from the Ancients to the Internet, these arguments are "taught, argued about, and forgotten, sometimes saving a person's particular faith, sometimes eroding it, and usually neither." In defense of Anselm, nobody in the 11th century doubted God's existence, and such proofs weren't designed to convince, but rather to glory in divinity. As a further defense, his proof has endured in a manner that others haven't. Cosmology and evolution have overturned most of the rest, making them seem primitive to the point of adorableness, but Anselm endures.

Still, the syllogism can't help but seem like a bit of a magic trick, defining God into existence rather than establishing even what type of God we're to believe in. Critics of Anselm maintain that existence isn't a property in the same way that other qualities are. We can imagine all sorts of characters with all sorts of qualities, but that doesn't mean that they have to exist. Defenders of Anselm would claim that God isn't like any other character, since a perfect thing that doesn't exist can't be said to be a perfect thing, and God is a perfect thing. Critics would reply that it's possible to conceive of a perfect city, but that doesn't mean you can buy an Amtrak ticket there, nor would a benevolent God allow Penn Station to look as it does. As the puzzle-writer (and theist) Martin Gardner notes in his delightful The Whys of a Philosophical Scrivener, "I agree with the vast majority of thinkers who see the proof as no more than linguistic sleight-of-hand."

Eventually Russell's new faith diffused like incense from a swinging thurible. If philosophy got Russell into this mess, then it also got him out. Russell explains that Immanuel Kant in his Critique of Pure Reason would "demolish all the purely intellectual proofs of the existence of God." But what faith had Russell gained on Trinity Lane? It wasn't belief in the God for whom that street was named, nor in the Lord of Abraham, Isaac, and Jacob. What Russell's faith was in, had always been in, and would always be in, was the power of reason, and in that he was unwavering.

David Hume, another of Russell's antecedents, wrote in his 1739 A Treatise of Human Nature that "Reason is, and ought only to be the slave of the passions." We're going to believe what we're going to (dis)believe, and we'll concoct the reasons for it later. For his part, late in life, Russell was asked how he'd respond if upon death he were brought before God's throne and asked why he had dared not to believe. Russell said that he'd answer "Not enough evidence!"

2.According to mercurial family lore, when my paternal grandmother’s grandfather, August Hansmann, boarded a New York-bound steamship two years after the American Civil War and a year after his native Hanover was subsumed into Prussia, he brought along with him a copy of the Dutch philosopher Baruch Spinoza’s Tractatus Theologico-Politicus, denounced when it was printed in 1670 as “a book forged in hell…by the devil himself.” Like Spinoza, Hansmann was a Jew who lived among gentiles, and like Spinoza, he understood that being Other in a narrative not written by yourself had tragic consequences.

Born illegitimate, Hansmann was raised Jewish even though his father was Christian; a man who understood how being two things sometimes meant that you were seen as nothing, he also knew the strange freedom in the truth that dictated faith is no faith at all. Similarly, Spinoza was a Sephardic Jew of converso background whose Portuguese ancestors practiced their Judaism in secret until Dutch freedom allowed them to reinvent their hidden faiths. Hansmann encountered Spinoza’s celebration of religious liberty, “where everyone’s judgement is free and unshackled, where each may worship God as his conscience dictates, and where freedom is esteemed before all things.” For the pious lens grinder, content to work by the tulip-lined canals of red-brick Amsterdam, religious truth can only be discovered without shackles, divinity only visible if you’re not compelled by Church or State.

When the Jews of Spain and Portugal were forced to convert to Catholicism, many secretly practiced the mitzvoth, venerating the Sabbath, abjuring treyf, and kissing mezuzahs surreptitiously concealed within the ceramic blue slipper of the Virgin. As scholar Karen Armstrong notes in The Battle for God, these were people who “had been forced to assimilate to a…culture that did not resonate with their inner selves.” When finally able to practice their religion in Holland, many of them then discovered that the Judaism of the rabbis was not the same Judaism that they’d imagined, and so they chose to be something else, something completely new—neither Jewish nor Christian, but rather nothing. Armstrong writes that such persecution ironically led to the “first declarations of secularism and atheism in Europe.”

Many of those slurred as swinish Marranos found it more honest to live by the dictates of their own reason. Spinoza was the most famous, condemned by his synagogue for writing things like “I say that all things are in God and move in God,” holding that nature is equivalent with the Lord, so that either nothing is God or everything is. Such pantheism is what made some condemn Spinoza as an atheist, and others such as Russell later describe him as a “God-intoxicated man” who saw holiness in every fallen leaf and gurgling creek, his very name, whether “Baruch” or “Benedict,” meaning “blessed.”

Rebecca Newberger Goldstein, in Betraying Spinoza: The Renegade Jew Who Gave Us Modernity, asks if he can “be considered…a Jewish thinker?” She argues that his universalism derives from the Mosaic covenant, the monotheism of the Shema extended so that God is Everything. As a result, he is the primogeniture for a certain type of rational, secular, progressive, liberal, humane contemporaneity. On that steamer crossing the Atlantic, Hansmann may have read that “freedom [can] be granted without prejudice…but also that without such freedom, piety cannot flourish.” My great-great-grandfather lived his life as a Jew, but the attraction he saw in Spinoza was that each individual could decide for themselves whether to be Jew, Catholic, Protestant, or nothing.

Hansmann worked as a peddler on the Lower East Side, until the Homestead Act enticed him to Iowa, where he married a Huguenot woman who bore him 10 children, while he worked as a trader among the Native Americans. He refused to raise his children in any religion—Jewish or Protestant—preferring rather that they should decide upon reaching adulthood. And so, a union was made between the Jewish and the Low Church Protestant, rejecting both baptism and bris, so that my grandmother born on the frontier had absolutely no religion at all.

That such things are even possible—to be of no religion—is due in no small part to Spinoza’s sacrifice, his congregation having excommunicated him by extinguishing each individual light in the synagogue until the assembled dwelled in darkness. From that expulsion, Spinoza was expected to find refuge among the Protestants—but he didn’t. I’ve a photo from the early years of the 20th century: August Hansmann surrounded by his secular, stolid, midwestern progeny, himself sitting in the center with a thick black beard, and a kippah barely visible upon his head.

3.A long line of Spinoza’s ancestors, and my great-great-grandfather’s ancestors, would have concluded Pesach evenings with a “Next year in Jerusalem,” praying for the reestablishment of the Temple destroyed by the Romans in the first century. Less known than the equally exuberant and plaintive Passover declaration is that, for a brief period in the fourth century, it seemed that the Temple might actually be restored, ironically by Rome’s last pagan emperor. Born in Constantinople only six years after the Council of Nicaea convened to define what exactly a Christian was, Julian the Apostate would mount a failed revolution.

His uncle was Rome’s first Christian emperor, who conquered by the cross and turned his Rome over to Christ. Julian was of a different perspective, seeing in the resurrection of Apollo and Dionysus, Jupiter and Athena, the rejuvenation of Rome. He bided his time until military success foisted him onto the throne, and then Julian revealed himself as an initiate into the Eleusinian Mysteries, a celebrant of Persephone and Demeter who greeted the morning sun and prayed for the bounty of the earth, quoted in the W. Heinemann edition of The Works of the Emperor Julian as having written “I feel awe of the gods, I love, I revere, I venerate them.”

In Julian’s panegyrics, one can smell the burning thyme and sage, feel the hot wax from votive candles, spy the blue moonlight filtered through pine trees in a midnight cedar grove. If Plutarch recorded that the very heavens had once declared “the great god Pan is dead,” then Julian prayed for his return; if the oracles at Delphi and the Sibyllines had been silenced by the Nazarene, then the emperor wanted the divinations of those prophets to operate once again. Julian wanted this paganism to be a new faith: an organized, unified, consolidated religion that bore more similarity to the cohesion of the Christian Church than to the rag-tag collection of rituals and superstitions that had defined previous Roman belief.

Classicist Robin Lane Fox makes clear in Pagans and Christians that this wasn’t simple nostalgia. Fox explains that those who returned to paganism normally did so with “an accompanying philosophy” and that apostasy “always led to a favor for some systematic belief.” The emperor’s conversion was a turning back combined with the reformer’s desire for regeneration. In paganism, Julian approached origin, genesis, birth—less conversion than a return to what you should have been, but were denied.

Julian the Apostate endures as a cipher—duplicitous reactionary who’d see Christian Rome turn back, or tolerant visionary who theologically elevated paganism? Christian thinkers had long commandeered classical philosophy; now pagan thinkers were able to apply the same analytical standards to their own beliefs, developing theology as sophisticated as that of Christianity. The American rake and raconteur Gore Vidal repurposed the emperor as a queer hero of liberalism in his unusual 1964 novel Julian, having his protagonist humanely exclaim that “Truth is wherever man has glimpsed divinity.” Where some had seen those intimations in Golgotha’s sacrifice, the Apostate saw them in the oracles of Chaldea or the groves of Athena.

Far from banning the new faith, Julian declared that “By the gods I do not want the Galileans to be killed or beaten unjustly nor to suffer any other ill.” Julian was rather interested in monopolistic trust-busting, and in part that included funding the rebuilding of the Jewish Temple that had been destroyed by the emperor’s predecessors. The building of a Third Temple would be terminated when, as a Roman witness to the construction attempts wrote, “fearful balls of fire [broke]…out near the foundations…till the workmen, after repeated scorchings, could approach no more.” The Christians attributed the disaster to God; the Jews and Romans to the Christians.

The desire for a pagan Rome would similarly end with Julian’s defeat on the battlefields of Persia, an emperor who longed to see old gods born again now forced to declare that “You have won, Galilean.” Hard to reverse an eclipse, and so we supplicate on another mournful and deferred day—“Next year at Delphi.”

4.The titular character in Julian claims that “academics everywhere are forever attacking one another.” During the fourth century, the academic debates were theological, all of those schisms and heresies, excommunications and counter-excommunications between exotic groups with names like the Monophysites and the Chalcedonians, the Arians and the Trinitarians. By the middle of Vidal’s 20th century, such disputations were just as rancorous, but theology was now subsumed into politics. Vidal’s own politics were strange, broadly left but with a sympathy afforded to the anti-establishmentarians of any ideological persuasion.

Vidal is most celebrated for calling the conservative founder of the National Review, William F. Buckley, a “crypto-Nazi” during a debate on ABC News scheduled to coincide with the 1968 Democratic convention; even the pyrotechnic rainbow of early color television was unable to conceal the pure hatred between those two prep school grads. If the earliest years of Christianity saw bishops and monks moving between ever more nuanced theological positions, then the 20th century was an era of political conversion, liberals becoming conservatives and conservatives becoming liberals, with Buckley’s magazine a fascinating case study in political apostasy.

Buckley’s politics were cradle-to-grave Republican conservatism, even as he garnered a reputation for expelling acolytes of both Ayn Rand and John Birch from the movement as if he were a medieval bishop overseeing a synod (they’ve long since found a way back in). Entering public life with his 1951 God and Man at Yale: The Superstitions of “Academic Freedom,” Buckley understood better than most how ideology is theology by another name (even as I personally revile his politics). In this milieu, National Review was the stodgy, tweedy vanguard of the reactionary intelligentsia, defining a conservative as someone who “stands athwart history, yelling Stop, at a time when no one is inclined to do so.”

The problem with a manifesto that defines itself entirely by anti-progress is that such a doctrine can be rather nebulous, and so many of the bright young things Buckley hired for the National Review, such as Joan Didion and Garry Wills, found themselves moving to the left. Such were the subtleties of conversion that Wills could be both the author of Confessions of a Conservative and a journalist placed on Richard Nixon’s infamous “enemies list.”

As people become harder of hearing and their bone density decreases, movement from the left to the right does seem the more predictable narrative. For every Garry Wills, there’s a Norman Podhoretz, an Irving Kristol, a David Horowitz, a Christopher Hitchens. Leave it to the armchair Freudians to ascertain what Oedipal complex made those men of the left move towards the Big Daddy of right-wing politics, but what’s interesting are the ways in which they refashioned conservatism in a specifically leftist manner. Their migration was not from milquetoast Democratic liberalism, for they’d indeed been far to the left, several of them self-described Trotskyites. And as the Aztecs who became Catholic kept secretly worshiping their old gods, or as basilicas were built atop temples to Mithras, so too did those doctrines of “permanent revolution” find themselves smuggled into neoconservatism.

If politics is but religion by another means, then it’s the ideological conversion that strikes us as most scandalous. We’ve largely ceded the ground on the sacred—what could be less provocative than abandoning Presbyterianism for Methodism? But politics, that’s the thing that keeps us fuming for holy war, and we’re as titillated by stories of conversion as our ancestors were by tales of heresy and schism. The writer Daniel Oppenheimer observes, in Exit Right: The People Who Left the Left and Reshaped the American Century, that “belief is complicated, contingent, multi-determined. But do we really know it? Do we feel it?” Strange to think that Elizabeth Warren was once a Republican, and the man whom she hopes to beat for the presidency was once a Democrat, but such are the vagaries of God and man, whether at Yale or anywhere else.

5.For all their differences, Buckley and Vidal could at least agree on the martini. Buckley would write in 1977 that a “dry martini even at night is a straightforward invitation for instant relief from the vicissitudes of a long day,” and Vidal in his novel Kalki published a year later would rhapsodize about the “martini’s first comforting haze.” On the left or on the right, one thing WASPs concurred about (and though Buckley was technically Catholic he had the soul of an Episcopalian) was the cocktail hour. I’ve no idea if the two had been drinking before their infamous sparring on ABC, though the insults, homophobia, and violent threats make me suspicious.  

Better that they’d followed the path of conversion taken by another prep school boy who moved in their social circles, John Cheever: after his brother checked him into New York’s Smithers Alcoholic Rehabilitation Unit on April 9, 1975, he never took another drink. Cheever had lived up to the alcoholic reputation of two American tribes—High Church Protestants and Low Church writers. From the former he inherited both the genes and an affection for gin and scotch on a Westchester porch watching the trains from Grand Central thunder upstate, and from the latter he took the Dionysian myth that conflates the muse with ethanol, pining for inspiration but settling for vomiting in an Iowa City barroom.

Cheever was one of the finest short story writers of the 20th century, his prose as crystalline and perfect as a martini. In his addiction he kept company with Ernest Hemingway and F. Scott Fitzgerald, William Faulkner and Thomas Wolfe. Cheever’s story “The Swimmer” is one of the most perfect distillations of how alcoholism sneaks up on a person, and he avoids the laudatory denials you see in a lesser writer like Charles Bukowski. With the repressed self-awareness that is the mocking curse of all true alcoholics, Cheever would write in his diary some two decades before he got sober that “When the beginnings of self-destruction enter the heart it seems no bigger than a grain of sand,” no doubt understanding how a single drink is too many since a dozen is never enough.

His daughter Susan Cheever, herself a recovering alcoholic, notes in Drinking in America: Our Secret History that “My father’s drinking had destroyed his body, but it had also distorted his character—his soul. The restoration of one man through the simple measure of not drinking was revelatory.” The ancients called them spirits for a reason, and in their rejection there is a conversion of a very literal sort. Cheever—along with his friend Raymond Carver—is the happy exception to the fallacy that finds romance in the gutter-death of literary genius, and he got sober by doing the hard work of Alcoholics Anonymous.

The central text of that organization was compiled by Bill W., the founder of AA; its title is technically Alcoholics Anonymous, but members informally call it “The Big Book.” Past the uninspired yellow-and-blue cover of that tome, Cheever would have read stories where he’d have “found so many areas where we overlapped—not all the deeds, but the feelings of remorse and hopelessness. I learned that alcoholism isn’t a sin, it’s a disease.” And yet the treatment of that disease was akin to a spiritual transformation.

It’s a tired debate whether Alcoholics Anonymous counts as scripture, but I’d argue that anything that so fully transforms the countenance of a person can’t help but be a conversion, for as the Big Book says, “We talked of intolerance, while we were intolerant ourselves. We missed the reality and the beauty of the forest because we were diverted by the ugliness of some of its trees.” I once was lost, and now I’m found, so on and so forth. When Cheever died, he had seven sober years—and they made all the difference.

6.Conversion narratives are the most human of tales, for the drama of redemption is an internal one, played out between the protagonist and his demons. Certain tropes recur—the pleasure, the perdition, the contrition, the repentance, the salvation. Augustine understood that we do bad things because bad things are fun—otherwise why would he write in Confessions, “Lord, grant me chastity—but not yet”? What readers thrill to are the details, the rake’s regress from dens of iniquity, from gambling, drinking, and whoring to some newfound piety.

For Cheever’s Yankee ancestors, the New England Puritans in whose shadow we’ve uneasily dwelled for the past four centuries, “election” was not a matter of personal choice, but rather grace imparted to the unworthy human. Easy to see some issues of utility here, for when the accumulation of wealth is read as evidence of God’s grace, and it’s also emphasized that the individual has no role in his own salvation, the inevitable result is spiritual disenchantment and marginalization. By the middle of the 18th century, some five generations after the first Pilgrim’s slipper graced Plymouth Rock, the Congregationalist pastors of New England attempted to suture the doubts of their flocks, coming up with “half-way covenants” and jeremiads against backsliding so as to preserve God’s bounty.

Into that increasingly secular society would come an English preacher with a thick Gloucester accent named George Whitefield, who first arrived in the New World in 1738. Technically an Anglican priest, Whitefield was a confidant of John Wesley, the father of Methodism, and from that “hot” faith the preacher would draw a new vocabulary, dispelling John Calvin’s chill with the exhortation that sinners must be born again. Crowds of thousands were compelled to repent, for “Come poor, lost, undone sinner, come just as you are to Christ.” On the Eastern seaboard, the Englishman would preach from Salem to Savannah, more than 10,000 times, drawing massive crowds and even impressing that old blasphemer Benjamin Franklin at one Philadelphia revival (the scientist donated money).

Such was the rhetorical style of what’s called the Great Awakening, when colonial Americans abandoned the staid sermons of the previous century in favor of this shaking, quaking, splitting, fitting preaching. Whitefield and Spinoza shared nothing in temperament, and yet one could imagine that the latter might smile at the liberty that “established fractious sectarianism as its essential character,” as John Howard Smith writes in The First Great Awakening: Redefining Religion in British America, 1725-1775. Whitefield welcomed worshippers into a massive tent—conversion as a means towards dignity and agency.

So ecumenical was Whitefield’s evangelization that enslaved people came in droves to his revivals, those in bondage welcomed as subjects in Christ’s kingdom. Such was the esteem in which the reverend was held that upon his passing in 1770 a black poet from Boston named Phillis Wheatley would regard the “happy saint” as a man whom “in strains of eloquence refin’d/[did] Inflame the heart, and captivate the mind.” Whitefield’s religious charity, it should be said, was limited. He bemoaned the mistreatment of the enslaved while simultaneously advocating for the economic benefits of that very institution.

As different as they were, Whitefield and Malcolm X were both children of this strange Zion that allows such reinvention. Malcolm X writes in a gospel of both American pragmatism and American power, saying that “I’m for truth, no matter who tells it. I’m for justice, no matter who it’s for or against…I am for whoever and whatever benefits humanity as a whole.” Conversion can be a means of seizing power; conversion can be a means of reinvention.

Activist Audre Lorde famously wrote that “The master’s tools will never dismantle the master’s house,” and for a young Harlem ex-con born Malcolm Little, the Christianity of Wheatley and Whitefield would very much seem to be the domain of the plantation’s manor, so that conversion to a slave religion is no conversion at all. Mocking the very pieties of the society that Whitefield preached in, Malcolm X would declare “We didn’t land on Plymouth Rock—Plymouth Rock landed on us.” Malcolm X’s life was an ongoing narrative of conversion, of the desire to transform marginalization into power. As quoted by Alex Haley in The Autobiography of Malcolm X, the political leader said “I have no mercy or compassion in me for a society that will crush people, and then penalize them for not being able to stand up.”

Transformation defined his rejection of Christianity, his membership in the Nation of Islam, and then finally his conversion to orthodox Sunni Islam. Such is true even in the rejection of his surname for the free-floating signifier of “X,” identity transformed into a type of stark, almost algebraic, abstraction. If America is a land of conversion narratives, then The Autobiography of Malcolm X is ironically one of the most American. Though as Saladin Ambar reminds us in Malcolm X at Oxford Union, his “conversion was indeed religious, but it was also political,” with all that implies.

7.It is a truth universally acknowledged, that an apostate in possession of a brilliant spiritual mind, must be in want of a religion. If none of the religions that already exist will do, then it becomes her prerogative to invent a better one and convert to that. Critic Harold Bloom writes in The American Religion that “the religious imagination, and the American Religion, in its fullest formulations, is judged to be an imaginative triumph.” America has always been the land of religious invention, for when consciences are not compelled, the result is a brilliant multitude of schisms, sects, denominations, cults, and communes. In his Essays, the French Renaissance genius Michel de Montaigne quipped that “Man is certainly stark mad; he cannot make a worm, and yet he makes gods by the dozens.” Who, however, if given the choice between a worm and a god, would ever possibly pick the former? For America is a gene-splicing laboratory of mythology, an in vitro fertilization clinic of faith, and we birth gods by the scores.

Consider Noble Drew Ali, born Timothy Drew in 1886 to former North Carolina slaves who lived amongst the Cherokee. Ali compiled into the Holy Koran of the Moorish Science Temple of America a series of ruminations, meditations, and revelations he had concerning what he called the “Moorish” origins of African-Americans. Drawing freely from Islam, Christianity, Buddhism, Hinduism, and the free-floating occultism popular in 19th-century America, Ali became one of the first founders of an Afrocentric faith in the United States, his movement the original spiritual home to Wallace Fard Muhammad, founder of the Nation of Islam. Ali writes that the “fallen sons and daughters of the Asiatic Nation of North America need to learn to love instead of hate; and to know of their higher self and lower self. This is the uniting of the Holy Koran of Mecca for teaching and instructing all Moorish Americans.”

Ali drew heavily from mystical traditions, combining his own idiosyncratic interpretations of Islam alongside Freemasonry and Rosicrucianism. Such theurgy was popular in the 19th century, a melancholic era when the better part of a million dead from Antietam and Gettysburg called out to the living, who responded with séances and Ouija boards. Historian Drew Gilpin Faust recounts in This Republic of Suffering: Death and the American Civil War that “Many bereaved Americans…unwilling to wait until their own deaths reunited them with lost kin…turned eagerly to the more immediate promises of spiritualism.” The 19th century saw mass conversions to a type of magic, a pseudo-empirical faith whose sacraments were technological—the photographing of ghostly ectoplasm, or the receipt of telegrams from beyond the veil of perception.

Spiritualism wasn’t merely a general term for this phenomenon, but the name of an actual organized denomination (one that still exists). Drawing from 18th-century occultists like Emanuel Swedenborg and Franz Mesmer, the first Spiritualists emerged out of the rich soil of upstate New York, the “Burned-Over District” of the Second Great Awakening (sequel to Whitefield’s First). Such beliefs held that the dead were still among us, closer than our very breath, and that spirits could interact with the inert matter of our world, souls intermingled with the very atoms of our being.

Peter Manseau writes in The Apparitionists: A Tale of Phantoms, Fraud, Photography, and the Man Who Captured Lincoln’s Ghost, that “It was a time when rapidly increasing scientific knowledge was regarded not as the enemy of supernatural obsessions, but an encouragement…Electricity had given credence to notions of invisible energies…The telegraph had made communication possible over staggering distances, which raised hopes of receiving messages from the great beyond.”

Among the important founders of the movement were the Fox Sisters of Hydesville, N.Y., three siblings who in 1848 claimed that they’d been contacted by spirits, including one named “Mr. Splitfoot,” who communicated in raps, knocks, and clicks. Decades later, Margaret Fox would admit that it was a hoax, since a “great many people when they hear the rapping imagine at once that the spirits are touching them. It is a very common delusion.” Despite the seeming credulity of the movement’s adherents, Spiritualists were crucial reformers, with leaders like Cora L.V. Scott and Paschal Beverly Randolph embracing abolitionism, temperance, civil rights, suffragism, and labor rights. When the cause is good, perhaps it doesn’t matter which god’s vestments you wear.

And of course the great American convert to a religion of his own devising is Joseph Smith. America’s dizzying diversity of faith confused young Smith, who asked “Who of all these parties are right, and how shall I know?” From the same upstate environs as the Fox Sisters, Smith was weaned on a stew of evangelicalism and occultism, a child of the Second Great Awakening, who in those flinty woods of New York dreamt of finding shining golden tablets left by angels. Writing in No Man Knows My History: The Life of Joseph Smith, scholar Fawn M. Brodie notes that for the New England and New York ancestors of Smith there was a “contempt for the established church which had permeated the Revolution, which had made the federal government completely secular, and which was in the end to divorce the church from the government of every state.”

Smith rather made America itself his invented religion. Stephen Prothero writes in American Jesus: How the Son of God Became a National Icon that there is a tendency of “Americans to make their nation sacred—to view its citizens as God’s chosen people.” Yet it was only Smith’s Mormons who so completely literalized such a view, for the Book of Mormon describes this as “a land which is choice above all other lands.” The effect was electrifying; Brodie writes: “In the New World’s freedom the church had disintegrated, its ceremonies had changed, and its stature had declined.” What remained was a vacuum in which individual minds could dream of new faiths. Spinoza would recognize such independence, his thin face framed by his curled wig, reflected back from the polished glow of one of Moroni’s tablets excavated from the cold ground of Palmyra, N.Y.

8.“In the beginning there was the Tao, and the Tao was God,” reads John 1:1 as translated in the Chinese Union Version bible commissioned by several Protestant denominations between 1890 and 1919. Appropriating the word “Tao” makes intuitive sense, arguably closer to the Neo-Platonist language of “Logos,” as the term is rendered in the koine Greek, than to the rather confusing terminology of “the Word” as it’s often translated in English.

Read cynically, this bible could be seen as a disingenuous use of Chinese terminology so as to make Christianity feel less foreign and more inviting, a Western wolf in Mandarin robes. More charitably, such syncretism could be interpreted as an attempt to find the universal core between those two religions, a way of honoring truth regardless of language. Conversion not between faiths, but above them. Perhaps naïve, but such a position might imply that conversion isn’t even a possibility, that all that is needed in the way of ecumenicism is to place the right words with the right concepts.

The earliest synthesis between Taoism, Buddhism, Confucianism, and Christianity is traceable to the seventh century. At the Mogao Caves in Dunhuang, Gansu Province, a cache called the Jingjiao Documents, penned during the Tang Dynasty and attributed to the students of a Syrian monk named Alopen, was rediscovered in 1907. Alopen was a representative of that massive eastern branch of Christianity slurred by medieval European Catholics as being “Nestorian,” after the bishop who precipitated their schism at a fifth-century church council (the theological differences are arcane, complicated, and for our purposes unimportant).

During those years of late antiquity, European Christendom was a backwater; before the turn of the first millennium the Catholicos of Baghdad would have been a far more important cleric than the Pope was, for as scholar Philip Jenkins explains in The Lost History of Christianity: The Thousand-Year Golden Age of the Church in the Middle East, Africa, and Asia—and How It Died, the “particular shape of Christianity with which we are familiar is a radical departure from what was for well over a millennium the historical norm…For most of its history, Christianity was a tricontinental religion, with powerful representation in Europe, Africa, and Asia.”

In 635, Alopen was an evangelist to a pluralistic civilization whose history went back millennia. His mission was neither colonial nor mercantile, and as a religious scholar he had to make Christianity appealing to a populace content with their beliefs. And so, Alopen converted the Chinese by first converting Christianity. As with the translators of the Chinese Union Version bible, Alopen borrowed Taoist and Buddhist concepts, configuring the Logos of John as the Tao, sin as karma, heaven as nirvana, and Christ as an enlightened Bodhisattva.

Sinologist Martin Palmer, writing in The Jesus Sutras: Rediscovering the Lost Scrolls of Taoist Christianity, argues that Alopen avoided “what many missionaries have tried to do—namely, make people adapt to a Western mind-set.” Rather, Alopen took “seriously the spiritual concerns of China.” Alopen was successful enough that some 150 years after his arrival, a limestone stele was engraved in both Chinese and Syriac celebrating the history of Chinese Christianity. With a massive cross at the top of the Xi’an stele, it announced itself as a “Memorial of the Propagation in China of the Luminous Religion from Rome.” During a period of anti-Buddhist persecution in the ninth century, when all “foreign” religions were banned, the stele was buried, and by 986 a visiting monk reported that “Christianity is extinct in China.”

Like Smith uncovering his golden tablets, workers in 1625 excavated the Xi’an stele and, recognizing it as Christian, sent for Jesuits who were then operating as missionaries to the Ming Court. The Portuguese priest Alvaro Semedo, known to the court as Xie Wulu, saw the stele as evidence of Christian continuity; other clergy were disturbed that the monument was from a sect that the Church itself had deemed heretical 1,000 years before. The German Jesuit polymath Athanasius Kircher supplied a Latin translation of the stele, enthusing in his China Illustrata that Xi’an’s rediscovery happened by God’s will “at this time when the preaching of the faith by way of the Jesuits pervaded China, so that old and new testimonies…would go forth…and so the truth of the Gospel would be clear to everyone.” But was it so clear, this strange gospel of the Tao?

Much of Kircher’s book was based on his colleague Fr. Matteo Ricci’s accounts of the Ming Court. Ricci had taken to wearing the robes of a Confucian scholar, borrowing from both Confucius and Lao-Tzu in arguing that Catholicism was a form of those older religions. The Dominicans and Franciscans in China were disturbed by these accommodations, and by 1645 (some 35 years after Ricci had died) the Vatican’s Sacred Congregation for the Propagation of the Faith ruled against the Jesuits (though this was a process that went back and forth). Maybe there is something fallacious in simply pretending all religions are secretly the same. Prothero writes in God Is Not One: The Eight Rival Religions That Run the World that we often have “followed scholars and sages down the rabbit hole into a fantasy world in which all gods are one.” Catholicism is not Taoism, and that’s to the integrity of both.

But Ricci’s attitude was a bold one, and in considering different beliefs, he was arguably a forerunner of pluralistic tolerance. We risk abandoning something beautiful if we reject the unity that Alopen and Ricci worked for, because perhaps there is a flexibility to conversion, a delightful promiscuity to faith. Examining one of the Chinese watercolors of Ricci, resplendent in the heavenly blue silk of the panling lanshan with a regal, heavy, black putou on his head, a Roman inquisitor may have feared who exactly was converting whom.

9.In the painting, Sir Francis Dashwood—11th Baron le Despencer and Great Britain’s Chancellor of the Exchequer from 1762 to 1763—is depicted as if he were St. Francis of Assisi. Kneeling in brown robes, the aristocrat is a penitent in some rocky grove, a hazy blue-grey sfumato marking the countryside visible through a gap in the stones. In the corner is a silver platter, grapes and cherries tumbled onto the soil of this pastoral chapel, as if to remind the viewer of life’s mutability, “Vanity of vanities” and all the rest of it. Some tome—perhaps the Bible?—lies open slightly beyond the nobleman’s gaze, and with hand to breast, Dashwood contemplates what looks like a crucifix. But something is amiss in this portrait painted by Dashwood’s friend, that great chronicler of 18th-century foibles William Hogarth. The crucifix—it’s not Christ on the cross, but a miniature nude woman with her head thrown back. Suddenly the prurient grin on the stubbly face of Dashwood makes more sense.

If you happen to be an expert on 18th-century French pornography, you might notice that it’s not the gospels that lie open on a cracked spine next to Dashwood, but a copy of Nicolas Chorier’s Elegantiae Latini sermonis; were you familiar with the intricacies of Westminster politics in the 1760s, you may have observed that the golden, crescent halo above the baron’s head is actually a cartoon of the Earl of Sandwich in lunar profile.

Raised in the anti-Catholic environment of British high society, Dashwood had his disdain for religion incubated during his roguish youth on the fashionable Grand Tour of the continent—he was expelled from the Papal States. In the anonymously written 1779 Nocturnal Revels, a two-volume account of prostitution in London, the author claims that Dashwood “on his return to England, thought that a burlesque institution in the name of St. Francis, would mark the absurdity of such Societies; and in lieu of the austerities and abstemiousness there practiced, substitute convivial gaiety, unrestrained hilarity, and social felicity.”

To house his “Franciscans,” Dashwood purchased a former Cistercian abbey in Buckinghamshire that overlooked the Thames, and in dazzling stained glass had inscribed above its entrance the famous slogan from the Abbey of Thelema in François Rabelais’s 16th-century classic Gargantua and Pantagruel—“Do What Thou Wilt.” Its grounds were decorated with statues of Dionysus—Julian the Apostate’s revenge—and the gothic novelist (and son of a Prime Minister) Horace Walpole wrote that the “practice was rigorously pagan: Bacchus and Venus were the deities to whom they almost publicly sacrificed; and the nymphs and the hogsheads that were laid in against the festivals of this new church.” Within those gothic stone walls, Dashwood’s compatriots very much did do what they would, replacing sacramental wine with liquor, the host with feasting, and the Mass with their orgies. The Monks of Medmenham Abbey, founded upon a Walpurgis Night in 1752, initiated occasional worshipers including the respected jurist Robert Vansittart, John Montagu, 4th Earl of Sandwich, the physician Benjamin Edward Bates II, and the parliamentarian George Bubb Dodington, and in 1758 they hosted a colonial scientist named Benjamin Franklin (fresh from a Whitefield revival no doubt).

Such gatherings were not uncommon among the bored upper classes of European society; Black Masses were popular among French aristocrats into the 17th century, and in Britain punkish dens of obscenity like Dashwood’s were known as “Hell-Fire Clubs.” Evelyn Lord writes in her history The Hellfire Clubs: Sex, Satanism and Secret Societies that long before Dashwood ever convened his monks, London had been “abuzz with rumors of highborn Devil-worshipers who mocked the established Church and religion, and allegedly supped with Satan,” with the apparently non-Satanic members of Parliament pushing for anti-blasphemy legislation.

That’s the thing with blasphemy though—there’s no Black Mass without first the Mass, no Satan without God. Irreverent, impious, and scandalous though Dashwood may have been, such activities paradoxically confirm faith. Lord writes that the “hell-fire clubs represented an enduring fascination with the forbidden fruit offered by the Devil…But the members of these clubs faced a dilemma: if they believed in Satan and hell-fire, did they by implication believe in a supernatural being, called God, and a place called Heaven?” Should the sacred hold no charged power, were relics simply bits of rag and bone, then there would be no electricity in their debasement; were a crucifix meaningless, then there would be no purpose in rendering it pornographic. A blasphemous conversion, it turns out, may just be another type of conversion.

Geoffrey Ashe argues in The Hell-Fire Clubs: Sex, Rakes and Libertines that Thelema is an antinomian ethic that can be traced from Rabelais through the Hell-Fire Clubs down to today. He writes that such a history is “strange and unsettling. It discloses scenes of pleasure and laughter, and also some of the extremest horrors ever conceived. It introduces us to cults of the Natural, the Supernatural; to magic, black and otherwise.” Dashwood’s confraternity encompasses figures as diverse as the Marquis de Sade, the notorious occultist Aleister Crowley (who had Rabelais’s motto carved above the entrance to his own monastery in Sicily), and LSD evangelist Timothy Leary. Fear not the blasphemer, for such is merely a cracked prophet of the Lord. As Master Crowley himself wrote in Aceldama: A Place to Bury Strangers In, “I was in the death struggle with self: God and Satan fought for my soul those three long hours. God conquered—now I have only one doubt left—which of the twain was God?”

10.When St. Kateri Tekakwitha, lily of the Mohawks and the sainted maiden of the Iroquois village of Kahnawake, laid her head upon her deathbed one chill spring in 1680, it was said that the disfiguring scars of the smallpox she’d contracted vanished from her beautiful corpse. There in the dread wilderness of New France, where spring snows fall blue and deep and the horizon is marked with warm smoke from maple longhouses and fallen acorns are soggy under moccasin slippers, America’s indigenous saint would die. A witness recorded that Tekakwitha’s face “suddenly changed about a quarter of an hour after her death, and became in a moment so beautiful.” A fellow nun recorded that the evening of the saint’s death, she heard a loud knock at her door, and Tekakwitha’s voice saying “I’ve come to say good-bye; I’m on my way to heaven.”

Tekakwitha’s short decades were difficult, as they must by necessity be for anyone who becomes a saint. She was a victim of a world collapsing in on itself, of the political, social, economic, and ecological calamities precipitated by the arrival of the very people whose faith she would convert to, one hand holding a Bible and a crucifix, the other a gun—all of them covered in the invisible killing virus. Despite it being the religion of the invaders, Tekakwitha had visions of the Virgin and desired conversion, and so she journeyed over frozen Quebec ground to the village of the “Black Robes” who taught that foreign faith.

When Tekakwitha met with the Jesuits, they told the Iroquois woman not of the Tao, nor did they speak of heaven; rather, they chanted a hymn of Karonhià:ke, the realm from which the father of all things did send his only son to die. Of her own accord, Tekakwitha meditated on the words of the Jesuits, her confessor Fr. Cholenec recording that she finally said “I have deliberated enough,” and she willingly went to the baptismal font. She has for the past three centuries been America’s indigenous saint, a symbol of Christ reborn on this land, the woman of two cultures whom William T. Vollmann describes in his novel Fathers and Crows as “Tekakwitha…praying beside the Cross of maple wood she had made.”

Much controversy follows such conversions: are we to read Tekakwitha—who endures as a symbol of syncretism between Christianity and indigenous spirituality—as a victim? As a willing penitent? As some cross between the two? In his novel Beautiful Losers, the Canadian poet, novelist, and songwriter Leonard Cohen says of Tekakwitha that a “saint does not dissolve the chaos.” Tekakwitha is not a dialectic to resolve the contradictions between the Catholic and the Iroquois, the French and the Mohawk. She is not an allegory, a parable, a metaphor, or an example—she is Tekakwitha, a woman.

If we are to draw any allegorizing lesson from her example, it must be this—conversion, like death, is something that is finally done alone. Who are we to parse her reasons for embracing that faith? How can we fully inhabit the decisions of Julian, or Spinoza, or Hansmann, or Ricci? Nothing can be more intimate, or sometimes more surprising, than the turn of a soul, the conversion of a woman or man. We aren’t known to one another; we’re finally known only to God—though it’s impossible to say which one. When Tekakwitha’s appearance changed, was this an indication of saintliness? Of her true form? Of the beatified face when it looks upon the creator-god Ha-wen-ni-yu? All that can be said of conversion is that it’s never final; we’re always in the process of being changed, and we pray that it’s possible to alter our broken world in return. Converts, like saints, do not reconcile the chaos; they exist amidst it. In hagiography, we find not solution, but mystery—as sacred and holy as footprints on virgin Canadian snow, finally to be erased as the day turns to night.

Image credit: Unsplash/Diana Vargas.