Conquering Hell


“You have reason to wonder that you are not already in hell.” —Jonathan Edwards, “Sinners in the Hands of an Angry God” (1741)

“For the ones who had a notion, a notion deep inside/that it ain’t no sin to be glad you’re alive.”—Bruce Springsteen, “Badlands” (1978)

Charging only a quarter, Joseph Dorfeuille allowed the curious to view Hell itself—admission was half-price for children. Not far from the Ohio River, amongst the steep hills of the Queen City, the Western Museum of Cincinnati promised from 1820 to 1867 in an advertisement, “Come hither, come hither by night or by day,/there’s plenty to look at and little to pay.” Founded by physician Daniel Drake, known by enthusiasts as the “Ben Franklin of the West,” the institution was modeled after the Wunderkammers, the “wonder cabinets” of Europe, displaying shells and rocks, feathers and fossils, pottery shards and arrowheads. Even the ornithologist John James Audubon was on staff. Only two years after its founding, however, the trustees forced Drake to resign. Dorfeuille was hired in his place, and rather than assembling materials zoological, archeological, and geological, he understood that the public was curious about the “occasional error of nature.” In place of Drake’s edifying scientific exhibits, Dorfeuille mounted skeletons that moved by mechanical apparatus, dancing while an organ grinder played. He featured a diorama of wax figurines depicting the local murderer Cowan in the act, while also preserving in formaldehyde the head and heart of Mathias Hoover, a Cincinnati serial killer. And, with particular popularity, the director distributed huffs of nitrous oxide after his “lectures.” But no exhibit—even the laughing gas—was quite as popular as “Dorfeuille’s Hall.”

The hall was a recreation of characters and scenes from the 14th-century Italian poet Dante Alighieri’s epic religious allegory The Divine Comedy, as well as from the 17th-century British poet John Milton’s Paradise Lost. Molded in beeswax, the exhibit was mounted by Hiram Powers, who’d eventually become the first celebrated American neoclassical sculptor (Elizabeth Barrett Browning would pen a sonnet in honor of his work). Powers was tasked with illustrating our most grotesque visions—it would be among the most talked-about exhibits before the Civil War. He crafted wax figures of the demon Beelzebub and of the fallen arch-rebel himself, Lucifer. Adept in mechanism and sound effects, Powers made wax statues that would shakily move in the darkness while screams emanated. “Visitors…were so intrigued by the realism of the figures that they were constantly touching them for confirmation that they were indeed wax,” writes Andrea Stulman Dennett in Weird and Wonderful: The Dime Museum in America. “To minimize the damage to his sculptures,” she explains, “Dorfeuille had to put up an iron grating charged with a mild electrical current.” At the center was the “King of Terrors,” a stock Devil with red horns and pitchfork, smoke swirling about him. Originally Powers played the role, though an automaton would take over after he quit the Western Museum—moving on to a respectable art career in Washington, D.C., and then in Dante’s Florence—after Dorfeuille stiffed him on pay. By 1839, Dorfeuille sold off the Western Museum of Cincinnati, but he took his deteriorating waxworks to New York, where they were later immolated in a fire, their owner following them into the underworld a few months after. King Entropy awaits us all.

A skeleton at the exit to Dorfeuille’s Hall held aloft a sign with some doggerel on it: “So far we are equal, but once left,/Our mortal weeds of vital spark bereft,/Asunder, farther than the poles were driven;/Some sunk in deepest Hell, some raised to highest Heaven,” though highest Heaven was never as pruriently fascinating as deepest Hell. Powers didn’t outfit an exhibit with harp-playing, winged angels and shining halos; there were no wax figurines of the unassailable, the respectable, the decent. Not that it was ever much different, for people have always been more attracted—in both the neurotic’s fear and the sadist’s delight—to the fires of damnation. We marked the 700th anniversary of Dante’s death in September, and while I have no reliable numbers, my hunch is that readers of “Inferno” outnumber readers of the remaining canticles on Purgatory and Paradise by 10 to 1. Hell is more visual; if asked to envision Heaven we could offer gauzy, sepia-toned clichés about clouds and pearly gates, but if we’re being perfectly honest nothing about it sounds appealing. But Hell. Well, Hell, we can all agree, is interesting. The sulphur and shrieks, bitumen and biting, the light that burns eternally but gives off no glow. It all sounds pretty bad. While our curiosity draws us to the grotesque, we also can’t help but be haunted by Hell, traces of its ash smeared across even the most secular mind.

“Understand, I’m not speaking here only of the sincerely religious,” writes Dinty W. Moore in his irreverent To Hell With It: Of Sin and Sex, Chicken Wings, and Dante’s Entirely Ridiculous, Needlessly Guilt-Inducing Inferno, focusing on his shame-filled Catholic education. “The fable of our flawed souls, the troubling myth of original sin, the looming possibility of eternal damnation clandestinely infects even those of us who think ourselves immune: atheists, agnostics, secularists.” Supposedly only the most zealous lives without doubt, but just as such a pose evidences more anxiety than might be assumed, so too is nobody an atheist all of the time. The pious are haunted by absence and apostates by a presence, but the fear is the same. I don’t believe in a literal Hell, a place of eternal torment where our sins are punished by demons—most of the time. Hell is often on my mind, but unlike Moore I think that occasionally there is intellectual benefit in that which is sulphury (or at least the idea of it). Moore writes about the traumas of “depressive guilt brought on by religious malarkey,” and it would be a cold critic to disagree that belief has been used to oppress, persecute, and inculcate shame. Not just a cold critic, but one incapable of parsing objective reality. But even though he’s right, it’s still hard for me to throw the demon out with the hot coals.

Hell’s tragedy is that those who deserve to go there almost never think that they will, while the pious shame-filled neurotic imagines those flames with scrupulous anxiety. My soul is content if God prepared a place of eternal torment for the EXXON executive who sees climate change as an opportunity to drill for Arctic oil, the banker who gave out high-risk loans with one hand and then foreclosed on a family with the other, the CEO who makes billions getting rich off of slave labor, the racist representative who spreads hate for political gain, the NRA board member unconcerned with the slaughter of the innocent, the AM-radio peddlers of hokum and bullshit, the fundamentalist preacher growing rich off his flock’s despair, and the pedophile priest abusing those whom he was entrusted to protect. Here’s an incomplete list of people who don’t belong in Hell—anyone who eats the whole sleeve of Oreos, somebody who keeps on pressing “Still Watching” on the Netflix show that they’re bingeing, the person who shouts “Jesus Fucking Christ” after they stub their toe, anybody who has spied on their neighbor’s home through Zillow, the Facebook humble-bragger talking about a promotion who keeps hitting “Refresh” for the dopamine rush of digital approval, the awkward subway passengers suddenly fascinated by their feet when a loud panhandler boards, and masturbators. That so many guilty of the little sins, peccadillos, tics, frailties, and flaws that are universal and make us all gloriously imperfect humans so often feel crippling shame, guilt, and depression over these things is tragic. That my first category of sinners almost never feels those things is even more so.

Before either Hell or Heaven there was the shadow land of indeterminate conclusions. Neither the ancient Greeks nor the Jews much delineated an afterlife. Greek Hades was a grey place, a foggy place, and not particularly pleasant, even if it wasn’t exactly inferno. “Gloomy as night” is how Alexander Pope describes Hades in his 18th-century translation of Homer’s The Odyssey, a realm populated by “Thin airy shoals of visionary ghosts.” Excepting Elysium—where excellence is more honored than virtue—everybody had Hades to anticipate, and Homer is content to consign even the glorious to this depressing place. “I’d rather slave on earth for another man,” the hero Achilles tells Odysseus in Robert Fagles’s contemporary translation, “than rule down here over all the breathless dead.” The ancient Jewish conception of an afterlife was originally not much more hopeful, with the Hebrew Scriptures speaking of Sheol, described in Ecclesiastes as a place of “neither deed nor reckoning, neither knowledge nor wisdom.” As smudgy and unfocused as Sheol was, the Bible sometimes draws distinction (and conflation) between it and a horrific domain known as Gehenna.

Drawing its name from a dusty valley where pagan child sacrifice had once been practiced, and which became a filthy heap where trash was burnt in a frenzy of ash and dirt, Gehenna is ruled over by Baal and dedicated to the punishment of the wicked. Both the Talmud and the New Testament use the word “Gehenna,” an indication of how, in the first centuries of the Common Era, a comprehensive vision of the afterlife emerged in Judaism and Christianity. Among the Sadducees, who composed the Temple elite, the afterlife was largely defined by absence, but the Pharisees (from whom rabbinic Judaism emerged) shared with early Christians an interest in charting the geography of death, and Gehenna was a highlighted location. (It’s worth mentioning that the historical Pharisees bear no similarity to the hatchet job performed on them in the Gospels.) As it was, Jewish Gehenna would be understood slightly differently from Hell, more akin to Purgatory, a place of finite punishment cleansing the soul of its iniquities. Though as the 13th-century Kabbalistic Zohar makes clear, the truly evil are “judged over in this filth… [and] never get released… the fire remains.”

Though both Judaism and Christianity developed a complex vocabulary of Heaven and Hell, it’s not unfair to attribute much of the iconography of perdition to Dante. Writing a generation after Dante’s death, Giovanni Boccaccio predicted that as regards the Florentine poet’s name “the more it is furbished by time, the more brilliant it will ever be.” Boccaccio’s prediction has been perennially proven correct, with the Victorian critic John Ruskin describing Dante as the “central man of all the world,” and T.S. Eliot arguing that “Dante and Shakespeare divide the modern world between them; there is no third.” All of this might seem a tad Eurocentric, a tad chauvinist, and a tad too focused on Christianity—the apotheosis of Dante into a demigod. Concerning prosody—Dante’s ingenious interlocking rhyme scheme known as terza rima, or his ability to describe a “place void of all light,/which bellows like the sea in tempest,/when it is combated by warring winds”—he was brilliant. But acknowledging acumen is different from genuine engagement—even Voltaire quipped that more people valorize Dante than read him. And yet (there’s always an “And yet”…), there must be a distinction between the claim that Dante says something vital about the human condition, and the objective fact that in some ways Dante actually invented the human condition (or a version of it). John Casey argues in After Lives: A Guide to Heaven, Hell, & Purgatory that Dante’s was “without doubt the supreme imagining of the afterlife in all [Western] literature,” while Alberto Manguel in The Traveler, the Tower, and the Worm: The Reader as Metaphor claims that his epic has “acquired a permanent and tangible geography in our imagination.”

“Dante’s story, then, is both a landscape and a map,” writes Manguel. None of the poem’s unfortunates get out, save for one—the author. Midway in the course of life Dante falls into despair, loneliness, alienation, and The Divine Comedy is the record of his descent into and escape from those doldrums. Alice K. Turner argues in The History of Hell that Dante was the progenitor of a “durable interior metaphor.” Claiming that the harrowing and ascension that he described can be seen in everything from psychoanalysis to 12-step programs, Turner writes that “this entirely comfortable and pervasive method of modern metaphorical thinking might not exist if Dante had never written.” “Empathy” might not be the first word readers associate with a book sticky with the blood of the damned—”punishment” or even “justice” would figure higher—and yet Dante feels pain for these characters. That aspect of The Divine Comedy is why we’re still talking about it. Turner explains that “Dante was concerned with history, with Florentine politics, with the corruption of the clergy, with the moral position of his contemporaries, and most of all with the state of his own psyche,” while arguing that at a “distance of seven centuries, we can no longer easily appreciate any of these things except the last—Dante is generous with his emotions.” It’s true that for any contemporary reader, concerns with forgotten factions like the Ghibellines and Guelphs, parsings of Thomas Aquinas, or condemnations of this or that obscure pope can seem hermetic. Perusing a heavily glossed and footnoted copy of The Divine Comedy, a reader finds that it’s Dante’s intimate perspective that remains the most human.

Eliot may have claimed that between Dante and Shakespeare there was no third, but that’s the sort of thing that a self-declared “classicist in literature, royalist in politics, and Anglo-Catholic in religion” would say, giving short shrift to that bomb-throwing author of Paradise Lost. Earth can be given over to Shakespeare, but Heaven and Hell belong to Dante and Milton. If The Divine Comedy is the consummate expression of Catholicism, then Milton’s epic is Protestantism’s fullest literary flowering, and yet neither of the two is orthodox. Milton’s depiction of damnation in media res, after the rebel angels have been expelled from Heaven and the once-beautiful Lucifer has been transformed into Satan, revises our understanding of Hell for the first time since Dante. Much remains recognizable in Milton, even if immaculately portrayed, this “dungeon horrible, on all sides round,/As one great furnace, flames; yet from those flames/No light, but rather darkness visible,” a fallen kingdom defined by “sights of woe,/Regions of sorrow, doleful shades, where peace/And rest can never dwell, hope never comes… but torture without end.” Paradise Lost’s beauty belies the darkness of that sunken pit, for Milton’s brilliance has always been that he acknowledges what’s evocative, what’s magnetic, what’s attractive about Hell: Lucifer in his intransigence and his obstinacy declares that it is “Better to reign in Hell, than to serve in Heav’n.” Dante’s Satan is a monster encased in ice, weeping frozen tears as he forever masticates the bodies of Cassius, Brutus, and Judas. He is bestial, animalistic, and barely sentient. Nobody would admire him; nobody would want to be Dante’s Satan. Milton’s Lucifer, on the other hand, is a revolutionary who gets all the best lines; certainly better than God or Christ. As William Empson had it in his vaguely heretical Milton’s God, the “poem is not good in spite of but especially because of its moral confusions.”

Milton is a “Puritan,” that woolly category of killjoy whom we associate with the Plymouth Pilgrims (though they were technically different), all belt-buckled black hats and shoes, trying to sniff out witches and other people having impure thoughts. As it goes, Milton may have politically been a Puritan, but he was also a unitarian and a materialist, and on the whole a rather cracked Protestant, not least of all because of his Devil’s thinly disguised heroism. Paradise Lost is honest because it exemplifies a principle that we all know, something expressed by Augustine in his Confessions when stealing pears from a neighbor’s orchard like he was Eve in Eden with her apple, admitting that he wasn’t even hungry but that he did it because “It was foul and I loved it. I loved my own undoing.” Doing bad things is fun. Puritans ironically seem more apt to admit that clear truism than all of the Panglossian advocates for humanity’s intrinsic good nature, an obvious foolishness. Just try and negotiate a Trader Joe’s parking lot and then tell me that you’re so certain that Original Sin is old-fashioned superstition. Acknowledging sin’s innate attractiveness—our “total depravity,” as John Calvin described it in his 16th-century Institutes of the Christian Religion—means that detailed descriptions of Hell can sound perverse.
At a pulpit in Enfield, Conn., the minister Jonathan Edwards delivered an infamous sermon in 1741 in which he told the assembled that “Your wickedness makes you as it were heavy as lead, and to tend downwards with great weight and pressure towards hell… if God should let you go, you would immediately sink and swiftly descend and plunge into the bottomless gulf,” for God abhors all of us, and his “wrath… burns like fire; he looks upon you as worthy of nothing… but to be cast into the fire.” According to Edwards, all women and men are “ten thousand times more abominable in [God’s] eyes, than the most hateful venomous serpent is in ours.” Historians note that while Edwards delivered his sermon, congregants rolled around in the church aisles and stood on the pews, moaning and screaming. All of this is a bit kinky, honestly.

Calvinism’s God has always read more like his nemesis, omnipotent enough that He created us, but apparently not so omnipotent that He makes us worthy of salvation. More than a tyrant, He reads as a sadist, but when God is deleted from Calvinism what’s left is only the putrid, jaundiced, rotting corpse of our contemporary world, where nobody knows the value of anything, but only the price of everything. Nothing better expressed the dark American transition to post-Calvinism than our greatest of novels, Herman Melville’s Moby-Dick; or, the Whale, of which the author wrote to his friend Nathaniel Hawthorne in 1851 that “I have written a wicked book, and feel spotless as the lamb.” Immature exegetes perseverate on what the white whale means, what he is a symbol of. God? Satan? America? That’s the Holy Trinity, though only one of them has ever kept his promises (I’ll let you guess who). It’s both more and less complicated than that; the whale is the terrifying, naked abyss stripped bare of all our presuppositions. In short, he is the world as it actually is, where Hell is all around us. In Moby-Dick, Ishmael listens to the former harpooner Father Mapple’s sermon at the New Bedford, Mass., Whaling Chapel (with its cenotaphs and its massive cetacean bones), and though the sermon is ostensibly on Jonah, it describes a type of Hell. The congregants sing a strange hymn about the “ribs and terrors in the whale/Arched over… a dismal gloom, /While all God’s sun-lit waves rolled by, /And lift me deepening down to doom.” Mapple preaches that despite how sinful the reluctant prophet was, “Jonah does not weep and wail for direct deliverance. He feels that his dreadful punishment is just… And here, shipmates, is true and faithful repentance; not clamorous for pardon, but grateful for punishment.” You’ll go to Hell—or the belly of a whale—and you’ll be happy about it, too.     

Not that this rhetoric is limited to Protestantism, for fire and brimstone come just as often from homily as sermon. James Joyce knew that Catholic priests could chill the blood every bit as much as an evangelical Bible-thumper, with perhaps no more disturbing a vision of Hell ever offered than in A Portrait of the Artist as a Young Man. His alter ego Stephen Dedalus attends a religious retreat, and the visiting priest provides visceral description to the adolescents—buffeted by arrogance and hormones—of what awaits them if they give in to temptation, for “Hell is a strait and dark and foul-smelling prison, an abode of demons and lost souls, filled with fire and smoke” where all of the punished are “heaped together in their awful prison… so utterly bound and helpless that… they are not even able to remove from the eye a worm that gnaws it.” Drawing upon the Jansenist Catholicism that migrated from France to Ireland, the priest’s descriptions of Hell are the equal of anything in Edwards. With barely concealed sadism he intones how amongst the damned the “blood seethes and boils in the veins… the heart in the breast glowing and bursting, the bowels a red-hot mass of burning pulp, the tender eyes flaming like molten balls.” If sin is spoken of in sensory terms—the taste of gluttony, the touch of lust, the rest of sloth—then so too does the priest give rich description of Hell’s smell. That pit is permeated by the odor of “some foul and putrid corpse that has lain rotting and decomposing… a jelly-like mass of liquid corruption… giving off dense choking fumes of nauseous loathsome decomposition… a huge and rotting human fungus.” Heaven can remain vague, because none of us will ever agree on what it is we actually want. Pleasure is uncertain, but pain is tangibly, deeply, obviously real, and so Hell is always easier to envision.

After Stephen is convinced to become rigidly austere following that terrifying sermon, he finds himself once again straying. A friend asks if he plans on converting to Protestantism, and Dedalus responds: “I said I had lost the faith… not that I had lost self-respect. What kind of liberation would that be to forsake an absurdity which is logical and coherent and to embrace one which is illogical and incoherent?” Funny in the way that Joyce is, and true as well, though it might only make sense to lapsed Catholics who fumble over the newly translated words of the liturgical preface at Mass. Such Catholic atheism, or Catholic agnosticism, or Catholic whatever-you-want-to-call-it influenced one of the great contemporary depictions of Hell in Stephen Adly Guirgis’s play The Last Days of Judas Iscariot. Guirgis engages an idiom that could be called “bureaucratizing the sacred,” transposing the rigid elements of our society in all of their labyrinthine absurdity onto the transcendent order. Whenever Hell (or Heaven) is depicted as an office, or a waiting room, or a border checkpoint, a prison, a hospital, or a school, that’s bureaucratizing the sacred. In Guirgis’s play, the action takes place in that most arcane of bureaucratic structures, the court system. “In biblical times, Hope was an Oasis in the Desert,” a character says. “In medieval days, a shack free of Plague. Today, Hope is no longer a place for contemplation—litigation being the preferred new order of the day.” The Last Days of Judas Iscariot portrays a trial of its largely silent titular character, held in a courtroom that exists beyond time and space, where expert witnesses include not just Christ and Pontius Pilate, but Sigmund Freud and Mother Teresa as well. True to the mind-bending vagaries of eternity, both God and the Devil exist in a country unimaginably far from us and yet within our very atoms, for as Christ says, “Right now, I am in Fallujah. I am in Darfur.
I am on Sixty-third and Park… I’m on Lafayette and Astor waiting to hit you for change so I can get high. I’m taking a walk through the Rose Garden with George Bush. I’m helping Donald Rumsfeld get a good night’s sleep… I was in that cave with Osama, and on that plane with Mohamed Atta… And what I want you to know is that your work has barely begun.” Whom does the messiah love? “Every last one.” It’s a vision expansive and universalist, and as in all great portrayals—including those of Dante and Milton, who most definitely didn’t think you could breach the walls of inferno by drilling into the earth—Hell is entirely a mental place.

“Despair,” Guirgis writes, “is the ultimate development of a pride so great and so stiff-necked that it selects the absolute misery of damnation rather than accepts happiness,” or as Milton famously put it, “The mind is its own place, and in itself/Can make a Heaven of Hell, a Hell of Heaven.”  Like all things in the sacred—that vast network of metaphors, allusions, images, allegories, poems, and dreams—the question of “is it real or not?” is entirely nonsensical. Heaven and Hell have always been located in the human mind, along with God and the Devil, but the mind is a vast country, full of strange dreams and unaccounted things. Perdition is an idea that is less than helpful for those who fear (or hope) that there is an actual Hell in the black waters of the Mariana Trench, or in scorched, sunbaked Death Valley, or near a sandy cave next to the Dead Sea. But when we realize that both Hell and Heaven exist in a space beyond up or down, left or right, any of the cardinal directions and towards a dimension both infinitely far away and nearer than our very hearts, then there just might be wisdom. Such is the astuteness of Dante, who with startling psychological realism, records the woeful tale of illicit lovers Paolo Malatesta and Francesca da Rimini, tempted into an adulterous kiss after reading the romance of Lancelot and Guinevere, now caught in a windstorm as they had once tussled in bedsheets. “There is no greater sorrow,” Francesca tells the poet, “Than to be mindful of the happy time/in misery.” Because we often think of sin as simply a matter of broken rules, psychological acuity can be obscured. Drawing from Thomas Aquinas, Dante writes that “Pride, Envy, and Avarice are/the three sparks that have set these hearts on fire,” and the interpretative brilliance of the Seven Deadly Sins is that they explain how an excess of otherwise necessary human impulses can pervert us. 
Reading about Paolo and Francesca, it’s understandable to doubt that they deserve such punishment—Dante himself does. But in the stomach-dropping, queasy, never-ending uncertainty of their fate, the poet conveys a bit of their inner predicament.

“There are those… who, undoubtedly, do not feel personally touched by the scourge of Dante, [or] by the ashen pall of Augustine,” writes Moore, and while I wouldn’t say that I’m so irreverent that I’m willing to lustily eat a rare hamburger on Good Friday without worrying a bit, I will admit that if I make a visit to Five Guys after forgetting that it’s Holy Week I don’t fret too much. It is a privilege to play with the idea of Hell without fearing it (at least too much), but the concept still has some oomph in it. Returning to an earlier observation, Hell must be the language with which we think about that which divides us, estranges us, alienates us, not a language for when we despair at having eaten a bit too much or slept in on the weekend. If anything, that Puritanical rugged individualism that so defines American culture whether we’ve opted into it or not—You need to be up by 5 a.m.! You have to be a productive worker! You must avoid any real joy except for the curated experience chosen for you by algorithm!—is the true wage of sin. In opposition, I lustily and gluttonously and slothfully advocate for eating a bit too much, laughing a bit too loud, and sleeping a bit too long. Preachers once told us that to be alive was a sin, but that was never it at all. Now we have pundits and TED talk lecturers forcing us to weigh our souls instead, but there’s no shame in knowing that the road rises to meet us, in feeling the wind at our back and the warmth of the sun or the cool rain upon our faces. Shame and guilt are utilitarian, but they belong on the other side. It is notable that the idea of Hell developed contemporaneously with that of Heaven, and if it weren’t for the former, then what would we ever struggle against? “Without contraries there is no progression,” writes the poet and prophet William Blake in his 1793 The Marriage of Heaven and Hell.
“Attraction and repulsion, reason and energy, love and hate are necessary to human existence.” Whether temporary or eternal, finite or infinite, a place of extreme punishment implied another for reward. There’s no Heaven unless there is also Hell, and both places are much closer than might be supposed.


Drizzly November in My Soul


Because Robert Burton used astrology to forecast the exact date of his death—January 25, 1640—even some skeptics in that credulous age suspected that he may have assisted the prediction’s veracity. To accuse anyone of suicide was a slander; for Burton’s contemporaries such a death was an unpardonable offense. A half-century later, the antiquary John Aubrey noted in his 1681 Brief Lives that “’tis whispered that… [Burton] ended his days in that chamber by hanging himself.” There are compelling reasons to think this inaccurate. Burton would not have been buried in consecrated ground had he been a suicide—though, of course, it’s possible that friends may have covered for him. Others point to the historian Anthony Wood, who described Burton as “very merry, facete, and lively,” though seemingly happy people do kill themselves. And finally, there’s the observation that within his massive, beguiling, strange, and beautiful The Anatomy of Melancholy, first printed in 1621, Burton rejected suicide—even while writing with understanding about those who are its victims. As it is, the circumstances of Burton’s death remain a mystery, just as self-destruction frequently is, even as etiology has replaced astrology and psychiatry has supplanted humoral theory.

That such a rumor spread at Christ Church, where Burton had worked for years in the library, compiling his vast study of depression, is not surprising. So identified was Burton with his subject—called “history’s greatest champion of the melancholy cause” by Andrew Solomon in The Noonday Demon: An Atlas of Depression—that his readers simply expected such a death. Within The Anatomy of Melancholy Burton gives an overview of Greek and Roman biothanatos, while still condemning it. And yet Burton empathetically concludes that “In such sort doth the torture and extremity of his misery torment him, that he can take no pleasure in his life, but is in a manner enforced to offer violence unto himself, to be freed from his present insufferable pains.” Burton was also frank about his own suffering. White Kennett would write in his 1728 A Register and Chronicle Ecclesiastical and Civil that “I have heard that nothing at last could make… [Burton] laugh, but going down to the Bridge-foot in Oxford, and hearing the bargemen scold and storm and swear at one another, at which he would set his Hands to his sides and laugh most profusely.” Such a man, it was imagined, was the sort who may have dreamed of wading into that cold water in the years when the rivers of England still froze over, walking out into infinity until he felt nothing. Who is to say? We don’t have a complete record of Burton’s thoughts, especially not in his last moments (we don’t have those things for anybody), but The Anatomy of Melancholy is as comprehensive a record as possible, a palliative for author and reader, an attempt to reason through the darkness together.

“Burton’s book has attracted a dedicated rather than a widespread readership,” writes Mary Ann Lund in Aeon; “its complicated branching structure, its Latin quotations and its note-crammed margins resist easy reading.” Though clearly indebted to the humanism of Erasmus and Montaigne, The Anatomy of Melancholy is one of those books that’s almost post-modern before modernity, like the novel Tristram Shandy (Laurence Sterne shamelessly plagiarized from Burton). The book is encyclopedic but open-ended, erudite but curious, expansive but granular, poignant but funny; never doctrinaire, never judgmental, never orthodox, but gleefully self-referential even while contemplating sadness. Burton combed history, poetry, theology, travelogue, philosophy, and medicine for case studies; across five editions during his lifetime (and a sixth based on posthumous notes), the book’s three sections, printed as a folio, ballooned to half a million words. In the first section he enumerates accounts of melancholia, in the second he offers up cures (from drinking coffee to eating spiced ram’s brain), and in the third Burton presents taxonomies of insanity, including love madness and religious mania. The contemporary edition from the New York Review of Books Classics series is 1,382 pages long. Within those digressive, branching, labyrinthine passages Burton considers King Louis XI of France’s madness, whereby everything had the stink of shit about it; an Italian baker from Ferrara who believed that he’d been transformed into butter; and the therapeutic effects of music on elephants. Lund explains how subsequent editions, rather than cutting verbiage, fully indulged Burton’s favored rhetorical conceit of congeries, whereby words are piled upon words in imitation of the manic mind, a quality that has both endeared him to and frustrated his readers. And yet as William H. Gass observes in his introduction to the NYRB Classics edition, “the words themselves are magical; you cannot have too many of them; they are like spices brought back from countries so far away they’re even out of sight of seas; words that roll… words even more exotic, redolent, or chewy.”

Burton’s monumental work, which readers felt free to dip in and out of rather than read cover-to-cover, easily outsold Shakespeare’s folio, though by the Enlightenment his acclaim had dimmed, the work interpreted as a poorly organized baroque grotesquerie based in outmoded theories. During the optimistic 18th century, The Anatomy of Melancholy had not a single printing. Despite that, it still had readers, including Benjamin Franklin and Dr. Johnson, who told Boswell that it was the “only book that ever took him out of bed two hours sooner than he wished to rise.” Romantics were naturally drawn to it; both Samuel Taylor Coleridge and John Keats had underlined copies, with the latter drawing the plot for his vampiric Lamia from Burton. In the 20th century, the existentialists saw something modern in Burton, with Samuel Beckett a lover of the book. The Canadian physician William Osler, a founder of the Johns Hopkins School of Medicine, thought it the greatest medical text by a layman, and was instrumental both in renewing interest in the book and in the bibliographic tabulation of Burton’s personal library at Oxford. Despite the pedigree of his fans, Burton hasn’t had a wide readership for centuries, as The Anatomy of Melancholy has never been easy. It is an assemblage of disparate phenomena, a hodgepodge of unusual examples, a commonplace book of arcane quotes and complicated exegesis, none of it structured in a straightforward way; Burton himself apologized that “I had not time to lick it into form, as a bear doth her young ones,” a protestation belied as the book grew even more formless over the next two decades.

The Anatomy of Melancholy feels unfinished, just like life; it’s contradictory, just like a person; and it encompasses both wonder and sadness, just like a mind. On its quadricentenary it’s abundantly worthwhile to spend some time with Burton, because though he can’t speak of neurotransmitters, he does speak of the soul; though he can’t diagnose, he can understand; though he can’t prescribe, he can sympathize. Beyond just depression, Burton considers ailments like anxiety, obsessions, delusions, and compulsions; sufferers “conceive all extremes, contrarieties and contradictions, and that in infinite varieties.” To paraphrase Tolstoy, the happy are all the same, but Burton’s depressives are gloriously different. “The Tower of Babel never yielded such confusion of tongues, as this chaos of melancholy doth variety of symptoms.” Equally important, Burton distinguishes between everyday emotions—the occasional blues, if you will—and the more punishing varieties. He explains that “miseries encompass our life,” that everyone suffers grief, loss, sorrow, pain, and disappointment, and that it would be “ridiculous for any mortal man to look for a perpetual tenor of happiness.” If somebody is suffering from physical pain or a loved one’s death, grief and sadness are rational; for a person facing economic ruin or an uncertain future, anxiety makes sense, but a “melancholic fears without a cause…this torment procures them and all extremity of bitterness.” For those whose humors are balanced, grief is the result of some outside torment; for the melancholic, grief is itself the torment. Furthermore, as Burton makes clear, this disposition is not a moral failing but a disease, and he often makes suggestions for treatments (while just as soon allowing that he could be entirely wrong in his prescriptions). “What can’t be cured must be endured,” Burton notes.
In the depressive canon of the late Renaissance, Burton would be joined by Thomas Browne with his similarly digressive, though much shorter, Religio Medici, wherein he writes, “The heart of man is the place the devil dwells in; I sometimes feel hell within myself;” John Donne’s sickbed Devotions Upon Emergent Occasions, where he concludes that “Man, who is the noblest part of the earth, melts so away as if he were a statue, not of earth, but of snow;” and, of course, Shakespeare’s famed soliloquy from Hamlet that wonders if “by a sleep to say we end/The heart-ache, and the thousand natural shocks/That flesh is heir to.” And that’s just relatively High Church Englishmen; with a broader scope you’d include the Catholic Frenchman Blaise Pascal, who in his Pensées defines man as one “equally incapable of seeing the nothingness out of which he was drawn and the infinite in which he is engulfed,” and the 15th-century Florentine Neo-Platonist Marsilio Ficino, who wrote in his 1489 The Book of Life that the condition was “conducive to judgment and wisdom,” entitling one chapter “Why the Melancholic Are Intelligent.” None of them, however, is as all-encompassing as Burton, as honest about his own emotions and as sympathetic to his fellow sufferers. Within his book’s prologue, entitled “Democritus Junior to His Readers,” ironically written under a pseudonym adapted from the pre-Socratic thinker known as the “laughing philosopher,” Burton explains that “I had a heavy heart and an ugly head, a kind of imposture in my head, which I was very desirous to be unladen of,” and so his scribbling would act as therapy.

Across denominations, countries, and continents, the contemplation of a fashionable melancholia was encouraged, with even Burton confessing to sometimes enjoying such abjection as a “most delightsome humor, to be alone, dwell alone, walk alone, meditate, lie in bed whole days, dreaming awake as it were.” Noga Arikha explains in The Public Domain Review that melancholia could be seen as “good if one believed that a capacity for strong passions was the mark of a fine soul that recognized beauty and goodness… the source of sonnets, the harbinger of creativity,” while Darrin M. McMahon notes in Happiness: A History that this is a “phenomenon that would have a long and robust future: the glamorization of intellectual despair.” A direct line can be drawn from the goth teen smoking clove cigarettes in a Midwestern high school parking lot through the Beats in their Manhattan lofts eating hash brownies and masturbating to William Blake through to the Left Bank existentialists parsing meaninglessness in the post-war haze and the Lost Generation writers typing on Remingtons in Parisian cafes back to the Decadents and Symbolists quaffing absinthe and the Romantics dreaming opium reveries until interrupted by the person from Porlock through to Burton, and Browne, and Donne, and Shakespeare, and Pascal and Ficino and every other partisan of depression. As Burton notes, “melancholy men of all others are most witty.”

Melancholy was more than a pose, however, and even if Burton agreed that it could sometimes be romanticized, he never lost sight of its cost. Tabulating the price of anxious scrupulosity, Burton notes that “Our conscience, which is a great ledger book, wherein are written all our offences…grinds our souls with the remembrance of some precedent sins, makes us reflect upon, accuse and condemn ourselves.” In the millennium before Burton there were contrary perspectives concerning melancholy: theologians interpreted it as a sin—the indolent sloth called acedia—while doctors diagnosed it as an imbalance of elemental substances called humors. One thing that Burton is clear on is that melancholy wasn’t simply feeling down. To be melancholy isn’t to be “dull, sad, sour, lumpish, ill disposed, solitary, any way moved or displeased,” Burton writes, and that clarification is still helpful. For those blessed with a brain chemistry that doesn’t incline them towards darkness, depression might seem an issue of willpower, something that can be fixed with a multivitamin and a treadmill. Reading Burton is a way to remind oneself—even as he maintained erroneous physiological explanations—that depression isn’t a personal failing. And it’s certainly not a sin. McMahon explains that by “reengaging with the classical tradition to treat excessive sadness and melancholia as an aberration or disease—not just the natural effect of original sin—Renaissance medicine opened the way toward thinking about means to cure it.”

That was a possibility more than anything, for the rudiments of mental health were still mired in superstition. Such an emotion was identified with an overabundance of black bile in the spleen, and a deficit of yellow bile, blood, and phlegm, a condition associated with a dry coldness, so that some of that metaphorical import still survives today. Arikha writes in Passions and Tempers: A History of the Humours how the “experiences of joy, pain, anguish, and fear each had their temperature, their match in some sort of stuff in the body whose motion modulated the emotion.” In a broader way, however, there is something to be said for how the humors emphasized embodiment, the way they acknowledged how much of the emotional was in the physical. We now know that melancholy isn’t caused by problems with our humors but with our neurotransmitters—and I am not cutely implying that the two are equivalent; accurate science is the only way that pharmacologists have been able to develop the medicine that saves so many of our lives. Yet there is an enduring wisdom in knowing that this is a malady due to something coursing in your veins, whatever you call it. “We change language, habits, laws, customs, manners,” writes Burton, “but not diseases, not the symptoms of folly and madness, they are still the same.”

Depressives have always existed because there have always been those of us who have a serotonin and dopamine deficiency, even if we’re not lacking in yellow bile, blood, and phlegm. How culture interprets mental illness is entirely another thing, though. As Burton’s popularity demonstrates, there was a surprising lack of stigma around melancholy. In an abstract way, during the 17th century this was a reaction to how eternal verities no longer seemed so eternal. Gass explains that “people were lifting their heads from canonical books to look boldly around, and what they saw first were errors, plentiful as leaves. Delight and despair took turns managing their moods.” Even while The Anatomy of Melancholy relied on the Galenic humoral theory that had dominated medicine since the second century, the English physician William Harvey was developing his book Anatomical Account of the Motion of the Heart and Blood, which would dispel the basis for the four bodily humors (though the theory would take two more centuries to die). There were more practical reasons for melancholy as well. On the continent, the Thirty Years War started three years before Burton’s book was completed and would end in 1648, eight years after he died. As many as 12 million people perished, a death rate that dwarfed all previous European wars, with one out of five people on the continent dead. As Burton was writing in the early 1620s, his native England was headed towards inevitable civil war, disunion clear in its political and religious polarization. By that war’s conclusion, 200,000 people were dead, fully 2.5 percent of the population. By comparison, that would be as if 12 million contemporary Americans were killed. Disease could be just as deadly as the New Model Army; over the course of Burton’s life the bubonic plague broke out in 1603, 1625, and 1636, with close to 100,000 deaths. Depression can come from an imbalance within the body, but sometimes insanity is also a sane reaction to an insane world. You still have to bear it, however.

Burton is good-humored, he may even have been jovial from time to time, but he’s resolutely a partisan of the sighing angels. Not that Burton didn’t advocate for treatment, even while he emphasized his own inexpertness. Solomon explains that Burton recommends “marigold, dandelion, ash, willow, tamarisk, roses, violets, sweet apples, wine, tobacco, syrup of poppy, featherfew, Saint-John’s-wort…and the wearing of a ring made from the right forefoot of an ass.” We are, it should be said, fortunate to have refined our prescriptions. Despite the fact that Americans hunger for painkillers both helpful and deadly, The Anatomy of Melancholy isn’t a particularly American book. If the English malady is glum dampness, then the American affliction is plucky sociopathic optimism. A can-do attitude, pulling oneself up by the bootstraps, rugged individualism, grit, determination, cheeriness. We were once inundated by snake oil salesmen and medicine men; now we have self-help authors and motivational speakers. A nation where everybody can be a winner in seven easy steps and there are keys to a new car under every guest’s seat. “Americans are a ‘positive’ people,” writes Barbara Ehrenreich in Bright-Sided: How Positive Thinking is Undermining America, this “is our reputation as well as our self-image…we are upbeat, cheerful, optimistic, and shallow.” Some crucial points: optimism is not equivalent to happiness, and if anything, it’s a mask for when we lack the latter. That’s not bearing it—that’s deluding ourselves.

We weren’t always like this; we have our counter-melody to faux-positivity, from those dour Puritans to Herman Melville writing of the “damp drizzly November in my soul.” But could anyone imagine the election today of Abraham Lincoln, who, as a young lawyer in 1841, wrote that “I am now the most miserable man living. If what I feel were equally distributed to the whole human family, there would not be one cheerful face on earth”? Now, with the ascendancy of the all-seeing Smiley Face, we’ve categorized talk like that as weakness, even if we don’t admit what we’re doing. Our 16th president had a melancholic understanding entwined with wisdom, what Roy Porter in Flesh in the Age of Reason: The Modern Foundations of Body and Soul phrased as “Melancholy and spleen, those stigmata of true scholarly dedication.” An ability to see the world as it is. Not just as some cankered, jaundiced, diseased thing, but how in the contrast of highs and lows there is a sense of how ecstatically beautiful this life is, even in its prosaic mundanity. Solomon writes, “I love my depression. I do not love experiencing my depression.” He explains that the “opposite of depression is not happiness but vitality,” and ignorance of this distinction bolsters the cult of positivity. Therapy is honest, unsparing, difficult, and painful. Self-help just tells you what you want to hear. Norman Vincent Peale wrote in Stay Alive All Your Life that the “dynamic and positive attitude is a strong magnetic force which, by its very nature, attracts good results.” This, quite clearly, is unmitigated bullshit. Instead of Dale Carnegie, we need Donne; rather than Eckhart Tolle we could use Browne; let’s replace Tony Robbins with Robert Burton.

Because, dear reader, if you haven’t noticed, we’re not at a happy point in history. America’s cheery cult of optimism is finally folding under the onslaught of the pandemic, political extremism, economic collapse, and the ever-rising mercury. If you’re the sort who’d be chemically glum even in paradise, then you’ve already been to hell, and you might have a bit of extra knowledge folks could benefit from. Stanley Fish explains in Self-Consuming Artifacts: The Experience of Seventeenth Century Literature how “sober discourse itself is an impossibility given the world,” and that for Burton “nothing—no person, place, object, idea—can maintain its integrity in the context of an all-embracing madness.” Gass is even more blunt on the score: “When the mind enters a madhouse…however sane it was when it went in, and however hard it struggles to remain sane while there, it can only make the ambient madness more monstrous, more absurd, more bizarrely laughable by its efforts to be rational.” Burton speaks to our epoch, for depression is real, and there are legitimate reasons to be depressed. As he writes, melancholy is an “epidemical disease,” now more than ever. Burton’s prescriptions, from tincture of marigold to stewed offal, seem suspect—save for one. With the whole world an asylum, Burton advocates for awareness. There are risks to such hair-of-the-dog though. “All poets are mad,” Burton writes, the affliction of “excellent Poets, Prophets, &c,” and I suspect, dear reader, that you too may be in that “etcetera.” Depression, along with addiction, is the writer’s disease. Sylvia Plath, James Baldwin, David Foster Wallace, Anne Sexton, Arthur Rimbaud, F. Scott Fitzgerald, Mark Twain, Emily Dickinson, and so on—all wrestled with the noon-day demon. Many of them died because of it, at least in one way or another.
There is no shame here, only sadness that some couldn’t be with us a bit longer, and the genuine, deep, and loving request that you, dear reader, stick around. As for Burton, he was committed to the principle that “I write of melancholy, by being busy to avoid melancholy,” and it often worked. He gives the most poignant expression to that black veil that shrouds the mind, the caul of the soul that afflicts some from time to time. If writers are prone to depression, then Burton’s tome was an attempt to write himself out of it, to “satisfy and please myself, make a Utopia of mine own, a New Atlantis, a poetical commonwealth of mine own.” We’re lucky that he did, because even if it’s not the only thing—even if it’s not always the best of things—there is something sacred in that. No matter how occluded, know that somebody else understands what you’re feeling.

So, blessed are Burton and duloxetine, therapy and sertraline, writing and citalopram, empathy and fluoxetine, compassion and escitalopram. Blessed are those who ask for help, those who are unable to ask for help, those who ask if somebody else needs help. Blessed are those who struggle every day and blessed are those who feel that they no longer can, and blessed are those who get better. Blessed are those who hold the hands of anyone suffering. Blessed is understanding—and being seen—for what Burton offers you is the observation that he sees you, the reader. “I would help others, out of a fellow-feeling,” he writes. Because Robert Burton often felt worthless; as if he were walking at the bottom of the chill Thames. Sometimes it felt like his skull was filled with water-soaked wool and his eyes pulsated, vision clouded over with gauzy darkness; he knew of listing heaviness and the futility of opening the door, of getting dressed, of leaving the bed, of how often the window of care shrank to a pinpoint of nothingness, so that he could feel no more than that. This strange book studded with classical allusion and scriptural quotation, historical anecdote and metaphysical speculation—who was it for? He wrote for the person who has had the tab for this article open on their computer for days, but has no energy to read; for the woman who hasn’t showered in weeks and the man who can’t bring himself to go outside. “Thou thyself art the subject of my discourse,” Burton writes. His purpose was one thing—to convey that you have never been alone. Not then, not now, not ever.

And the Walls Came Down


Across seven seasons and 178 episodes, Patrick Stewart performed the role of Capt. Jean-Luc Picard of the United Federation of Planets starship USS Enterprise NCC-1701-D with professionalism, wisdom, and humanity. Displaying granitoid stoicism and reserved decorum, Picard was the thinking-person’s captain. The fifth season episode “Darmok” is an example of how Star Trek: The Next Generation was among the most metaphysically speculative shows to ever be broadcast. In the beginning, the Enterprise is hailed by the Tamarians, but the Enterprise’s computerized translator can’t comprehend their language. Against his will, Picard is transported alongside the Tamarian captain Dathon—whose head looks like a cross between a pig and a walnut—to the surface of a hostile planet. Confronted by a massive, shimmering, and horned beast, Dathon tosses Picard a dagger and yells “Darmok and Jalad at Tanagra.” After they’ve defeated the monster, though at the price of Dathon being injured, Picard realizes that the enigmatic phrase is a reference, an allusion, a myth. Thus, a clue to their language—Tamarians speak only in allegory. Dathon has invoked two mythic heroes to communicate that he and Picard must fight the beast together. Doubtful though it may be that Paul Ricoeur was a Trekkie, Dathon embodies the French philosopher’s claim in The Rule of Metaphor: The Creation of Meaning in Language that “Only a feeling transformed into myth can open and discover the world.” Dathon, however, has sacrificed himself upon the altar of comprehensibility, for he received a fatal blow, and dies as a pietà in Picard’s arms. As he passes into that undiscovered country, Picard speaks to him in our own mythic-metaphorical tongue: “They became great friends. Gilgamesh and Enkidu at Uruk.”
Tamarian makes explicit our own language’s reliance on the figurative—any language’s use of metaphor, for that matter. “Try as one might, it is impossible to free oneself from figural language,” writes Marjorie Garber in The Use and Abuse of Literature, as “all language is figurative.” Examine my first paragraph, for there are several mythic allusions throughout—the first clause of my fourth sentence reads “In the beginning,” a reference to the 16th-century Bible translator William Tyndale’s rendering of the Hebrew Bereshith in Genesis (later incorporated into the more famous King James Version, as well as the translation of a different koine phrase in John 1:1). My penultimate sentence has two mythopoeic references; one to Dathon having “sacrificed himself upon the altar,” and a mention of the “pietà,” a word that translates to “piety” and often refers to Christ dead in the arms of the Virgin Mary. Such mythic allusions abound in language. For example—as a writer my Achilles’ Heel is that I take on Herculean tasks like writing an overview of metaphor, requiring me to stop resting on my laurels and to open up a Pandora’s Box, so that I can leave no stone unturned; knock on wood, and don’t treat me like a Cassandra, but let’s hope that I can cut the Gordian Knot here. Such Greco-Roman adornments aren’t just mythic allusions, they’re also dead, or at least dying, metaphors—for who among us ever tells somebody that we’re “between a rock and a hard place” while thinking of the Scylla and Charybdis in Homer’s The Odyssey? That’s another way to say that they’re clichés, but mythic connections abound in less noticeable ways too, though perhaps I’d do well this Wednesday to render unto Odin what belongs to Odin and to spare a thought for the Germanic goddess Frigg when the work week draws to a close.

Star Trek: TNG screenwriters Joe Menosky and Philip LaZebnik make “metaphorical” synonymous with “mythic,” though figurative language draws from more than just classical tales of gods and heroes, but from anything that transfers meaning from one realm into another. In my first paragraph, I described Stewart’s face as “granitoid,” though he is not coarse-grained igneous rock; later I deployed a simile (rhetoricians disagree on how different that conceit is from metaphor) where I said that Dathon looked like a cross between a pig and a walnut. I can’t claim that these are great metaphors, but they were my attempt to steer the course of the ship away from the rocky shoals of cliché. “A newly invented metaphor assists thought by evoking a visual image,” writes George Orwell in “Politics and the English Language,” his classic essay of 1946, “while on the other hand a metaphor which is technically ‘dead’… has in effect reverted to being an ordinary word and can generally be used without loss of vividness.” Between the arresting, lyrical, novel metaphor of the adept poet and the humdrum language of the everyday is the “huge dump of worn-out metaphors which have lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves.” Orwell lists around a dozen clichés, including “ride roughshod over,” “grist to the mill,” “swansong,” and “toe the line.” When encountering clichés such as these, in other peoples’ writing but especially in my own prose, they appear to me like a dog turd hidden under the coffee table; their fumes make my eyes water and my stomach churn. Cliché must be ruthlessly expunged by red pen at every opportunity. But those two other categories that Orwell lists are entirely more interesting.

Dead metaphors—not just those on life support but also those decomposing in the ground—merit us getting a shovel and some smelling salts. Tamarian was imagined as structured by metaphor, but the vast majority of words you use every day were originally metaphors. Take the word “understand”; in daily communication we rarely parse its implications, but the word itself is a spatial metaphor. Linguist Guy Deutscher explains in The Unfolding of Language: An Evolutionary Tour of Mankind’s Greatest Invention how “understand” derived from the Middle English understanden, itself from the Anglo-Saxon understandan, and ultimately from the Proto-Germanic understandana. “The verb ‘understand’ itself may be a brittle old skeleton by now,” Deutscher writes, “but its origin is still obvious: under-stand originally must have meant something like ‘step under,’ perhaps rather like the image in the phrase ‘get to the bottom of.'” Such spatial language is common, with Deutscher listing “the metaphors that English speakers use today as synonyms: we talk of grasping the sense, catching the meaning, getting the point, following an explanation, cottoning on to an idea, seeing the difficulty.” The word “comprehend” itself is a metaphor, with an etymology in the Latin word prehendere, which means “to seize.” English has a propensity for those sorts of metaphors: foreign loan words from Anglo-Saxon, Frisian, and Norman; Latin, French, Italian, and Spanish; Abenaki, Wolof, and Urdu, which don’t announce themselves as metaphors precisely because of their foreignness—and yet that rich vein of the figurative runs through everything.
Don Paterson gives two examples in his voluminous study The Poem, explaining how a word as common as “tunnel” is from the “Medieval English tonel, a wide-mouthed net used to trap birds, so its first application to ‘subterranean passage’ will have been metaphorical—and would inevitably have carried the connotation ‘trap’ for a little while.” Similarly, the word “urn” comes from “the Latin urere, to burn, bake,” the word itself holding a connotative poetry of immolation. In both these examples, “the original neologists will have been aware of their relation to earlier terms,” Paterson writes, “but this conscious awareness can be lost very rapidly.” If you’re lucky enough to have a subscription to the Oxford English Dictionary, you can trace the etymology of prosaic words and find their glamorous metaphorical beginnings. Within the seemingly hollow throat of every word resides the voice of metaphor, no matter how faint its call to us may be.

The manner in which metaphors fossilize into everyday words is not unlike the process during the fetid days of the Carboniferous Period when giant spiders and scorpions, salamanders and sharks, scaled trees and seeded plants would sink into some shallow tropical pool and fossilize until dug out of the ground by a coal miner in Wales or West Virginia. Just as miners occasionally find deposits in the shape of a trilobite, so too do some metaphors that are dead appear as obvious clichés, but the bulk of our language is so dark and hard that it might as well be bituminous. Yet as burning coal releases heat, so too does the glow of meaning come off these rocks that we call words, the once vital energy of metaphor still hidden inside. “However stone dead such metaphors seem,” writes I.A. Richards in The Philosophy of Rhetoric, “we can easily wake them up.” Such evolutionary development is what linguists call “semantic change,” as tangible and concrete words are used as metaphors, transferring intent in a one-way direction towards abstraction. “The mind cannot just manufacture words for abstract concepts out of thin air—all it can do is adapt what is already available. And what’s at hand are simple physical concepts,” writes Deutscher, so that when I say that I’m trying to establish a foundation for my argument, I draw from the concrete metaphors of construction, while it would be odd to appropriate abstraction to describe something tangible. “One thing is certain,” Deutscher says, “nothing can come from nothing.”

Every word is a metaphor; every phrase, no matter how prosaic, is a poem—even if it’s mute. Words don’t correspond to reality; they only correspond to one another. All languages are a tapestry of threads forever floating above the ground. “Metaphor is the transference of a term from one thing to another, whether from genus to species, species to genus, species to species, or by analogy,” as Aristotle defined it in his Poetics, the first book to taxonomize the trope. The word metaphor is itself, rather predictably, a metaphor. The Greek μεταφορά, as Aristotle’s explanation literally states, translates as “to transfer,” with all the connotations of transport, shifting, movement, and travel. As contemporary travelers to Greece still discover, delivery vans have the word metaphora painted on their side, something that was a delight to Ricoeur. No language can lack metaphor for the same reason that no tongue can lack fiction; it’s not an issue of grammar in the same way that there are languages incapable of tense or person, but rather figuration is the domain of rhetoric. Wherever there are words that are used to designate one thing, then the moment somebody uses those terms to refer to something else, we are within the realm of metaphor. This, it must be said, is rather different from encouraging novel metaphors, or enshrining metaphor as a poetic device, or really even noticing that it exists.

During the Middle Ages, there was a literary fascination with metaphor’s gilded siblings—parable and allegory—but the explication of figuration’s significance didn’t move much beyond Aristotle’s original definition. By 1589, the great English rhetorician George Puttenham would still define the term in The Art of English Poesy as involving “an inversion of sense by transport,” and that belief that metaphor gets you somewhere else remains central, metaphorically at least. Contemporary study of metaphor begins with Richards’s invaluable 1936 The Philosophy of Rhetoric. Because the subject went largely unexplored over two millennia, Richards complained that the “detailed analysis of metaphors, if we attempt it with such slippery terms as these, sometimes feels like extracting cube-roots in the head.” He had a method for exactness, however: Richards presents a novel vocabulary, one that—unsurprisingly—is itself metaphorical. According to Richards, a metaphor is composed of two parts, the “tenor” and the “vehicle.” The first is whatever it is that’s being described, and the second is that whose attributes are being carried over in the description of the former (shades of that Greek delivery van). “For the Lord God is a sun and a shield,” writes the Psalmist, providing us an opportunity to see how Richards’s reasoning works. Belying the fundamentalist fallacy that the Bible is a literal text—though nothing is—it’s clear to most that God is neither a giant ball of ionized hydrogen undergoing nuclear fusion into helium, nor is He defensive armor. What God is, in King David’s metaphor, is the tenor, because He is what is being described. The attributes that are being borrowed from “sun” and “shield” are those connotations of being life-giving, luminescent, warm, as well as being defensive and protective.
“Sun” and “shield” are even gently contradictory, as blocking out the former with the latter attests; but it’s also a demonstration of how the non-literal can express reality in its glorious paradox. In exactly the same manner, Robert Frost’s central conceit in “The Road Not Taken” uses the vehicle of an arduous and uncertain wooded path on the implied tenor of the narrator’s life. A great richness of poetic metaphor—as separate from cliché—is that it allows for ambiguity of referent, so that meaning is a many-colored thing. What makes a poetic metaphor successful is the delicate interplay between tenor and vehicle. A poet’s task is to set that width just right, and to somehow surprise the reader while doing so, without befuddling them.

Poets are less the unacknowledged legislators of the world than they are its wizards, because metaphor has “the power to define reality,” as George Lakoff and Mark Johnson write in Metaphors We Live By. Or maybe it’s more accurate to say that those who nurture metaphors are like apiarists, since the figurative is like pollen from a flower and each word a bee, meaning traded in a continual hum. Language—and thus reality—is supersaturated with meaning, everything capable of being transformed into something else. “If all language is metaphorical,” writes David Punter in his short study Metaphor, “then it could also follow that we might want to say that all language is continually involved in a series of acts of translation: translating things which are difficult to apprehend into things we can apprehend.” Just as translation is based in difference, so is all communication. All language is relational, a communion between that-which-is and that-which-isn’t. Because words are always arbitrarily connected to that which they represent, language is intrinsically metaphorical, the tethering of random shapes on a page or the vibration of air molecules to an outside sense, the compulsion of someone with a mind different from your own imagining that which you wish them to. Without metaphor there’s no fiction, and without fiction there’s no metaphor. Without either, there’s no possibility of communication. Metaphor is a bridge that doesn’t exist that you’re still able to cross, for faith in being understood is that which gets you to the other side.

Figurative language encompasses the expansiveness of metaphor; the insularity of metonymy; the granularity of synecdoche; the straightforwardness of simile; the folksiness of parable; the transcendence of allegory. We don’t just read metaphor in literature; humans have always seen it in events, in nature, in the cosmos, in any manner of thinking that sees existence as overdetermined, as meaning permeating that which would otherwise be inert. We see metaphor in the hidden grinning skull of Hans Holbein’s painting The Ambassadors and the melting clocks in Salvador Dalí’s The Persistence of Memory. It’s in F. Scott Fitzgerald’s green light and Herman Melville’s whale. It’s in the Anglo-Saxon form of the “kenning,” where the sky is a “bird-road,” and in Homer’s evocation of the Aegean “wine-dark sea”; in saying that space-time can “curve” and that genes are “selfish”; in René Descartes’s description of the body as a machine and William Paley’s claim that the universe is a clock, and the understanding that God can be both Mother and Father. Politics is threaded through with metaphor, narrative, and artifice—the most effective means of getting the masses to listen to you, for both good and evil. Metaphor is both what facilitated the horror of the Rwandan genocide, when Hutu propagandists described the Tutsis as “cockroaches,” and what generates the hopefulness in the socialist call for “bread and roses.” Symbolism undergirds both the eagle in the Seal of the United States of America and that of the Third Reich. That the same animal is used for both only emphasizes how mercurial metaphor happens to be. As Punter explains, “metaphor is never static, and rarely innocent.” Figuration’s precise power and danger come from such slipperiness, as everything is forever morphing and mutating between the metaphorical and the literal.

When Patrick Stewart first came to the United States in 1978, arriving in Los Angeles where he would make his greatest fame as a starship captain, it was as a 37-year-old member of the Royal Shakespeare Company performing the role of Duke Senior in As You Like It at the Ahmanson Theatre. Traipsing through the enchanted forest of Arden, Stewart appeared opposite Alan Howard performing as Jacques. During the seventh scene of the second act, Howard (most celebrated for voicing Sauron in Peter Jackson’s Lord of the Rings films) uttered the most celebrated extended metaphor in Shakespeare’s literary career of extended metaphors: “All the world’s a stage, /And all the men and women merely players;/They have their exits and their entrances;/And one man in his time plays many parts.” Conventionally understood as referring to the various roles we all perform over the course of our lives, it’s an apt encapsulation of metaphor itself. All of communication is a stage, and every word is merely a player, and one word can play many parts. When it comes to figuration, Jacques’s monologue is right up there with Darmok and Jalad at Tanagra. Because nothing in the English language is as perfect—as immaculate, as blessed, as sacred, as divine—as the word “like.” What it effects is a syntactical transubstantiation, the transformation of one thing into another. If everything—our concepts, our words, our minds—is sealed off behind a wall of unknowability, then metaphor is that which can breach those walls. Whether implied or stated, “like” is a bridge transporting sense across the chasm of difference, providing intimations of how all ideas and things are connected to each other by similarities, no matter how distant. It is the wispy filament that nets together all things. Perhaps there is naked reality, but we’ll never be able to see it ourselves, always clothing it in finery that is continually put on and taken off.
In our life, metaphor is the intimate kiss between differences.


Nothing Outside the Text


Let’s pretend that you work in a windowless office on Madison, or Lex, or Park in the spring of 1954, and you’re hurrying to Grand Central to avoid the rush back to New Rochelle, or Hempstead, or Weehawken. Walking through the Main Concourse with its Beaux Arts arches and its bronzed clock atop the ticketing booth, its cosmic green ceiling fresco depicting the constellations slowly stained by the accumulated cigarette smoke, you decide to purchase an egg-salad sandwich from a vendor near the display with its ticking symphony of arrivals and departures. You glance down at a stack of slender magazines held together with thick twine. The cover illustration is of a buxom brunette wearing a yellow pa’u skirt and hauling an unconscious astronaut in a silver spacesuit through a tropical forest while fur-covered extraterrestrials look on between the palm trees. It’s entitled Universe Science Fiction. And if you were the sort of Manhattan worker who, after clocking in eight hours at the Chrysler Building or Lever House, settles into the commuter train’s seat—after loosening his black-knit tie and lighting a Lucky Strike while watching Long Island go by—ready to be immersed in tales of space explorers and time travelers, then perhaps you parted with a quarter so that Universe Science Fiction’s cheap print would smudge your gray flannel suit. The sort of reading you’d want to cocoon yourself in, a universe to yourself with nothing outside the text. As advertised, you find a story of just a few hundred words entitled “The Immortal Bard,” by a writer named Isaac Asimov.

Reprinted three years later in Earth Is Room Enough, “The Immortal Bard” is set among sherry-quaffing, tweedy university faculty at their Christmas party, where a boozed-up physicist named Phineas Welch corners the English professor Scott Robertson, and explains how he’s invented a method of “temporal transference” able to propel historical figures into the present. Welch resurrects Archimedes, Galileo, and Isaac Newton, but “They couldn’t get used to our way of life… They got terribly lonely and frightened. I had to send them back.” Despite their genius, their thought wasn’t universal, and so Welch conjures William Shakespeare, believing him to be “someone who knew people well enough to be able to live with them centuries away from his own time.” Robertson humors the physicist, treating such claims with a humanist’s disdain, true to C.P. Snow’s observation in The Two Cultures and the Scientific Revolution that “at one pole we have the literary intellectuals, at the other scientists… Between the two a gulf of mutual incomprehension.” Welch explains that the Bard was surprised that he was still taught and studied; after all, he wrote Hamlet in a few weeks, just polishing up an old plot. But when introduced to literary criticism, Shakespeare can’t comprehend anything. “God ha’ mercy! What cannot be racked from words in five centuries? One could wring, methinks, a flood from a damp clout!” So, Welch enrolls him in a Shakespeare course, and suddenly Robertson begins to fear that this story isn’t just a delusion, for he remembers the bald man with a strange brogue who had been his student. “I had to send him back,” Welch declares, because our most flexible and universal of minds had been humiliated. The physicist downs his cocktail and mutters, “you poor simpleton, you flunked him.”

“The Immortal Bard” doesn’t contain several of the details that I include—no sherry, no tweed (though there are cocktails). We have no sense of the characters’ appearances; Welch’s clothes are briefly described, but Robertson is a total blank. A prolific author, penning over 1,000 words a day, Asimov had published more than 500 books across all categories of the Dewey Decimal System by his death in 1992 (including Asimov’s Guide to Shakespeare). Skeletal parsimony was Asimov’s idiom; in his essay “On Style” from Who Done It?, coedited with Alice Laurence, he described his prose as “short words and simple sentence structure,” bemoaning that this characteristic “grates on people who like things that are poetic, weighty, complex, and, above all, obscure.” Had his magisterial Foundation been about driving an ambulance in the First World War rather than a galactic empire spanning thousands of light years, it’d be the subject of hundreds of dissertations. If his short story “Nightfall” had been written in Spanish, then he’d be Jorge Luis Borges; had Asimov’s “The Last Question” originally been in Italian, then he’d be Italo Calvino (American critics respect fantasy in an accent, but then they call it “magical realism”). As it was, critical evaluation was more lackluster, with the editors of 1981’s Dictionary of Literary Biography claiming that since Asimov’s stories “clearly state what they mean in unambiguous language [they] are… difficult for a scholar to deal with because there is little to be interpreted.”

Asimov’s dig comes into sharper focus given his admission that “The Immortal Bard” was revenge on those professors who’d rankled him by misinterpreting stories—his and others’. The story is a castigation of the discipline as practiced in 1954, which meant a group that had dominated the study of literature for three decades—the New Critics. With their rather uninspired name, the New Critics—including I.A. Richards, John Crowe Ransom, Cleanth Brooks, Allen Tate, Robert Penn Warren, William Empson, and a poet of some renown named T.S. Eliot (among others)—so thoroughly revolutionized how literature is studied that explaining why they’re important is like explaining air to a bird. From Yale, Cambridge, and Kenyon, the New Criticism would spread, trickling down through the rest of the educational system. If an English teacher asked you to analyze metaphors in Shakespeare’s “Sonnet 12”—that was because of the New Critics. If an AP instructor asked you to examine symbolism in F. Scott Fitzgerald’s The Great Gatsby—that was because of the New Critics. If a college professor made you perform a rhetorical analysis of Virginia Woolf’s Mrs. Dalloway—that was because of the New Critics. Most of all, if anyone ever made you conduct a “close reading,” it is the New Critics who are to blame.

According to the New Critics, their job wasn’t an aesthetic one, parsing what was beautiful about literature or how it moved people (the de facto approach in Victorian classrooms, as it still is in popular criticism), but an analytical one. The task of the critic was scientific—to understand how literature worked, and to be as rigorous, objective, and meticulous as possible. That meant bringing nothing to the text but the text. Neither the critic’s concerns—nor the author’s—mattered more than the placement of a comma, the connotation of a particular word, the length of a sentence. True, they often unfairly denigrated worthy writing because their detailed explications only lent themselves to certain texts. Poetry was elevated above prose; the metaphysical over the Romantic; the “literary” over genre. But if the New Critics were snobbish in their preferences, they also weren’t wrong that there was utility in the text’s authority. W.K. Wimsatt and Monroe Beardsley argued in a 1946 issue of The Sewanee Review that the “design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art.” In a more introspective interview, Asimov admitted that what inspired “The Immortal Bard” was his inadequacy at answering audience questions about his own writing. Asimov was right that he deserved more attention from academics, but wrong in assuming that he’d know more than them. The real kicker of the story might be that Shakespeare actually earned his failing grade.

When New Critics are alluded to in popular culture, it’s as anal-retentive killjoys. Maybe the most salient (and unfair) drumming the New Critics received was in Peter Weir’s (mis)beloved 1989 film Dead Poets Society, featuring Robin Williams’s excellent portrayal of John Keating, an English teacher at elite Welton Academy in 1959. Keating arrives at the rarefied school urging the boys towards carpe diem, confidently telling them that the “powerful play goes on, and you may contribute a verse.” With vitality and passion, Keating introduces the students to Tennyson, Whitman, and Thoreau, and the audience comprehends that this abundantly conservative curriculum is actually an act of daring, at least when contrasted to the New Critical orthodoxy that had previously stultified the children of millionaires. On his first day, wearing corduroy with leather arm patches, Keating asks a student to recite from their approved textbook by Professor J. Evans Pritchard. In a monotone, the student reads “If the poem’s score for perfection is plotted on the horizontal of a graph and its importance is plotted on the vertical, then calculating the total area of the poem yields the measure of its greatness.” We are to understand that once the cold scalpel of analysis cuts the warm flesh of the poem, if it wasn’t already dead, then it certainly perished thereafter. Keating pronounces the argument to be “Excrement,” and demands his students rip the page out, so that a film that defends the humanities depicts the destruction of books. “But poetry, beauty, romance, love, these are what we stay alive for,” Keating tells his charges, and how could dusty Dr. Pritchard compete with a Cartesian coordinate plane?

The fictional Pritchard’s essay is an allusion to Brooks and Warren’s influential 1938 Understanding Poetry, where they write that “Poetry gives us knowledge… of ourselves in relation to the world of experience, and to that world considered… in terms of human purposes and values.” Not soaring in its rhetoric, but not the cartoon from Dead Poets Society either, though also notably not what Keating advocated. He finds room for poetry, beauty, romance, and love, but neglects truth. What Keating champions isn’t criticism, but appreciation, and while the former requires rigor and objectivity, the latter only needs enjoyment. Appreciation in and of itself is fine—but it doesn’t necessarily ask anything of us either. Kevin Dettmar, in his defenestration of the movie in The Atlantic, writes that “passion alone, divorced from the thrilling intellectual work of real analysis, is empty, even dangerous.” Pontificating in front of his captive audience, Keating recites (and misinterprets) poems from Robert Frost and Percy Shelley, demanding that “When you read, don’t just consider what the author thinks, consider what you think.” It sounds sensible enough, for on the surface an injunction towards critical thought and independence seems reasonable, certainly when reading an editorial, or a column, or a policy proposal—but this is poetry.

His invocation is diametrically opposed to another New Critical principle, also defined by Wimsatt and Beardsley in The Sewanee Review in 1949, and further explicated in their book The Verbal Icon: Studies in the Meaning of Poetry, published the same year as Asimov’s story. Wimsatt and Beardsley say that genuine criticism doesn’t “talk of tears, prickles, or other physiological symptoms, of feeling angry, joyful, hot, cold, or intense, or of vaguer states of emotional disturbance, but of shades of distinction and relation between objects of emotion.” In other words, it’s not all about you. Being concerned with the poem’s effect tells us everything about the reader but little about the poem. Such a declaration might seem arid, sterile, and inert—especially compared to Keating’s energy—and yet there is paradoxically more life in The Verbal Icon. “Boys, you must strive to find your own voice,” Keating tells a room full of the children of wealth, power, and privilege, who will spend the whole 20th century ensuring that nobody has the option not to hear them. Rather than sounding the triumphant horn of independence, this is mere farting on the bugle of egoism. Poetry’s actual power is that it demands we shut up and listen. Wimsatt and Beardsley aren’t asking the reader not to be changed by a poem—it’s the opposite. They’re demanding that we don’t make an idol from our relative, contingent, arbitrary reactions. A poem is a profoundly alien place, a foreign place, a strange place. We do not go there to meet ourselves; we go there to meet something that doesn’t even have a face. Keating treats poems like mirrors, but they’re windows.

With austerity and sternness, New Criticism is an approach that with sola Scriptura exactitude acknowledges nothing above, below, behind, or beyond the text. Equal parts mathematician and mystic, the New Critic deems objectivity the preeminent goal, for in the novel, or poem, or play properly interpreted she has entered a room separate, an empire of pure alterity. An emphasis on objectivity doesn’t entail singularity of interpretation, for though the New Critics believed in right and wrong readings, good and bad interpretations, they reveled in nothing as much as ambiguity and paradox. But works aren’t infinite. If a text can be many things, it also can’t mean everything. What it breaks is the tyranny of relativism, the cult of “I feel” that defined conservative Victorian criticism, and that ironically defines some contemporary therapeutic manifestations as well. New Criticism drew from the French tradition of explication de texte, the rigorous parsing of grammar, syntax, diction, punctuation, imagery, and narrative. Not only did they supplant Victorian aesthetic criticism’s woolliness, their method was estimably democratic (despite their sometimes-conservative political inclinations, especially among the movement known as the Southern Agrarians). Democratic because none of the habituated knowledge of the upper class—the mores of Eton or Phillips Exeter, summering at Bath or Newport, proper dress from Harrods or Brooks Brothers—now made a difference in appreciating Tennyson or Byron. Upper-class codes were replaced with the severe, rigorous, logical skill of being able to understand Tennyson or Byron, with no recourse to where you came from or who you were, but only to the words themselves. Appreciation is about taste, discernment, and breeding—it’s about acculturation. Analysis? That’s about poetry.

From that flurry of early talent came Richards’s 1924 Principles of Literary Criticism and 1929 Practical Criticism, Empson’s 1930 Seven Types of Ambiguity, Brooks and Warren’s 1938 Understanding Poetry, Brooks’s classic 1947 The Well Wrought Urn: Studies in the Structure of Poetry, and Wimsatt and Beardsley’s 1954 The Verbal Icon, as well as several essays by Ransom and Eliot. The New Critics did nothing less than upend literature’s study by focusing on words, words, words (as Hamlet would say). “A book is a machine to think with,” Richards wrote in Principles of Literary Criticism, and one may disdain that as chilly, but machines do for us that which we can’t do for ourselves. Reading yourself into a poem is as fallacious as sitting in a parked car and thinking that will get you to Stop & Shop. Lest you think that Richards is sterile, he also affirmed that “Poetry is capable of saving us,” and that’s not in spite of it being a machine, but because of it. They sanctified literature by cordoning it off and making it a universe unto itself, while understanding that its ideal rigidity is like absolute zero, an abstraction of readerly investment that by necessity always lets a little heat in. In practice, the job of the critic is profound in its prosaicness. Vivian Bearing, a fictional English professor in Margaret Edson’s harrowing and beautiful play W;t, which contains the most engaging dramatization of close reading ever committed to stage or screen, describes the purpose of criticism as not to reaffirm whatever people want to believe, but that rather by reading in an “uncompromising way, one learns something from [the] poem, wouldn’t you say?”

There have been assaults upon this bastion over the last several decades, yet even while that mélange of neo-orthodoxies that became ascendant in the ’70s and ’80s, when English professor was still a paying job, is sometimes interpreted as dethroning the New Critics—the structuralists and post-structuralists, the New Historicists and the Marxists, the Queer Theorists and the post-colonial theorists—their success was an ironic confirmation of the staying power of Wimsatt, Beardsley, Brooks, Warren, Richards, Empson, and so on. “Like all schools of criticism, the New Critics have been derided by their successors,” writes William Logan in his foreword to Garrick Davis’s Praising It New: The Best of the New Criticism, “but they retain an extraordinary influence on the daily practice of criticism.” After all, when a post-structuralist writes about binary oppositions, there are shades of Empson’s paradox and ambiguity; when Roland Barthes wrote in 1967 that “the reader is without history, biography, psychology” there is a restatement of the argument about effect, and that he was writing in an essay called “The Death of the Author” is the ultimate confirmation against intention. And Jacques Derrida’s deconstruction, that much maligned and misinterpreted word, bane to conservative and balm to radical? It’s nothing more than a different type of close reading—a hyper-attenuated and pure form of it—where pages could be produced on a single comma in James Joyce’s Ulysses. We are still their children.

Though far from an unequivocal celebrant of the New Critics, Terry Eagleton writes in Literary Theory: An Introduction that close reading provided “a valuable antidote to aestheticist chit-chat,” explaining that the method does “more than insist on due attentiveness to the text. It inescapably suggests an attention to… the ‘words on the page.'” Close reading is sometimes slandered as brutal vivisection, but it’s really a manner of possession. In the sifting through of diction and syntax, grammar and punctuation, image and figuration, there are pearls. Take a sterling master—Helen Vendler. Here she is examining Shakespeare’s Sonnet 73, where he writes “In me thou see’st the glowing of such fire/That on the ashes of his youth doth lie.” She notes that the narrator across the lyric “Defines himself not by contrast but by continuity with his earlier state. He is the glowing—a positive word, unlike ruin or fade—of a fire.” Or Vendler on the line in Emily Dickinson’s poem 1779. The poet writes that “To make a prairie it takes a clover and a bee,” and the critic writes “Why ‘prairie’ instead of ‘garden’ or ‘meadow?’ Because only ‘prairie’ rhymes with ‘bee’ and ‘revery,’ and because ‘prairie’ is a distinctly American word.” Or, Vendler, once again, on Walt Whitman, in her book Poets Thinking: Pope, Whitman, Dickinson, Yeats. In Leaves of Grass he begins, “I celebrate myself, and sing myself,” and she observes that “The smallest parallels… come two to a line… When the parallels grow more complex, each requires a whole line, and we come near to the psalmic parallel, so often imitated by [the poet], in which the second verse adds something to the substance of the first.” And those are just examples from Helen Vendler.

When I queried literary studies Twitter about their favorite close readings—to which they responded generously and enthusiastically—I was directed towards Erich Auerbach’s Dante: Poet of the Secular World (to which I’d add Mimesis: The Representation of Reality in Western Literature); Marjorie Garber’s reading of Robert Lowell’s poem “For the Union Dead” in Field Work: Sites in Literary and Cultural Studies; the interpretation of Herman Melville’s Billy Budd in The Barbara Johnson Reader: The Surprise of Otherness; Shoshana Felman’s paper on Henry James titled “Turning the Screw of Interpretation” in Yale French Studies; Susan Howe’s My Emily Dickinson; Randall Jarrell writing about Robert Frost’s “Home Burial” in The Third Book of Criticism; Olga Springer’s Ambiguity in Charlotte Brontë’s Villette; Northrop Frye’s Fearful Symmetry: A Study of William Blake; Vladimir Nabokov on Charles Dickens and Gustave Flaubert in Lectures on Literature; Ian Watt’s The Rise of the Novel: Studies in Defoe, Richardson, and Fielding; Nina Baym on Nathaniel Hawthorne in Novels, Readers, Reviewers: Responses to Fiction in Antebellum America; Minrose Gwin on The Sound and the Fury in The Feminine and Faulkner; Edward Said on Jane Austen’s Mansfield Park in Orientalism; Christopher Ricks’s Milton’s Grand Style (to which I’d add Dylan’s Visions of Sin, which refers not to Thomas but Bob), and so on, and so on, and so on. If looking to analyze my previous gargantuan sentence, just note that I organized said critics by no schema, save to observe how such a diversity includes the old and young, the dead and alive, the traditional and the radical, all speaking to the vitality of something with as stuffy a name as close reading. Risking sentimentality, I’d add another exemplary participant—all of those anonymous graduate students parsing the sentence, all of those undergraduates scanning the line, and every dedicated reader offering attentiveness to the words themselves.
That’s all close reading really is.

It would be naïve to assume that such a practice offers a way out of our current imbroglio. However, when everyone has an opinion but nobody has done the reading, when an article title alone is enough to justify judgment, and criticism tells us more about the critic than the writing, then perhaps slow, methodical, humble close reading might provide more than just explication. Interpretations are multifaceted, but they are not relative, and for all of its otherworldliness, close reading is built on evidence. Poorly done close reading is merely fan fiction. There is something profound in acknowledging that neither the reader nor the author is preeminent, but that the text is rather the thing. It doesn’t serve to affirm what you already know, but rather to instruct you in something new. To not read yourself into a poem, or a novel, or a play, but to truly encounter another mind—not that of the author—but of literature itself, is as close to religion as the modern age countenances. Close reading is the most demonstrative way to experience that writing and reading are their own dimension.

Let’s pretend that you’re a gig worker, and while waiting to drive for Uber or Seamless, you scroll through an article entitled “Nothing Outside the Text.” It begins in the second person, inserting the reader into the narrative. The author invents a mid-century office worker who is traveling home. Place names are used as signifiers; the author mentions “New Rochelle,” “Hempstead,” and “Weehawken,” respectively in Westchester County, Long Island, and New Jersey, gesturing towards how the city’s workers radiate outward. Sensory details are emphasized—the character buys an “egg-salad sandwich,” and we’re told that he stands near the “display with its ticking symphony,” the last two words unifying the mechanical with the artistic—with the first having an explosive connotation—yet there is an emphasis on regularity, harmony, design. We are given a description of a science fiction magazine the man buys, and are told that he settles into his train seat where the magazine’s “cheap print… smudge[s]” his “gray flannel suit,” possibly an allusion to the 1955 Sloan Wilson novel of that name. This character is outwardly conformist, but his desire to be “immersed in tales of space explorers and time travelers” signals something far richer about his inner life. Finally, if you’re this modern gig worker reading about that past office worker, you might note that the latter is engaging the “sort of reading you’d want to cocoon yourself in, a universe to yourself with nothing outside the text.” And in close reading, whether you’re you or me, the past reader or the present, the real or imagined, all that the text ever demands of us—no more and no less—is to enter into that universe on its own terms. For we have always been, if we’re anything, citizens of this text.


This Isn’t the Essay’s Title


“The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” —F. Scott Fitzgerald, “The Crack-Up” (1936)

“You’re so vain, you probably think this song is about you.” —Carly Simon, “You’re So Vain” (1972)

On a December morning in 1947 when three fellows at Princeton’s Institute for Advanced Study set out for the Third Circuit Court in Trenton, it was decided that the job of making sure that the brilliant but naively innocent logician Kurt Gödel didn’t say something intemperate at his citizenship hearing would fall to Albert Einstein. Economist Oskar Morgenstern would drive, Einstein rode shotgun, and a nervous Gödel sat in the back. With squibs of low winter light, both wave and particle, dappled across the rattling windows of Morgenstern’s car, Einstein turned back and asked, “Now, Gödel, are you really well prepared for this examination?” There had been no doubt that the philosopher had adequately studied, but whether it was prudent for him to be fully honest was another issue. Less than two centuries before, the signatories of the U.S. Constitution had supposedly crafted a document defined by separation of powers and coequal government, checks and balances, action and reaction. “The science of politics,” wrote Alexander Hamilton in “Federalist No. 9,” “has received great improvement,” though as Gödel discovered, clearly not perfection. With a completism that only a Teutonic logician was capable of, Gödel had carefully read the foundational documents of American political theory, he’d pored over the Federalist Papers and the Constitution, and he’d made an alarming discovery.

It’s believed that while studying Article V, the portion that details the process of amendment, Gödel realized that there was no safeguard against that article itself being amended. Theoretically, a sufficiently powerful political movement with legislative and executive authority could rapidly amend the articles of amendment so that a potential demagogue would be able to rule by fiat, all while such tyranny was perfectly constitutional. Here was a paradox at the heart of the Constitution: the document that supposedly guaranteed democracy had coiled within it rank authoritarianism. All three men driving to Trenton had a keen awareness of tyranny; all were refugees from Nazi Germany; all had found safe haven on the pristine streets of suburban Princeton. After the Anschluss, Gödel was a stateless man, and though raised Protestant he was suspect to the Nazis and forced to emigrate. Gödel, with his wife, departed Vienna by the Trans-Siberian railroad, crossed from Japan to San Francisco, and then took the remainder of his sojourn by train to Princeton. His path had been arduous and he’d earned America, so when Gödel found a paradox at the heart of the Constitution, his desire to rectify it was born from patriotic duty. At the hearing, the judge asked Gödel how it felt to become a citizen of a nation where it was impossible for the government to fall into anti-democratic tyranny. But it could, Gödel told him, and “I can prove it.” Apocryphally, Einstein kicked the logician’s chair and ended that syllogism.

Born in Austria-Hungary and a citizen, in turn, of Czechoslovakia, Austria, Germany, and finally the United States, Gödel was a man whose very self-definition was mired in incompleteness, contradiction, and unknowability. Once he had parsed logical positivism among luminaries such as Rudolf Carnap and Moritz Schlick, enjoying apfelstrudel and espresso at the Café Reichsrat on Rathausplatz while they discussed the philosophy of mathematics; now he found himself eating apple pie and drinking weak coffee in the Yankee Doodle Tap Room on Nassau Street—and he was grateful. Gone were the elegant Viennese wedding-cake homes of the Ringstrasse, replaced with Jersey’s clapboard colonials; no more would Gödel debate logic among the rococo resplendence of the University of Vienna, but at Princeton he was at least across the hall from Einstein. “The Institute was to be a new kind of research center,” writes Ed Regis in Who Got Einstein’s Office?: Eccentricity and Genius at the Institute for Advanced Study. “It would have no students, no teachers, and no classes,” so that its fellows could be devoted purely to thought. Its director J. Robert Oppenheimer (of Manhattan Project fame) called it an “intellectual hotel”; physicist Richard Feynman was less charitable, referring to it as a “lovely house by the woods” for “poor bastards” no longer capable of keeping up. Regardless, it was to be Gödel’s final home, and there was something fitting in that.

Seventeen years before his trip to Trenton, it was at the Café Reichsrat that he had presented the discovery with which he’d forever be inextricably connected—Gödel’s Incompleteness Theorems. In 1930 Gödel irrevocably altered mathematics by demonstrating that the dream of completeness that had dogged deduction since antiquity was only a mirage. “Any consistent formal system,” argues Gödel in his first theorem, “is incomplete… there are statements of the language… which can neither be proved nor disproved.” In other words, no consistent set of axioms rich enough to express arithmetic can prove every true statement formulable within its own system—the rationalist dream of a unified, self-evidently provable system is only so much fantasy. Math, it turns out, will never be depleted, since there can never be a solution to all mathematical problems. In Gödel’s formulation, a system must either sometimes produce falsehoods or sometimes generate unprovable truths; it can never consistently render only, and all, provable truths. As the cognitive scientist Douglas Hofstadter explained in his countercultural classic Gödel, Escher, Bach: An Eternal Golden Braid, “Relying on words to lead you to the truth is like relying on an incomplete formal system to lead you to the truth. A formal system will give you some truth, but… a formal system, no matter how powerful—cannot lead to all truths.” In retrospect, the smug certainties of American exceptionalism should have been no match for Gödel, whose scalpel-like mind had already eviscerated mathematics, philosophy, and logic, to say nothing of some dusty parchment once argued over in Philadelphia.

His theorems rest on a variation of what’s known as the “Liar’s Paradox,” which asks what the logical status of a proposition such as “This statement is false” might be. If that sentence is telling the truth, then it must be false, but if it’s false, then it must be true, ad infinitum, in an endless loop. For Gödel, that proposition is amended to “This sentence is not provable,” and his reasoning demonstrates that a sufficiently powerful formal system can’t settle that proposition: to prove the statement would render it false and the system inconsistent, while if it is unprovable, then it is true but forever beyond the system’s reach. As with the Constitution and its paeans to democracy, so must mathematics be rendered perennially useful while still falling short of perfection. The elusiveness of certainty bedeviled Gödel throughout his life; famously paranoid, he was pushed into a scrupulous anxiety by the assassination of his friend Schlick at the hands of a Nazi student in 1936. After the death of his best friend Einstein in 1955 he became increasingly isolated. “Gödel’s sense of intellectual exile deepened,” explains Rebecca Goldstein in Incompleteness: The Proof and Paradox of Kurt Gödel. “The young man in the dapper white suit shriveled into an emaciated man, entombed in a heavy overcoat and scarf even in New Jersey’s hot humid summers, seeing plots everywhere… His profound isolation, even alienation, from his peers provided fertile soil for that rationality run amuck which is paranoia.” When his beloved wife fell ill in 1977, Gödel quit eating since she could no longer prepare his meals. The ever-logical man whose entire career had demonstrated the fallibility of rationality had concluded that only his wife could be trusted not to poison his food, and so when she was unable to cook, he reasoned, validly if not soundly, that it made more sense simply to quit eating. When he died, Gödel weighed only 65 pounds.

Gödel’s thought was enmeshed in that orphan of logic that we call paradox. As was Einstein’s, that man who converted time into space and space into time, who explained how energy and mass were the same thing so that (much to his horror) the apocalyptic false dawn of Hiroshima was the result. Physics in the 20th century had cast off the intuitive coolness of classical mechanics, discovering that contradiction studded the foundation of reality. There was Werner Heisenberg with his uncertainty over the location of individual subatomic particles, Louis de Broglie and the strange wave-particle duality by which matter itself behaves like light, Niels Bohr and his complementarity, whereby mutually contradictory pictures of the atom are each indispensable, and the collapsing wave functions of Erwin Schrödinger, for whom it could be imagined that a hypothetical feline was capable of being simultaneously alive and dead. Science journalist John Gribbin explains in Schrödinger’s Kittens and the Search for Reality: Solving the Quantum Mysteries that contemporary physics is defined by “paradoxical phenomena as photons (particles of light) that can be in two places at the same time, atoms that go two ways at once… [and how] time stands still for a particle moving at light speed.” Western thought has long prized logical consistency, but physics in the 20th century abolished all of that in glorious absurdity, and from those contradictions emerged modernity—the digital revolution, semiconductors, nuclear power, all built on paradox.

The keystone of classical logic is the so-called “Law of Non-Contradiction.” Simply put, something cannot both be and not be what it happens to be simultaneously, or if symbolic logic is your jam: ¬(p ∧ ¬p), and I promise you that’s the only formula you will see in this essay. Aristotle said that between two contradictory statements one must be correct and the other false—“it will not be possible to be and not to be the same thing,” he writes in The Metaphysics—but the anarchic potential of the paradox greedily desires truth and its antithesis alike. And again, two millennia later, the philosopher Gottfried Wilhelm Leibniz tried to succinctly ward off contradiction in his New Essays on Human Understanding when he declared, “Every judgement is either true or false,” and yet paradoxes fill the history of metaphysics like landmines studded across the Western Front. Paradox is the great counter-melody of logic—it is the question of whether an omnipotent God could will Himself unable to do something, and it’s the eye-straining M.C. Escher lithograph “Waterfall” with its intersecting Penrose triangles showing a stream cascading from an impossible trough. Paradox is the White Queen’s declaration in Lewis Carroll’s Through the Looking-Glass that “sometimes I’ve believed as many as six impossible things before breakfast,” and the Church Father Tertullian’s creedal statement that “I believe it because it is absurd.” The cracked shadow logic of our intellectual tradition, paradox is confident though denounced by philosophers as sham-faced; it is troublesome and it is not going anywhere. When a statement is made synonymous with its opposite, then traditional notions of propriety are dispelled and the fun can begin. “But one must not think ill of the paradox,” writes Søren Kierkegaard in Philosophical Fragments, “for the paradox is the passion of thought, and the thinker without the paradox is like the lover without passion: a mediocre fellow.”

As a concept, paradox may have found its intellectual origin on the sunbaked, dusty, scrubby, hilly countryside of Crete: the mythic homeland of the Minotaur, who is man and beast, human and bull, a walking, thinking, raging horned paradox covered in cowhide and imprisoned within the labyrinth. Epimenides, an itinerant philosopher some seven centuries before Christ, supposedly said that “All Cretans are liars” (St. Paul actually quotes this assertion in his epistle to Titus). A version of the aforementioned Liar’s Paradox thus ensues: if Epimenides, himself a Cretan, is telling the truth, then he is lying, and if he is lying, then he is telling the truth. This class of paradoxes has multiple variations (in the Middle Ages they were known as “insolubles”—the unsolvable). For example, consider two sentences vertically arranged; the upper reads “The statement below is true” and the lower “The statement above is false,” and again the reader is caught in a maddening feedback loop. Martin Gardner, who for several decades penned the delightful “Mathematical Games” column in Scientific American, asks in Aha! Gotcha: Paradoxes to Puzzle and Delight, “Why does this form of the paradox, in which a sentence talks about itself, make the paradox clearer? Because it eliminates all ambiguity over whether a liar always lies and a truth-teller always tells the truth.” The paradox is a function of language, and in that way is the cousin of tautology, save that where a tautology is always necessarily true, a paradox is necessarily both true and false at once.

Some intrinsic meaning is elusive in all of this, so that it would be easy to reject the form as rank stupidity, but paradoxes provide a crucial service. In paradox, we experience the breakdown of language and of literalism. Whether paradoxes are glitches in how we arrange our words or something more intrinsic, they signify a null-space where the regular ways of thinking, of understanding, of writing, no longer hold. Few crafters of the form are as synonymous with it as the fifth-century BCE philosopher Zeno of Elea. Consider his famed dichotomy paradox, wherein Zeno concludes that motion itself must be impossible, since the movement from point A to point B always necessitates a halving of distance, forever (and so the destination itself can never be reached). Or his celebrated arrow paradox, which Aristotle explains in Physics: “If everything when it occupies an equal space is at rest at that instant of time, and if that which is in location is always occupying such a space at any moment, the flying arrow is therefore motionless at that instant of time and at the next instant of time.” And yet the arrow still moves. Roy Sorensen explains in A Brief History of the Paradox that the form “developed from the riddles of Greek folklore” (as with the Sphinx’s famous riddle in the story of Oedipus), so that words have always mediated these conundrums, while Anthony Gottlieb writes in The Dream of Reason: A History of Philosophy from the Greeks to the Renaissance that “ingenious paradoxes… try to discredit commonsense views by demonstrating that they lead to unacceptable consequences,” in a gambit as rhetorical as it is analytical. Though connected primarily with mathematics and philosophy, paradox is fundamentally a literary genre, and one ironically (or paradoxically?) associated with the failure of language itself.
All of the great authors of paradox—the pre-Socratics, the Zen masters, Jesus Christ—were at their core storytellers; they were writers. Words stretched to incomprehension and narratives unspooled are their fundamental medium. Epimenides’s utterance triggers a collapse of meaning, but where the literal perishes, room is made for the figurative. Paradox is the mother of poetry.

I’d venture that the contradictions of life are the subject of all great literature, but paradoxes appear in more obvious forms, too. “There was only one catch and that was Catch-22,” writes Joseph Heller. The titular regulation of Heller’s Catch-22 concerns the mental state of American pilots flying missions over the Mediterranean during the Second World War: if a pilot asks to be grounded on account of insanity, the request itself demonstrates his sanity, since no sane man would want to fly, and so it’s impossible to avoid combat. Yossarian was “moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle.” Because politics is often the collective social function of reductio ad absurdum, political novels make particularly adept use of paradox. George Orwell did something similar in his celebrated (and oft-misinterpreted) novel of dystopian horror 1984, wherein the state apparatus trumpets certain commandments, such as “War is peace. /Freedom is slavery. /Ignorance is strength.” Perhaps such dialectics are the (non-Marxist) socialist Orwell’s parody of Hegelian doublespeak, a mockery of that supposed engine of human progress that moves through thesis, antithesis, and synthesis. Within paradox there is a certain freedom, the ability to understand that contradiction is an attribute of our complex experience, but when statements are also defined as their opposite, meaning itself can be the casualty. Paradox understood as a means to enlightenment bestows anarchic freedom; paradox understood as an end unto itself is nihilism.

Political absurdities are born out of the inanity of rhetoric and the severity of regulation, but paradox can entangle not just society but the fabric of reality as well. Science fiction is naturally adept at examining the snarls of existential paradox, with time travel a favored theme. Paul Nahin explains in Time Machines: Time Travel in Physics, Metaphysics, and Science Fiction that temporal paradoxes derive from the simple question of “What might happen if a time traveler changed the past?” This might seem an issue of entirely hermetic concern, save that in contemporary physics neither general relativity nor quantum mechanics precludes time travel (indeed certain interpretations of those theories downright necessitate it). So even the idea of being able to move freely through past, present, and future has implications for how reality is constituted, whether or not we happen to be the ones stepping out of the tesseract. “The classic change-the-past paradox is, of course, the so-called grandfather paradox,” writes Nahin, explaining that it “poses the question of what happens if an assassin goes back in time and murders his grandfather before his (the time-travelling murderer’s) own father is born.” The grandfather’s murder requires a murderer, but for the murderer in question to be born the grandfather must not have been murdered, so that the murderer can travel back in time and kill his ancestor, and again we’re in a strange loop.

Variations exist as far back as the golden age of the pulps, appearing in magazines like Amazing Stories as early as 1929. More recently, Ray Bradbury explored the paradox in “A Sound of Thunder,” in which he is explicit that any travel to the past will alter the future in baroque ways, with a 21st-century tourist accidentally killing a butterfly in the Cretaceous and thereby ensuring the election of an openly fascistic U.S. president millions of years later (though the divergence of parallel universes is often proffered as a means of avoiding such implications). In Bradbury’s estimation, every single thing in history, every event, every incident, is “an exquisite thing,” so that even a small thing could “upset balances and knock down a line of small dominoes and then big dominoes and then gigantic dominoes all down the years across Time.” This conundrum need not only be phrased in patricidal terms, for what all temporal paradoxes have at their core is an issue of causality—if we imagine that time progresses from past through future, then what happens when those terms get all mixed up? How can we possibly understand a past that’s influenced by a future that in turn has been affected by the past?

Again, this is no issue of scholastic quibbling, for though we experience time as moving forward like one of Zeno’s arrows, the physics itself tells us that past, present, and future are constituted in entirely stranger ways. One version of the grandfather paradox involves, rather than grisly murder, the transfer of information from the future to the past; for example, in Tim Powers’s novel The Anubis Gates, a time traveler is stranded in the early 19th century. The character realizes that “I could invent things—the light bulb, the internal combustion engine… flush toilets.” But he abandons this hubris, for “any such tampering might cancel the trip I got here by, or even the circumstances under which my mother and father met.” Many readers will perhaps be aware of temporal paradoxes from Robert Zemeckis’s Back to the Future film trilogy (which, what it lacks in patricide, it makes up for in Oedipal sentiments), notably a scene in which Marty McFly inadvertently introduces Chuck Berry to his own song “Johnny B. Goode.” Ignoring the troubling implication that a suburban white teenager had to somehow teach the Black inventor of rock ‘n’ roll his own music, Back to the Future presents a classic temporal paradox—if McFly first heard “Johnny B. Goode” on Berry’s records, and Berry first heard the song from McFly, then whence did the song actually come? (Perhaps from God.)

St. Augustine asks in his Confessions, “What is time, then? If nobody asks me, I know; but if I were desirous to explain it to one who should ask me, I plainly do not know.” Paradox sprouts from the fertile soil of our own incomprehension, and to its benefit there is virtually nothing that humans really understand, at least not fully. Time is the oddest thing of all, if we honestly confront the enormity of it. I’m continually surprised that I can’t easily walk into 1992 as if it were a room in my house. No surprise, then, that time and space are so often explored in the literature of paradox. Oxymoron and irony are the milquetoast cousins of paradox, but poetry at its most polished, pristine, and adamantine elevates contradiction into an almost religious principle. Among the 17th-century poets who worked in the mold of John Donne, paradox was often a central aspect of what critics have called the “metaphysical conceit.” These brilliant, crystalline rhetorical turns are often like Zeno’s paradoxes rendered into verse, expanding and compressing time and space with a dialectical glee. Consider the good Dr. Donne, master of both enigma and the erotic, who in his poem “The Good-Morrow” imagined two lovers who have made “one little room an everywhere.” The narrator and the beloved’s bed-chamber—perhaps there is heavy wooden paneling on the wall and a canopy bed near a fireplace burning green wood, a full moon shining through the mottled crown-glass window—is as if a singularity where north, south, east, and west, past, present, and future, are all collapsed into a point. Even more obvious is Donne in “The Paradox,” wherein he writes that “Once I loved and died; and am now become/Mine epitaph and tomb;/Here dead men speak their last, and so do I,” the talking corpse its own absurdity made flesh.

So taken were the 20th-century scholars known as the New Critics with the ingenuity of metaphysical conceits that Cleanth Brooks would argue in his classic The Well Wrought Urn: Studies in the Structure of Poetry that the “language of poetry is the language of paradox.” Donne, Andrew Marvell, George Herbert, and Henry Vaughan used paradox as a theme and a subject—but to write poetry itself is paradoxical. To write fiction is paradoxical. Even to write nonfiction is paradoxical. To write at all is paradoxical. A similar sentiment concerning the representational arts is conveyed in the Belgian surrealist painter René Magritte’s much-parodied 1929 work “The Treachery of Images.” Magritte presents an almost absurdly recognizable smoking pipe, polished to a totemistic brown sheen with a shiny black mouthpiece, so basically obvious that it might as well be from an advertisement, and beneath it he writes in cursive script “Ceci n’est pas une pipe”—”This is not a pipe.” A seemingly blatant contradiction, for what could the words possibly relate to other than the picture directly above them? But as Magritte told an interviewer, “if I had written on my picture ‘This is a pipe,’ I’d have been lying!” For you see, Magritte’s image is not a pipe; it is an image of a pipe. Like Zeno’s paradoxes, what may initially seem to be simple-minded contrarianism, a type of existential trolling if you will, belies a more subtle observation. The philosopher Michel Foucault writes in his slender volume This Is Not a Pipe that “Contradiction could exist only between two statements,” but that in the painting “there is clearly but one, and it cannot be contradictory because the subject of the propositions is a simple demonstrative.” According to Foucault, the picture, though self-referential, is not a paradox in the logical sense of the word.
And yet there is an obvious contradiction between the viewer’s experience of the painting and the reality that they’ve not looked upon some carefully carved and polished pipe, but rather only brown and black oil carefully applied to stretched canvas.

This, then, is the “treachery” of which Magritte speaks, the paradox that is gestated within that gulf where meaning resides, a valley strung between the-thing-in-itself and the way in which we represent the-thing-in-itself. Writing is in some ways even more treacherous than painting, for at least Magritte’s picture looks like a pipe—perhaps other than some calligraphic art, literature appears as nothing so much as abstract squiggles. Moby Dick is not a whale and Jay Gatsby is not a man. They are less than a picture of a pipe, for we have not even images of them, only ink-stained books, and the abject abstraction of mere letters. And yet the paradox is that from that nothingness is generated the most sumptuous something; just as the illusion of painting can trick one into the experience of the concrete, so does the more bizarre phenomenon of the literary imagination make you hallucinate characters generated from the non-figurative alphabet. From this essay, if I’ve done even a somewhat adequate job, you’ve hopefully been able to envision Gödel and Einstein bundled into a car on the Jersey turnpike, windows frosted with nervous breath and laughter, the sun rising over the wooded Pine Barrens—or to imagine John and Anne Donne bundled together under an exquisite blanket of red and yellow and blue and green, the heavy oak door of their chamber closed tight against the English frost—but of course you’ve seen no such thing. You’ve only skimmed through your phone while sitting on the toilet, or toggled back and forth between open tabs on your laptop. Literature is paradoxical because it necessitates the invention of entire realities out of the basest nothing; the treachery of representation is that “This is not a pipe” is a principle that applies to absolutely all of the written word, and yet when we read a novel or a poem we can smell the burning tobacco.

All of literature is a great enigma, a riddle, a paradox: what the Zen masters of Japanese Buddhism call a koan. Religion is too often maligned for being haunted by the hobgoblin straw-man of consistency, and yet the only real faith is one mired in contradiction, and few practices embrace paradox quite like Zen. Central to Zen is the breaking down of the dualities that separate all of us from absolute being, the distinction between the I and the not-I. As a means to do this, Zen masters deploy the enigmatic stories, puzzles, sayings, and paradoxes of the koan, with the goal of forcing the initiate toward the para-logical, a catalyst for the instantaneous enlightenment known as satori. Sometimes reduced to the “What is the sound of one hand clapping?” variety of puzzle (though that is indeed a venerable koan), the form is defended by the Zen scholar D.T. Suzuki, who explains in An Introduction to Zen Buddhism that these apparently “paradoxical statements are not artificialities contrived to hide themselves behind a screen of obscurity; but simply because the human tongue is not an adequate organ for expressing the deepest truth of Zen, the latter cannot be made the subject of logical exposition; they are to be experienced in the inmost soul when they become for the first time intelligible.” A classic koan, attributed to the ninth-century Chinese monk Linji Yixuan, famously instructs: “If you meet the Buddha, kill him.” Linji’s point is similar to Magritte’s—”This is not the Buddha.” It’s a warning about falling into the trap of representation, of failing to resist the treachery of images, and yet the paradox is that the only way we have of communicating is through the fallible, inexact medium of words. Zen is the only religion whose purpose is to overcome religion, and everything else for that matter. It asks us to use its paradoxes as a ladder by which we can climb toward ultimate being—and then we’re to kick that ladder over.
In its own strange way, literature is the ultimate koan: all of these novels and plays, poems and essays, all words, words, words meaning nothing and signifying everything, gesturing towards a Truth beyond truth, and yet nothing but artfully arranged lies (and even less than that, simply arrayed squiggles on a screen). To read is to court a type of enlightenment, of transcendence, and not just because of the questions literature raises, but because of literature’s very existence in the first place.

Humans are themselves the greatest of paradoxes: someone who is kind can harbor flashes of rage, the cruelest of people are capable of genuine empathy, our greatest pains often lead to salvation, and we’re sometimes condemned by that which we love. In a famous 1817 letter to his brothers, the English Romantic poet John Keats extolled literature’s most sublime ability, that of dwelling in “uncertainties, mysteries, doubts, without any irritable reaching after fact and reason,” a quality that he called “negative capability.” There is an irony in our present abandonment of nuance, for ours is a paradoxical epoch through and through: an era of unparalleled technological superiority and appalling barbarity, of instantaneous knowledge and virtually no wisdom. Ours is a Manichean age as well, one that valorizes consistency above all other virtues, though it is that most suburban of values; yet Keats understood that if we’re to give any credit to literature, and for that matter any credit to people, we must be comfortable with complexity and contradiction. Negative capability is what separates the moral from the merely didactic. In all of our baroque complexity, paradox is the operative mode of literature, the only rhetorical gambit commensurate with displaying the full spectrum of what it means to be a human. We are all such glorious enigmas—creatures of finite dimension and infinite worth. None of us deserves grace, and yet all of us are worthy of it, a moral paradox that makes us beautiful not in spite of its cankered reality, but because of it. The greatest of paradoxes is that within that contradictory form there is the possibility of genuine freedom—of liberation.


Circles of the Damned


Maybe during this broiling summer you’ve seen the footage—in one striking video, women and men stand dazed on a boat sailing away from the Greek island of Evia, watching as ochre flames consume their homes in the otherwise dark night. Similar hellish scenes are unfolding in Algeria, Tunisia, and Libya, as well as in Turkey and Spain. Siberia, an unlikely place for such a conflagration, is currently experiencing the largest wildfire in recorded history, joined by large portions of Canada. As California burns, the global nature of our immolation is underscored by horrific news around the world, a demonstration of the United Nations’ Intergovernmental Panel on Climate Change’s conclusion that such disasters are “unequivocally” anthropogenic, with the authors signaling a “code red” for the continuation of civilization. On Facebook, the mayor of a town in Calabria mourned that “We are losing our history, our identity is turning to ashes, our soul is burning,” and though he was writing specifically about the fires raging in southern Italy, it’s a diagnosis for a dying world as well.

Seven centuries ago, another Italian wrote in The Divine Comedy, “Abandon all hope ye who enter here,” which seems just as applicable in 2021 as it did in 1321. That exiled Florentine had similar visions of conflagration, describing how “Above that plain of sand, distended flakes/of fire showered down; their fall was slow –/as snow descends on alps when no wind blows… when fires fell, /intact and to the ground.” This September sees the 700th anniversary of both the completion of The Divine Comedy and the death of its author, Dante Alighieri. But despite the chasm of history that separates us, his writing about how “the never-ending heat descended” holds a striking resonance. During our supposedly secular age, the singe of the inferno feels hotter when we’ve pushed our planet to the verge of apocalyptic collapse. Dante, you must understand, is ever applicable in our years of plague and despair, tyranny and treachery.

People are familiar with The Divine Comedy’s tropes even if they’re unfamiliar with Dante, because all of it—the flames and sulphur, the mutilations and the shrieks, the circles of the damned and the punishments fitted to the sin, the descent into subterranean perdition and the demonic cacophony—finds its origin with him. Neither Reformation nor revolution has dispelled the noxious fumes from the inferno. There must be a distinction between the triumphalist claim that Dante says something vital about the human condition and the objective fact that in some ways Dante actually invented the human condition (or a version of it). When watching the videos of people escaping from Evia, it took me several minutes to understand what it was that I was looking at, and yet those nightmares have long existed in our culture, since Dante gave us a potent vocabulary to describe Hell centuries ago. For even to deny Hell is to deny something first completely imagined by a medieval Florentine.

Son of a prominent family, enmeshed in conflicts between the papacy and the Holy Roman Empire, a respected though ultimately exiled citizen, Dante resided far from the shores of modernity, though the broad contours of his poem, with its visceral conjurations of the afterlife, are worth repeating. “Midway upon the journey of our life/I found myself within a forest dark,” Dante famously begins. At the age of 35, Dante descends into the underworld, guided by the ancient Roman poet Virgil and sustained by thoughts of his platonic love for the lady Beatrice. That Inferno, which constitutes only the first third of The Divine Comedy (subsequent sections consider Purgatory and Heaven), remains the most read speaks to something cursedly intrinsic in us. The poet descends like Orpheus, Ulysses, and Christ before him deep into the underworld, journeying through nine concentric circles, each more brutal than the previous. Perdition is a space of “sighs and lamentations and loud cries” filled with “Strange utterances, horrible pronouncements, /accents of anger, words of suffering, /and voices shrill and faint and beating hands,” souls buffeted “forever through that turbid, timeless air, /like sand that eddies when a whirlwind swirls.” Cosmology is indistinguishable from ethics, so that each circle is dedicated to particular sins: the second circle is reserved for crimes of lust, the third for those of gluttony, the fourth for greed, the wrathful reside in the fifth circle, the sixth is the domain of the heretics, the seventh is for the violent, all those guilty of fraud live in the eighth, and at the very bottom that first rebel Satan is eternally punished alongside all traitors.

Though The Divine Comedy couldn’t help but reflect the concerns of Dante’s century, he still formulated a poetics of damnation so tangible and disturbing that it’s still the measure of hellishness, wherein he “saw one/Rent from the chin to where one breaks wind. /Between his legs were hanging down his entrails;/His heart was visible, and the dismal sack/That makes excrement of what is eaten.” Lest it be assumed that this is simply sadism, Dante is cognizant of how gluttony, envy, lust, wrath, sloth, covetousness, and pride could just as easily reserve him a space. Which is part of his genius; Dante doesn’t just describe Hell, which in its intensity provides an unparalleled expression of pain, but he also manifests a poetry of justice, where he’s willing to implicate himself (even while placing several of his own enemies within the circles of the damned). 

No doubt the tortures meted out—being boiled alive for all eternity, forever swept up in a whirlwind, or masticated within the freezing mouth of Satan—are monstrous. The poet doesn’t disagree—often he expresses empathy for the condemned. But the disquiet that we and our fellow moderns might feel is in part born out of a broad theological movement that occurred over the centuries in how people thought about sin. During the Reformation, both Catholics and Protestants began to shift the model of what sin is away from the Seven Deadly Sins, and towards the more straightforward Ten Commandments. For sure there was nothing new about the Decalogue, and the Seven Deadly Sins haven’t exactly gone anywhere, but what took hold—even subconsciously—was a sense that sins could be reduced to a list of literal injunctions. Don’t commit adultery, don’t steal, don’t murder. Because we often think of sin as simply a matter of broken rules, the psychological acuity of Dante can be obscured. But the Seven Deadly Sins are rather more complicated—we all have to eat, but when does it become gluttony? We all have to rest, but when is that sloth?

An interpretative brilliance of the Seven Deadly Sins is that they explain how an excess of otherwise necessary human impulses can pervert us. Every human must eat; most desire physical love; we all need the regeneration of rest—but when we slide into gluttony, lust, sloth, and so on, it can feel as if we’re sliding into the slime that Dante describes. More than a crime, sin is a mental state which causes pain—both within the person who is guilty and to those who suffer because of those actions. In Dante’s portrayal of the stomach-dropping, queasy, nauseous, never-ending uncertainty of the damned’s lives, the poet conveys a bit of their inner predicament. The Divine Comedy isn’t some punitive manual, a puritan’s little book of punishments. Rather than a catalogue of what tortures match which crimes, Dante’s book expresses what sin feels like. Historian Jeffrey Burton Russell writes in Lucifer: The Devil in the Middle Ages how in hell “we are weighted down by sin and stupidity… we sink downward and inward… narrowly confined and stuffy, our eyes gummed shut and our vision turned within ourselves, drawn down, heavy, closed off from reality, bound by ourselves to ourselves, shut in and shut off… angry, hating, and isolated.”

If such pain were only experienced by the guilty, that would be one thing, but sin affects all within the human community who suffer as a result of pride, greed, wrath, and so on. There is a reason why the Seven Deadly Sins are what they are. In a world of finite resources, to valorize the self above all others is to take food from the mouths of the hungry, to hoard wealth that could be distributed to the needy, to claim vengeance as one’s own when it is properly the purview of society, to enjoy recreation upon the fruits of somebody else’s labor, to reduce another human being to a mere body, to covet more than you need, or to see yourself as primary and all others as expendable. Metaphor is the poem’s currency, and what’s more real than the intricacies of how organs are pulled from orifices is how sin—that disconnect between the divine and humanity—is experienced. You don’t need to believe in a literal hell—I don’t—to see what’s radical in Dante’s vision. What Inferno offers isn’t just grotesque descriptions, increasingly familiar though they may be on our warming planet, but also a model of thinking about responsibility to each other in a connected world.

Such is the feeling of the anonymous authors of the 2018 anarchist manifesto The Invisible Committee—a surprise hit when published in France—who opined that “No bonds are innocent” in our capitalist era, for “We are already situated within the collapse of a civilization,” structuring their tract around the circles of Dante’s inferno. Since its composition, The Divine Comedy has run like a molten vein through culture both rarefied and popular; from being considered by T.S. Eliot, Samuel Beckett, Primo Levi, and Derek Walcott, to being referenced in horror films, comic books, and rock music. Allusion is one thing, but what we can see with our eyes is another—as novelist Francine Prose writes in The Guardian, those images of people fleeing from Greek wildfires are “as if Dante filmed the Inferno on his iPhone.” For centuries artists have mined Inferno for raw materials, but now in the sweltering days of the Anthropocene we are enacting it. To note that our present appears as a place where Hell has been pulled up from the bowels of the earth is a superficial observation, for though Dante presciently gives us a sense of what perdition feels like, he crucially also provided a means to identify the wicked.

Denizens of the nine circles were condemned because they worshiped the self over everybody else; now the rugged individualism that is the heretical ethos of our age has made man-made apocalypse probable. ExxonMobil drills for petroleum in the heating Arctic and the apocalyptic QAnon cult proliferates across the empty chambers of Facebook and Twitter; civil wars are fought in the Congo over the tin, tungsten, and gold in the circuit boards of the Android you’re reading this essay with; children in Vietnam and Malaysia sew the clothing we buy at The Gap and Old Navy; and the simplest request for people to wear masks so as to protect the vulnerable goes unheeded in the name of “freedom,” as our American Midas Jeff Bezos barely flies to outer space while workers in Amazon warehouses are forced to piss in bottles rather than be granted breaks. Responsibility for our predicament is unequally distributed: those in the lowest circle are the ones who belched out carbon dioxide for profit knowing full well the effects, those who promoted a culture of expendable consumerism and valorized the rich at the expense of the poor. Late capitalism’s operative commandment is to pretend that all seven of the deadly sins are virtues. Literary scholar R.W.B. Lewis describes the “Dantean principle that individuals cannot lead a truly good life unless they belong to a good society,” which means that all of us are in a lot of trouble. Right now, the future looks a lot less like paradise and more like inferno. Dante writes that “He listens well who takes notes.” Time to pay attention.

Bonus Link:—Is There a Poet Laureate of the Anthropocene?

Image Credit: Wikipedia

Who’s There?: Every Story Is a Ghost Story


“One need not be a chamber to be haunted.” —Emily Dickinson

Drown Memorial Hall was only a decade old when it was converted into a field hospital for students stricken with the flu in the autumn of 1918. A stolid, grey building of three stories and a basement, Drown Hall sits halfway up South Mountain, where it looks over the Lehigh Valley to the federal portico of the white-washed Moravian Central Church across the river, and the hulking, rusting ruins of Bethlehem Steel a few blocks away. It is composed of stone with the choppy texture of the north Atlantic in the hour before a squall, its yellow windows buffeted by mountain hawks and grey Pennsylvania skies. Built in honor of Lehigh University’s fourth president, a mustachioed Victorian chemistry professor, Drown was intended as a facility for leisure, exercise, and socialization, housing (among other luxuries) bowling alleys and chess rooms. Catherine Drinker Bowen enthused in her 1924 History of Lehigh University that Drown exuded “dignity and, at the same time, a certain at-home-ness to every function held there,” that the building “carries with it a flavor and spice which makes the hotel or country club hospitality seem thin, flat and unprofitable.” If Drown was a monument to youthful exuberance, innocent pluck, and boyish charm, then by the height of the pandemic it had become a cenotaph to cytokine storms. Only a few months after basketballs and Chuck Taylors would have skidded across its gymnasium floor, those same men would lie on cots hoping not to succumb to the illness. Twelve men would die of the influenza in Drown.

After its stint as a hospital, Drown would return to being a student center, then the Business Department, and by the turn of our century the English Department. It was in that final purpose that I got to know Drown a decade ago, when I was working on my PhD. Toward the end of my first year, I had to go to my office in the dusk after-hours, when lurid orange light breaks through the cragged and twisted branches of still leafless trees in the cold spring, looking like nothing so much as jaundiced fingers twisting the black bars of a broken cage, or like the spindly embers of a church’s burnt roof, fires still crackling through the collapsing wood. I had to print a seminar paper for a class on 19th-century literature, and then quickly adjourn to my preferred bar. When I keyed into the locked building, it was empty, silent save for the eerie neon hum of the never-used vending machines and the unnatural pooling luminescence of perennially flickering fluorescent lights in the stairwells at either end of the central hall. While in a basement computer lab, I suddenly heard a burst of noise upstairs come from one end of the hall and rapidly progress towards the other—the unmistakable sound of young men dribbling a basketball. Telling myself that it must be the young children of one of the department’s professors, I shakily ascended. As soon as I got to the top the noise ceased. The lights were out. The building was still empty. Never has an obese man rolled down that hill quite as quickly as I did in the spring of 2011.

There are several rational explanations—students studying in one of the classrooms even after security would have otherwise locked up. Or perhaps the sound did come from some faculty kids (though to my knowledge nobody was raising adolescents at that time). Maybe there was something settling strangely, concrete shifting oddly or water rushing quickly through a pipe (as if I didn’t know the difference between a basketball game and a toilet flushing). Even depleted of all explanations, I know what I heard and what it sounded like, and I still have no idea what it was. Nor is this the only ghost story that I could recount—there was the autumn of 2003 when, walking back at 2 a.m. after the close of the library at Washington and Jefferson College, feet unsteady on slicked brown leaves blanketing the frosted sidewalk, I noted an unnatural purple light emanating from a half-basement window of Macmillan Hall, built in 1793 (having been the encampment of Alexander Hamilton during the Whiskey Rebellion) and the oldest university building west of the Alleghenies. A few seconds after observing the shining, I heard a high-pitched, unnatural, inhuman banshee scream—some kind of poltergeist cry—and being substantially thinner in that year I was able to book it quickly back to my dorm. Or in 2007 while I was living in Scotland, when I toured the cavernous subterranean vaults underneath the South Bridge between the Old and New towns of Edinburgh, and I saw a young chav, who decided to make obscene hand gestures within a cairn that the tour guide assured us had “evil trapped within it,” later break down as if he was being assaulted by some unseen specter. Then there was the antebellum farm house in the Shenandoah Valley that an ex-girlfriend lived in, one room being so perennially cold and eerie that nobody who visited ever wanted to spend more than a few minutes in it.
A haunted space in a haunted land where something more elemental than intellect screams at you that something cruel happened there.

Paranormal tales are popular, even among those who’d never deign to believe in something like a poltergeist, because they speak to the ineffable that we feel in those arm-hair-raised, scalp-shrinking, goose-bumped moments when we can’t quite fully explain what we felt, or heard, or saw. I might not actually believe in ghosts, but when I hear the dribbling of a basketball down an empty and dark hall, I’m not going to stick around to find out what it is. No solitary person is ever fully a skeptic when they’re alone in a haunted house. Count me on the side of science journalist Mary Roach, who in Spook: Science Tackles the Afterlife writes that “I guess I believe that not everything we humans encounter in our lives can be neatly and convincingly tucked away inside the orderly cabinetry of science. Certainly, most things can… but not all. I believe in the possibility of something more.” Haunting is by definition ambiguous—if with any certainty we could say that the supernatural was real it would, I suppose, simply be the natural.

Franco-Bulgarian philosopher Tzvetan Todorov formulated a critical model of the supernatural in his study The Fantastic: A Structural Approach to a Literary Genre, in which he argued that stories about unseen realms could be divided between the “uncanny” and the “marvelous.” The former are narrative elements that can ultimately be explained rationally, i.e., supernatural plot points that prove to be dreams, hallucinations, drug trips, hoaxes, illusions, or anything unmasked by the Scooby-Doo gang. The latter are things that are actually occult, supernatural, divine. When it’s unclear whether a given incident in a story is uncanny or marvelous, then it’s in that in-between space of the fantastic, which is the same place where any ghostly experience must honestly be categorized. “The fantastic is that hesitation experienced by a person who knows only the laws of nature, confronting an apparently supernatural event,” writes Todorov, and that is a succinct description of my Drown anomaly. Were it to be simply uncanny, then I suppose my spectral fears would have been assuaged if upon my ascent I had found a group of living young men playing impromptu pick-up basketball. For that experience to be marvelous, I’d have to know beyond any doubt that what I heard were actual spirits. As it is, it’s the uncertainty of the whole event—the strange, spooky, surreal ambiguity—that makes the incident fantastic. “What I’m after is proof,” writes Roach. “Or evidence, anyway—evidence that some form of disembodied consciousness persists when the body closes up shop. Or doesn’t persist.” I’ve got no proof or disproof either, only the distant memory of sweaty palms and a racing heart.

Ghosts may haunt chambers, but they also haunt books; they might float through the halls of Drown, but they even more fully possess the books that line that building’s halls. Traditional ghosts animate literature, from the canon to the penny dreadful, including what the Victorian critic Matthew Arnold grandiosely termed the “best which has been thought and said” as well as lurid paperbacks with their garish covers. We’re so obsessed with something seen just beyond the field of vision that vibrates at a frequency that human ears can’t quite detect—from Medieval Danish courts to the Overlook Hotel, Hill House to the bedroom of Ebenezer Scrooge—that we’re perhaps liable to wonder if there is something to ghostly existence. After all, places are haunted, lives are haunted, stories are haunted. Such is the nature of ghosts; we may overlook their presence, their flitting and meandering through the pages of our canonical literature, but they’re there all the same (for a place can be haunted whether you notice that it is or not).

How often do you forget that the work that is the greatest in the language is basically a ghost story? William Shakespeare’s Hamlet is fundamentally a good old-fashioned yarn about a haunted house (in addition to being a revenge tragedy and pirate tale). The famed soliloquy of the Danish prince dominates our cultural imagination, but the most cutting bit of poetry is the eerie line that begins the play: “Who’s there?” Like any good supernatural tale, Hamlet begins in confusion and disorientation, as the sentries Marcellus and Bernardo patrolling Elsinore’s ramparts first espy the silent ghost of the murdered king, with the latter uttering the shaky two-word interrogative. Can you imagine being in the audience, sometime around 1600 when it was a widespread belief that there are more things in heaven and earth than can be dreamt of in our philosophies, and hearing the quivering question asked in the darkness, faces illuminated by tallow candle, the sense that there is something just beyond our experience that has come from beyond? The status of Hamlet’s ghost is ambiguous; some critics have interpreted the specter as a product of the prince’s madness, others claim that the spirit is clearly real. Such uncertainty speaks to what’s fantastic about the ghost, as ambiguity haunts the play. Notice that Bernardo doesn’t ask “What’s there?” His question is phrased towards a personality with agency, even as the immaterial spirit of Hamlet’s dead father exists in some shadow-realm between life and death.

A ghost’s status was no trifling issue—it got to the core of salvation and damnation. Protestants didn’t believe that souls could wander the earth; they would either be rewarded in heaven or punished in hell, while ghosts must necessarily be demons. Yet Shakespeare’s play seems to make clear that Hamlet’s father has indeed returned, perhaps as an inhabitant of that way station known as purgatory, that antechamber to eternity whereby the ghost can ascend from the Bardo to skulk around Elsinore for the space of a prologue. Of course, when Shakespeare wrote Hamlet, ostensibly good Protestant that he was, he should have held no faith in purgatory, that abode of ghosts being in large part what caused Luther to nail his theses to the door of Wittenberg’s Castle Church. When the final Thirty-Nine Articles of the Church of England were ratified in 1571 (three decades before the play’s premiere), it was Article 22 that declared belief in purgatory a “fond thing vainly invented, and grounded upon no warranty of scripture; but rather repugnant to the word of God.” According to Stephen Greenblatt’s argument from Hamlet in Purgatory, the ghost isn’t merely Hamlet’s father, but also a haunting from the not-so-distant Catholic past, which the official settlement had supposedly stripped away with rood screens and bejeweled altars. Elsinore’s haunting is not just that of King Hamlet’s ghost, but also of those past remnants that the reformers were unable to completely bury. Greenblatt writes that for Shakespeare purgatory “was a piece of poetry” drawn from a “cultural artery” whereby the author had released a “startling rush of vital energy.” There are a different set of ambiguities at play in Hamlet, not least of which is how this “spirit of health or goblin damned” is to be situated between orthodoxy and heresy. In asking “who” the ghost is, Bernardo is simultaneously asking what it is, where it comes from, and how such a thing can exist.
So simple, so understated, so arresting is the first line of Hamlet that I’m apt to say that Bernardo’s question is the great concern of all supernatural literature, if not all literature. Within Hamlet there is a tension between the idea of survival and extinction, for though the prince calls death the “undiscovered country from whose bourn no traveler returns,” he himself must know that’s not quite right. After all, his own father came back from the dead (a role that Shakespeare played himself).

Shakespeare’s ghoul is less ambiguous than those of Charles Dickens, for the ghost of Jacob Marley who visits Ebenezer Scrooge in A Christmas Carol is accused of simply being an “undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of underdone potato. There’s more of gravy than of grave about you, whatever you are!” Note that the querying nature of the final clause ends in an exclamation rather than a question mark, and there’s no asking who somebody is, now only what they are. Because A Christmas Carol has been filtered through George C. Scott, Patrick Stewart, Bill Murray, and the Muppets, there is a tendency to forget just how terrifying Dickens’s novel actually is. The ultimately repentant Scrooge and his visitations from a trinity of moralistic specters offer up visions of justice that are gothic in their capacity to unsettle. The neuter sprite that is the Ghost of Christmas Past with its holly and summer flowers; the hail-fellow-well-met Bacchus that is the Ghost of Christmas Present; and the grim memento mori visage of the reaper who is the Ghost of Christmas Yet to Come (not to mention Marley padlocked and in fetters). Dickens is in the mold of Dante, for whom haunting is its own form of retribution, a means of purging us of our iniquities and allowing for redemption. Andrew Smith writes in The Ghost Story 1840-1920: A Cultural History that “Dickens’s major contribution to the development of the ghost story lies in how he employs allegory in order to encode wider issues relating to history, money, and identity.” Morality itself—the awesome responsibility impinging on us every second of existence—can be a haunting. The literal haunting of Scrooge is more uncertain, for perhaps the specters are gestated from madness, hallucination, nightmare, or, as he initially said, indigestion. Dickens’s ghosts are ambiguous—as they always must be—but the didactic sentiment of A Christmas Carol can’t be.

Nothing is more haunting than history, especially a wicked one, and few tales are as cruel as that of the United States. Gild the national narrative all that we want, American triumphalism is a psychological coping mechanism. This country, born out of colonialism, genocide, slavery, is a massive haunted burial ground, and we all know that graveyards are where ghosts dwell. As Leslie Fiedler explained in Love and Death in the American Novel, the nation itself is a “gothic fiction, nonrealistic and negative, sadist and melodramatic—a literature of darkness and the grotesque.” America is haunted by the weight of its injustice; on this continent are the traces of the Pequot and Abenaki, Mohawk and Mohegan, Apache and Navajo whom the settlers murdered; in this country are the filaments of women and men held in bondage for three centuries, and from every tree hang those murdered by our American monstrosity. That so many Americans willfully turn away from this—the Faustian bargain that demands acquiescence—speaks not to the absence of haunting; to the contrary, it speaks of how we live among the possessed still, a nation of demoniacs. William Faulkner’s observation in Requiem for a Nun that “The past is never dead. It’s not even past” isn’t any less accurate for being so omnipresent. No author conveyed the sheer depth and extent of American haunting quite like Toni Morrison, who for all that she accomplished must also be categorized among the greatest authors of ghost stories. To ascribe such a genre to a novel like Morrison’s Beloved is to acknowledge that the most accurate depictions of our national trauma have to be horror stories if they’re to tell the truth. “Anything dead coming back to life hurts”—there’s both wisdom and warning in Morrison’s adage.

Beloved’s plot is as chilling as an autumnal wind off of the Ohio River, the story of the formerly enslaved woman Sethe whose Cincinnati home is haunted by the ghost of her murdered child, sacrificed in the years before the Civil War to prevent her being returned to bondage in Kentucky. Canonical literature and mythology have explored the cruel incomprehension of infanticide—think of Euripides’s Medea—but Sethe’s not-irrational desire to send her “babies where they would be safe” is why Beloved’s tragedy is so difficult to contemplate. When a mysterious young woman named Beloved arrives, Sethe becomes convinced that the girl is the spirit of her murdered child. Ann Hostetler, in her essay from the collection Toni Morrison: Memory and Meaning, writes that Beloved’s ghost “was disconcerting to many readers who expected some form of social or historical realism as they encountered the book for the first time.” She argues, however, that the “representation of history as the return of the repressed…is also a modernist strategy,” whereby “loss, betrayal, and trauma is something that must be exorcized from the psyche before healing can take place.” Just because we might believe ourselves to be done with ghosts doesn’t mean that ghosts are done with us. Phantoms so often function as the allegorical because whether or not specters are real, haunting very much is. We’re haunted by the past, we’re haunted by trauma, we’re haunted by history. “This is not a story to pass on,” Morrison writes, but she understands better than anyone that we can’t help but pass it on.

Historical trauma is more occluded in Stephen King’s The Shining, though that hasn’t stopped exegetes from interpreting the novel about homicide in an off-season Colorado resort as being concerned with the dispossession of Native Americans, particularly in the version of the story as rendered by director Stanley Kubrick. The Overlook Hotel is built upon a Native American burial ground, and Navajo and Apache wall hangings are scattered throughout the resort. Such conjectures about The Shining are explored with delighted aplomb in director Rodney Ascher’s documentary Room 237 (named after the most haunted place in the Overlook), but as a literary critical question, there’s much that’s problematic in asking what any given novel, or poem, or movie is actually about; an analysis is on much firmer ground when we’re concerned with how a text works (a notably different issue). So, without discounting the hypothesis that The Shining is concerned with the genocide of indigenous Americans, the narrative itself tells a more straightforward story of haunting, as a bevy of spirits drive blocked, recovering alcoholic writer and aspiring family destroyer Jack Torrance insane. Kubrick’s adaptation is iconic—Jack Nicholson as Torrance (with whom he shares a first name) breaking through a door with an axe; his son, young Danny Torrance, escaping through a nightmarish, frozen hedge-maze of topiary animals; the crone in room 237 coming out of the bathtub; the blood pouring from the elevators; the ghostly roaring ’20s speakeasy with its chilling bartender; and whatever the man in the boar costume was. Also, the twins.

Still, it could be observed that the only substantiated supernatural phenomenon is the titular “Shining” that afflicts both Danny and the Overlook’s gentle caretaker, Dick Halloran. “A lot of folks, they got a little bit of shine to them,” Dick explains. “They don’t even know it. But they always seem to show up with flowers when their wives are feelin blue with the monthlies, they do good on school tests they don’t even study for, they got a good idea how people are feelin as soon as they walk into a room.” As hyper-empathy, the shining makes it possible that Danny is merely privy to a variety of psychotic breaks his father is having rather than those visions being actually real. While Jack descends further and further into madness, the status of the spectral beings’ existence is ambiguous (a point not everyone agrees on, however). It’s been noted that in the film, the appearance of a ghost is always accompanied by that of a mirror, so that The Shining’s hauntings are really manifestations of Jack’s fractured psyche. Narrowly violating my own warning concerning the question of “about,” I’ll note how much of The Shining is concerned with Jack’s alcoholism, the ghostly bartender a psychic avatar of all that the writer has refused to face. Not just one of the greatest ghost stories of the 20th century, The Shining is also one of the great novels of addiction, an exploration of how we can be possessed by our own deficiencies. Mirrors can be just as haunted as houses. Notably, when King wrote The Shining, he was in the midst of his own full-blown alcoholism, so strung out he barely remembers writing doorstopper novels (he’s now been sober for more than 30 years). As he notes in The Shining, “We sometimes need to create unreal monsters and bogies to stand in for all the things we fear in our real lives.”

If Hamlet, A Christmas Carol, Beloved, and The Shining report on apparitions, then there are novels, plays, and poems that are imagined by their creators as themselves being chambers haunted by something in between life and death. Such a conceit offers an even more clear-eyed assessment of what’s so unsettling about literature—this medium, this force, this power—capable of affecting what we think, and see, and say as if we ourselves were possessed. As a trope, haunted books literalize a profound truth about the written word, and uneasily push us towards acknowledging the innate spookiness of language, where the simplest of declarations is synonymous with incantation. Robert W. Chambers’s collection of short stories The King in Yellow conjures one of the most terrifying examples of a haunted text, wherein an imaginary play that shares the title of the book is capable of driving its audience to pure madness. “Strange is the night where the black stars rise, /And strange moons circle through the skies,” reads verse from the play; innocuous, if eerie, though it’s in the subtlety that the demons get you. Chambers would go on to influence H.P. Lovecraft, who conceived of his own haunted book in the form of the celebrated grimoire The Necronomicon, which he explains was “Composed by Abdul Alhazred, a mad poet of Sanaa in Yemen, who was said to have flourished during the period of the Ommiade caliphs, circa 700 A.D.,” and who rendered into his secret book the dark knowledge of the elder gods who were responsible for his being “seized by an invisible monster in broad daylight and devoured horribly before a large number of fright-frozen witnesses.” Despite the Necronomicon’s fictionality, there are a multitude of occultists who’ve claimed over the years that Lovecraft’s haunted volume is based in reality (you can buy said books online).

Then there are the works themselves which are haunted. Shakespeare’s Macbeth is the most notorious example, its themes of witchcraft long lending it an infernal air, with superstitious directors and actors calling it the “Scottish play” in lieu of its actual title, lest some of the spells within bring ruin to a production. Similar rumors dog Shakespeare’s contemporary Christopher Marlowe and his play Doctor Faustus, with a tradition holding that the incantations offered upon the stage summoned actual demons. With less notoriety, the tradition of “book curses” was a foolproof way to guard against the theft of the written word—a practice that dates back as far as the Babylonians, but that reached its apogee during the Middle Ages, when scribes would affix to manuscript colophons warnings about what should befall an unscrupulous thief. “Whoever steals this Book of Prayer/May he be ripped apart by swine, /His heart splintered, this I swear, /And his body dragged along the Rhine,” writes Simon Vostre in his 1502 Book of Hours. To curse a book is perhaps different than a stereotypical haunting, yet both of these phenomena, not-of-this-world as they are, assume disembodied consciousness as manifest among otherwise inert matter; the curse is a way of extending yourself and your influence beyond your demise. It’s to make yourself a ghost, and it worked in leaving those books complete. “If you ripped out a page, you were going to die in agony. You didn’t want to take the chance,” Marc Drogin dryly notes in Anathema! Medieval Scribes and the History of Book Curses.

Not all writing is cursed, but surely all of it is haunted. Literature is a catacomb of past readers, past writers, past books. Traces of those who are responsible for creation linger among the words on a page; Shakespeare can’t hear us, but we can still hear him (and don’t ghosts wander through those estate houses upon the moors unaware that they’ve died?). Disenchantment has supposedly been our lot since Luther, or Newton, or Darwin chased the ghosts away, leaving behind this perfect mechanistic clockwork universe with no need for superfluous hauntings. Though like Hamlet’s father returned from a purgatory that we’re not supposed to believe in, we’re unwilling to acknowledge the specters right in front of us. Of all of the forms of expression that humanity has worked with—painting, music, sculpture—literature is the eeriest. Poetry and fiction are both incantation and conjuration, the spinning of specters and the invoking of ghosts; it is very literally listening to somebody who isn’t there, and might not have been for a long while. All writing is occult, because it’s the creation of something from ether, and magic is simply a way of acknowledging that—a linguistic practice, an attitude, a critical method more than a body of spells. We should be disquieted by literature; we should be unnerved. Most of all, we should be moved by the sheer incandescent amazement that such things as fiction, and poetry, and performance are real. Every single volume upon the shelves of Drown, every book in an office, every thumbed and underlined play sitting on a desk, is more haunted than that building. Reader, if you seek enchantments, turn to any printed page. If you look for a ghost, hear my voice in your own head.


The World Is All That Is the Case


“Well, God has arrived. I met him on the 5:15 train. He has a plan to stay in Cambridge permanently.” —John Maynard Keynes in a letter to his wife describing Ludwig Wittgenstein (1929)

Somewhere along the crooked scar of the eastern front, during those acrid summer months of the Brusilov Offensive in 1916, when the Russian Empire pierced into the lines of the Central Powers and perhaps more than one million men would be killed from June to September, a howitzer commander stationed with the Austrian 7th Army would pen gnomic observations in a notebook, having written a year before that the “facts of the world are not the end of the matter.” Among the richest men in Europe, the 27-year-old had the option to defer military service, and yet an ascetic impulse compelled Ludwig Wittgenstein into the army, even though he lacked any patriotism for the Austro-Hungarian cause. Only five years before his trench ruminations would coalesce into 1921’s Tractatus Logico-Philosophicus, and the idiosyncratic contours of Wittgenstein’s thinking were already obvious, scribbling away as incendiary explosions echoed across the Polish countryside and mustard gas wafted over fields of corpses. “When my conscience upsets my equilibrium, then I am not in agreement with something. But is this? Is it the world?” he writes. Wittgenstein is celebrated and detested for this aphoristic quality, with pronouncements offered as if directly from the Sibylline grove. “Philosophy,” Wittgenstein argued in the posthumously published Culture and Value, “ought really to be written only as poetic composition.” In keeping with its author’s sentiment, I’d claim that the Tractatus is less the greatest philosophical work of the 20th century than it is one of the most immaculate volumes of modernist poetry written in the past hundred years.

The entire first chapter is only seven sentences, and can be arranged as a stanza and read for its prosody just as easily as a logician can analyze it for rigor:

The world is all that is the case.

The world is the totality of facts, not of things.

The world is determined by the facts, and by their being all the facts.

For the totality of facts determines what is the case, and also whatever is not the case.

The facts in logical space are the world.

The world divides into facts.

Each item can be the case or not the case while everything else remains the same.

Its repetition unmistakably evokes poetry: the use of anaphora with “The world” at the beginning of the first three lines (and then again at the start of the fifth); the way in which each sentence builds to a crescendo of increasing length, starting with a simple independent clause, moving to a trio of lines composed of independent and dependent clauses, hitting a peak in the exact middle of the stanza, and then returning to independent clauses, although the final line is the second-longest sentence in the poem. Then there is the diction, the reiteration of certain abstract nouns in place of concrete images—“world,” “facts,” “things.” In Wittgenstein’s thought these have definite meanings, but in a general sense they’re also words that are pushed to an extreme of conceptual intensity. They are as vague as is possible, while still connoting a definite something. If Wittgenstein mentioned red wheelbarrows and black petals, it might more obviously read as poetry, but what he’s doing is unique; he’s building verse from the constituent atoms of meaning, using the simplest possible concepts that could be deployed. Finally, the inscrutable nature of Wittgenstein’s pronouncements is what gives him such an oracular aura. If the book is confusing, that’s partially the point. It’s not an argument, it’s a meditation, a book of poetry that exists to do away with philosophy.

Published a century ago this spring, the Tractatus is certainly one of the oddest books in the history of logic, structured in an unconventional outline of unspooling pronouncements offered without argument, as well as a demonstration of philosophy’s basic emptiness, and thus the unknowability of reality. All great philosophers claim that theirs is the work that demolishes philosophy, and Wittgenstein is only different in that the Tractatus actually achieves that goal. “Most of the propositions and questions to be found in philosophical works are not false but nonsensical,” writes Wittgenstein.  “Consequently, we cannot give any answer to questions of this kind,” where “of this kind” means all of Western philosophy. What results is either poetry transubstantiated into philosophy or philosophy converted into poetry, with the Tractatus itself a paradox, a testament to language that shows the limits of language, where “anyone who understands me eventually recognizes… [my propositions] as nonsensical…He must, so to speak, throw away the ladder after he has climbed up it.” The Tractatus is a self-immolating book, a work that exists to demonstrate its own futility in existing. At its core are unanswerable questions of silence, meaninglessness, and unuttered poetry. The closest that Western philosophy has ever come to the Tao.

Of the Viennese Wittgensteins, Ludwig was raised in an atmosphere of unimaginable wealth. As a boy, the salons of the family mansions (there were 13 in the capital alone) were permeated with the music of Gustav Mahler and Johannes Brahms (performed by the composers themselves), the walls were lined with commissioned golden-shimmer paintings by Gustav Klimt, and the rocky bespoke sculptures of Auguste Rodin punctuated their courtyards. “Each of the siblings was made exceedingly rich,” writes Alexander Waugh in The House of Wittgenstein (and he knows about difficult families), “but the money, to a family obsessed with social morality, brought with it many problems.” Committed to utmost seriousness, dedication, and genius, the Wittgensteins were a cold family, the children forced to live up to the exacting standards of their father Karl Otto Clemens Wittgenstein. Ludwig’s father was an iron man, the Austrian Carnegie, and the son was indulged with virtually every privilege imaginable in fin de siècle Vienna. His four brothers were to be trained for industry, and to be patrons of art, music, poetry, and philosophy, with absolutely no failure in any regard to be countenanced. Only a few generations from the shtetl, the Wittgensteins had assimilated into gentile society, most of them converting to Catholicism, along with the few odd Protestants; Ludwig’s grandfather even had the middle name “Christian” as if to underscore their new position. Wittgenstein had a life-long ambivalence about his own Jewishness—even though three of his four grandparents were raised in the faith—and he had an attraction to a type of post-theological mystical Christianity, while he also claimed that his iconoclastic philosophy was “Hebraic.”

Even more ironically, or perhaps uncannily, Wittgenstein was only the second most famous graduate of the Realschule in Linz; the other student was Adolf Hitler. There’s a class photograph from 1905 featuring both of them when they were 16. As James Klagge notes in Wittgenstein: Biography and Philosophy, “an encounter with Wittgenstein’s mind would have created resentment and confusion in someone like Hitler,” while to great controversy (and thin evidence) Kimberley Cornish in The Jew of Linz claims that the philosopher had a profound influence on the future dictator, inadvertently inspiring the latter’s antisemitism. Strangely, like many assimilated and converted Jews within Viennese society, a casual antisemitism prevailed among the Wittgensteins. Ludwig would even be attracted to the writings of the pseudo-philosopher Otto Weininger, who in his book Sex and Character promulgated a notoriously self-hating antisemitic and misogynistic position, deploring modernity as the “most effeminate of all ages” (the author would ultimately commit suicide in the house where Beethoven had lived as an act of Völkisch sacrifice). When promoting the book, Wittgenstein maintained that he didn’t share in Weininger’s views, but rather found the way the writer was so obviously wrong interesting. Jewishness was certainly not to be discussed in front of the Wittgenstein paterfamilias, nor was anything that to their father reeked of softness, gentleness, or effeminacy, including Ludwig’s bisexuality, which he couldn’t express until decades later. And so at the risk of indulging an armchair version of that other great Viennese vocation of psychoanalysis, Wittgenstein made the impossibility of being able to say certain things the center of his philosophy.
As Brahms remembered, the family acted chillily “towards one another as if they were at court.” Of the five brothers—Rudi drank a mixture of cyanide and milk while in a Berlin cabaret in 1922, distraught over his homosexuality and his father’s rejection; Kurt shot himself in the dwindling days of the Great War after his troops defied him; and Hans, the oldest and a musical prodigy, presumably drowned himself in Chesapeake Bay while on an American sojourn in 1902—only Paul and Ludwig avoided suicide. There were economic benefits to being a Wittgenstein, but little else.

Austere Ludwig—a cinema-handsome man with a personality somehow both dispassionate and intense—tried to methodically shuffle off his wealth, which had hung from his neck along with the anchor of respectability. As it was, eventually the entire fortune would be commandeered by the Nazis, but before that Wittgenstein dispensed with his inheritance literally. When his father died in 1913, Wittgenstein began anonymously sending large sums of money to poets like Rainer Maria Rilke, whose observation in a 1909 lyric that “I am so afraid of people’s words./They describe so distinctly everything” reads almost as a gloss on the Tractatus. With his new independence, Wittgenstein moved to a simple log cabin on a Norwegian fjord where he hoped to revolutionize logic. Attracted towards the austere, this was the same Wittgenstein who in 1923, after the Tractatus had been published, lodged above a grocer in rural Austria and worked as a school teacher, with the visiting philosopher Frank Ramsey describing one of the richest men in Europe as living in “one tiny room, whitewashed, containing a bed, washstand, small table and one hard chair and that is all there is room for. His evening meal which I shared last night is rather unpleasant coarse bread, butter and cocoa.” Monasticism served Wittgenstein, because he’d actually accomplish that task of revolutionizing philosophy. From his trench meditations while facing down the Russians—where he ironically carried only two books, Fyodor Dostoevsky’s The Brothers Karamazov and Leo Tolstoy’s The Gospel in Brief—he birthed the Tractatus, holding to Zossima’s commandment that one should “Above all, avoid falsehood, every kind of falsehood.” The result would be a book whose conclusions were completely true without being real. Logic pushed to the extremes of prosody.

The Tractatus was the only complete book Wittgenstein published in his lifetime, and the slender volume is composed of a series of propositions arranged within one another like an onion. Its seven main propositions are asserted axiomatically, their self-evidence stated without argument or equivocation—a book not of examples or evidence, but of sentences. A Euclidean project that reads like hermetic poetry, with claims like “A logical picture of facts is a thought,” which in its poetic abstraction is evocative of something like John Keats’s “Beauty is truth, truth beauty.” Part of its literary quality is the way in which these claims are presented as crystalline abstractions, appearing as timeless refugees from eternity, bolstered not by recourse to anything but themselves (indeed the Tractatus contains no quotes from other philosophers, and virtually no references). His concern was the relationship between language and reality, how logic is able (or not able) to offer a picture of the world, and his conclusions circumscribe philosophy—he solves all metaphysical problems by demonstrating that they are meaningless. “Philosophy aims at the logical clarification of thoughts,” writes Wittgenstein. “Philosophy is not a body of doctrine but an activity… essentially of elucidations.” All of the problems of philosophy—the paradoxes and metaphysical conundrums, the ethical imperatives and the aesthetic judgments—are pseudo-problems. Philosophy exists not to do what natural science does; it doesn’t explain reality, it only clarifies language. When you come to Ludwig Wittgenstein on the road, you must kill him. The knife that you use is entitled the Tractatus, and he’ll hand it to you first.

When Wittgenstein arrived at the Cambridge University office of the great British philosopher Bertrand Russell in 1911, he had no formal training. So scant was his knowledge that Wittgenstein couldn’t pronounce some of the philosophers’ names correctly. Biographer Ray Monk maintains in Ludwig Wittgenstein: The Duty of Genius that his subject had never even read Aristotle (despite sharing an affinity with him). And yet a few months into their studies together, Russell would declare “I love him & feel he will solve the problems that I am too old to solve.” Though after the Austrian had argued that metaphysical problems are simply an issue of linguistic confusion, the elder thinker would despair that “I could not hope ever again to do fundamental work in philosophy.” At the time that Wittgenstein met with Russell, he was studying aeronautical engineering at the University of Manchester, pushed into a practical field by his father. During his time there he invented several different metal airplane propeller blades exemplary in their ingenuity, but he was unfulfilled and despondent. He had come to believe that only philosophy could cure his spiritual malaise, and so Wittgenstein spent three years at Cambridge, where he travelled in intellectual circles that included the philosopher G.E. Moore and the economist John Maynard Keynes. Finally, he was also able to live openly with a lover, a psychologist named David Pinsent. He would dedicate the Tractatus to Pinsent, who was killed, with bitter irony, in a training airplane crash during the war the two had fought on opposite sides of.
After his return to Austria, Wittgenstein worked a multitude of disparate jobs—he was a school-teacher in the Alps, unpopular for administering corporal punishment; he was a gardener in a monastery (and inquired about taking vows); and he was an architect of uncommon brilliance, designing a modernist masterpiece for his sister called the Haus Wittgenstein, which in its alabaster parsimony and cool rectilinear logic recalls the Bauhaus. When that building was completed in 1929, Wittgenstein finally returned to Cambridge, where he would be awarded a PhD even though he took no courses and sat for no exams. Russell had recognized that not all genius need be constrained in the seminar room. And still, after his successful defense, Wittgenstein would clap his advisor on the shoulder and say “Don’t worry, I know you’ll never understand it.” Years later Russell would remark that he had only produced snowballs—Wittgenstein had generated avalanches.

Hubris aside, Wittgenstein was dogged by a not unfounded fear that the Tractatus would be misinterpreted, not least of all by analytical philosophers like Russell who valorized logic as holy writ. Twentieth-century philosophy has long suffered a schism between two broadly irreconcilable perspectives on what the discipline even exists to do. There are the analytical philosophers like Russell, Gottlob Frege, G.E. Moore, and so on (including, technically, Wittgenstein), who see the field as indistinguishable from logic, to the point where its practice shares more in common with mathematics than Socrates. Largely focused in the United Kingdom and the United States, analytical philosophers are, as Russell writes in his A History of Western Philosophy (the work which secured his Nobel Prize in Literature), unified in a sense of “scientific truthfulness, by which I mean the habit of basing our beliefs upon observations and inferences as impersonal…as is possible for human beings.” Continental philosophy, however, is much more concerned with the traditional metaphysical and ethical questions which we associate with the discipline, asking what the proper way to live is, or how we find meaning in life. Associated with scholarly work in Europe, largely in France and Germany, a primary question to a continental philosopher like Martin Heidegger could be, as he writes in What Is Metaphysics?, “Why are there things at all, and why not nothing? This is the question.” To Russell that’s not a question at all, it’s pure nonsense. Wittgenstein would perhaps also see it as meaningless—though not at all in the same way as his advisor, and that makes all the difference. As with his interpretation of Weininger, it’s how things are nonsense that’s interesting.

In the city of Wittgenstein’s birth, the dominant philosophical movement at the time he published the Tractatus was a gathering of analytical philosophers known as the Vienna Circle. Composed of figures like Moritz Schlick, Rudolf Carnap, and Kurt Gödel, the Vienna Circle argued something not dissimilar to what Wittgenstein had claimed, understanding metaphysical, ethical, and aesthetic conjectures as nonsense. At the core of much of their work was something called the Verification Principle, an argument that the only propositions that are sensical are from either deduction or induction, either from mathematics or empiricism, and that everything else can be discarded (that the Verification Principle would fail its own test wasn’t important). But if the Vienna Circle agreed with Wittgenstein about how much of traditional philosophy was nonsense, the latter invested that nonsense with a sublimity they were blind toward. Perhaps the earliest to misinterpret the Tractatus, of which the Vienna Circle were nonetheless avid readers, Schlick invited Wittgenstein to address them in 1926. When he arrived, rather than giving a lecture, Wittgenstein turned a chair to the wall, began to rock back and forth, and recited the verse of the Bengali poet Rabindranath Tagore. An arresting and moving scene: handsome Wittgenstein in rumpled blue shirt and tweed jacket, a man whom the American philosopher Norman Malcolm would describe as possessing a face that was “lean and brown…aquiline and strikingly beautiful, his head was covered with a curly mass of brown hair,” davening as was the ritual of his ancestors, perhaps chanting Tagore’s line from The Gardener that “We do not stray out of all words in the ever silent,” a disposition more in keeping with the Tractatus than anything written by the audience.
Carnap, to his credit, quickly realized their mistake, recalling that Wittgenstein’s point of view was “much more similar to those of a creative artist than to those of a scientist; one might almost say, similar to those of a religious prophet or a seer.”

Though classified as one of the most important logicians of the 20th century, Wittgenstein’s earlier thought is more poetry than philosophy; the curt, aphoristic pronouncements of the Tractatus deserving to be studied alongside Rilke and Paul Celan. Both in form and content, purpose and expression, the Tractatus is lyric verse—it’s poetry that gestures beyond poetry. More than just poetry, it’s a purposefully self-defeating logical argument that exonerates the poetic above the philosophical, for if the Vienna Circle thought that all of the most interesting things were nonsense, then Wittgenstein knew that faith was the very essence of being, even if we could say nothing definitive about it. As he writes in the Tractatus, “even when all possible scientific questions have been answered, the problems of life remain completely untouched. Of course, there are then no questions left, and this itself is the answer.” Something Taoist about Wittgenstein, his utterances evocative of Lao Tzu’s contention in the Tao Te Ching that the “name that can be named is not the eternal name.” With the Tractatus, Wittgenstein mounted the most audacious ars poetica, a book of verse written not in rhythm and meter but in logic, where the purpose was to show the futility of logic itself. “Whereof one cannot speak, thereof one must remain silent”—its most famous line, and the entirety of the last chapter. Things must be left unsaid because they’re unable to be said—but literally everything important is unable to be said. Understood as a genius, interpreted as a rationalist, treated as an enigma, and appearing as a mystic, Wittgenstein was ultimately a poet. “The limits of my language are the limits of my world,” writes Wittgenstein, and this is sometimes misunderstood as consigning everything but logic to oblivion, but the opposite is true. In a world limited by language, it is language itself that constitutes the world. Poetry, rather than logic, is all that is the case.

On Memory and Literature


My grandmother’s older sister Pauline Stoops, a one-room school teacher born in 1903, had lived in a homestead filled with poetry, which sat on a bluff overlooking the wide and brown Mississippi, as it meandered southward through Hannibal, Missouri. Pauline’s task was to teach children aged six to seventeen in history, math, geography, and science; her students learned about the infernal compromise which admitted Missouri into the union as a slave state and they imagined when the Great Plains were a shallow and warm inland sea millions of years ago; they were taught the strange hieroglyphics of the quadratic equation and the correct whoosh of each cursive letter (and she prepared them oatmeal for lunch every day as well). Most of all, she loved teaching poetry — the gothic morbidities of Edgar Allan Poe and the sober patriotism of John Greenleaf Whittier, the aesthetic purple of Samuel Taylor Coleridge and the mathematical perfection of Shakespeare. A woman who, when I knew her, was given to extemporaneous recitations of memorized Walt Whitman. She lived in the Midwest for decades, until she followed the rest of her mother’s family eastward to Pennsylvania, her siblings having moved to Reading en masse during the Depression, tracing backwards a trail that had begun with distant relations. Still, Hannibal remained one of Pauline’s favorite places, in part because of its mythic role, this town where Mark Twain had imagined Huck Finn and Tom Sawyer playing as pirates along the riverbank.

“I had been to school most all the time and could spell and read and write just a little,” recounts the titular protagonist in The Adventures of Huckleberry Finn, “and could say the multiplication table up to six times seven is thirty-five, and I don’t reckon I could ever get any further than that if I was to live forever. I don’t take no stock in mathematics, anyway.” Huck’s estimation of poetry is slightly higher, even if he doesn’t read much. He recalls coming across some books while visiting the wealthy Grangerfords, including John Bunyan’s Pilgrim’s Progress that was filled with statements that “was interesting, but tough” and another entitled Friendship’s Offering that was “full of beautiful stuff and poetry; but I didn’t read the poetry.” Had Huck been enrolled in Mrs. Stoops’ classroom he would have learned verse from a slender book simply entitled One Hundred and One Poems with a Prose Supplement, compiled by anthologizer Roy Cook in 1916. When clearing out Pauline’s possessions with my grandma, we came across a 1920 edition of Cook’s volume, with favorite lines underlined and pages dog-eared, scraps of paper now a century old used as bookmarks. Cook’s anthology was incongruously printed by the Cable Piano Company of Chicago (conveniently located at the corner of Wabash and Jackson), and included advertisements for their Kingsbury and Conover models, to which they promise student progress even for those with only “a feeble trace of musical ability,” proving that in the United States Mammon can take pilgrimage to Parnassus.

The flyleaf announced that it was “no ordinary collection,” being “convenient,” “authoritative,” and most humbly “adequate,” while emphasizing that at fifteen cents its purchase would “Save many a trip to the Public Library, or the purchase of a volume ten to twenty times its cost.” Some of the names are familiar — William Wordsworth, Alfred Lord Tennyson, Oliver Wendell Holmes, Rudyard Kipling (even if many are less than critically fashionable today). Others are decidedly less canonical — Francis William Bourdillon, Alexander Anderson, Edgar A. Guest (the last of whom wrote pablum like “You may fail, but you may conquer – / See it through!”). It goes without saying that One Hundred and One Poems with a Prose Supplement is overwhelmingly male and completely white. Regardless, there’s a charm to the book, from the antiquated Victorian sensibility to the huckster commercialism. Even more strange and moving was my grandmother’s reaction to this book bound with a brown hardcover made crooked by ten decades of heat and moisture, cold and entropy, the pages inside turned the texture of fall sweetgum and ash leaves as they drop into the Mississippi. When I mentioned Henry Wadsworth Longfellow, my grandmother (twenty years Pauline’s junior) began to perfectly recite from memory “By the shores of Gitche Gumee, /By the shining Big-Sea-Water,” going on for several stanzas of Longfellow’s distinctive percussive trochaic tetrameter. No doubt she hadn’t read “The Song of Hiawatha” in decades, perhaps half-a-century, and yet the rhythm and meter came back to my grandmother as if she was the one looking at the book and not me.

My grandmother’s formal education ended at Centerville High School in Mystic, Iowa in 1938; I’ve been fortunate enough to go through graduate school and receive an advanced degree in literature. Of the two of us, only she had large portions of poetry memorized; I on the other hand have a head that’s full of references from The Simpsons. If I’m able to recall more than a quarter of a single Holy Sonnet by John Donne I’d be amazed, yet I have the entirety of the Steve Miller Band’s “The Joker” memorized for some reason. Certainly, I have bits and pieces here and there, “Death be not proud” or “Batter my heart three-personed God” and so on, but when it comes to making such verse part of my bones and marrow, I find that I’m rather dehydrated. Memorization was once central to pedagogy, when it was argued that committing verse to instantaneous recall was a way of preserving cultural legacies, that it trained students in rhetoric, and that it was a means of building character. Something can seem pedantic about such poetry recitation; the province of fussy antiquarians, apt to start unspooling long reams of Robert Burns or Edward Lear in an unthinking cadence, readers who properly hit the scansion, but where the meaning comes out with the wrong emphasis. Still, such an estimation can’t help but leave the flavor of sour grapes on my tongue where poetry should be, and so the romanticism of the practice must be acknowledged.

Writing exists so that we don’t have to memorize, and yet there is something tremendously moving about recalling words decades after you first encountered them. Memorization’s consequence, writes Catherine Robson in Heart Beats: Everyday Life and the Memorized Poem, was that “these verses carried the potential to touch and alter the worlds of the huge numbers of people who took them to heart.” Books can burn, but as long as a poem endures in the consciousness of a person, that person is in possession of a treasure. “When the topic of verse memorization is raised today,” writes Robson, “the invocation is often couched within a lament.” Now we’re all possessors of personal supercomputers that can instantly connect us to whole libraries — there can seem little sense in making iambs and trochees part of one’s soul. Now the soul has been outsourced to our smartphones, and we’ve all become cyborgs, carrying our memories in our pockets rather than our brains. But such melancholy over forgetfulness has an incredibly long history. Socrates formulated the most trenchant of those critiques, with Plato noting in the Phaedrus that his teacher had once warned that people will “cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” It’s important to consider where Socrates places literature: “within” — like heart, brain, or spleen — rather than as some dead thing discarded on inked reeds. According to Socrates, writing is idolatrous; the difference between memorization and the written word is the equivalent of that between reality and a painting. Though it must be observed that the only reason we care who Socrates happens to be is because Plato wrote his words down.

Poetry most evokes literature’s first role as a vehicle of memory: the tricks of prosody – alliteration and assonance; consonance and rhyme – endured because they’re amenable to quick recall. Not only do such attributes make it possible to memorize poetry, they facilitate its composition as well. For literature wasn’t first written on papyrus but rather in the mind, and that was the medium through which it was recorded for most of its immeasurably long history. Since the invention of writing, we’ve tended to think of composition as an issue of a solitary figure committing their ideas to the eternity of paper, but the works of antiquity were a collaborative affair. Albert Lord explains in his 1960 classic The Singer of Tales that “oral epic song is narrative poetry composed in a manner evolved over many generations by singers of tales who did not know how to write; it consists of the building of metrical lines and half lines by means of formulas and formulaic expressions and of the building of songs by the use of themes.” Lord had accompanied his adviser, the folklorist and classicist Milman Parry, to the Balkans in 1933 and then again in 1935, where they recorded the oral poetry of the largely illiterate Serbo-Croatian bards. They discovered that recitations were based in “formulas” that made remembering epics not only easier, but also made their performances largely improvisational, even if the contours of a narrative remained consistent. From their observations, Parry and Lord developed the “Oral-Formulaic Theory of Composition,” arguing that pre-literate epics could be mixed and matched in a live telling, by using only a relatively small number of rhetorical tropes, the atoms of spoken literature.

Some of these formulas — phrases like “wine-dark sea” and “rosy-fingered dawn” for example — are familiar to any reader of The Iliad and The Odyssey, and the two discovered that such utterances had a history in the Balkans and the Peloponnesus going back millennia. There’s an important difference between relatively recent works like Virgil’s The Aeneid (albeit composed two millennia ago) and the epics of Homer that predate it by at least eight centuries. When Virgil sat down to pen “I sing of arms and man,” he wasn’t actually singing. He was probably writing, while whoever it was — whether woman or man, women or men — that invoked “Sing to me of the man, Muse, the man of twists and turns” most likely did utter those words accompanied by a lyre. The Aeneid is a work of literacy, while those of Homer are works of orality, which is to say they were composed through memory. Evocatively, there is some evidence that the name “Homer” isn’t a proper noun. It may be an archaic Greek verb, a rough translation being “to speak,” or better yet “to remember.” There were many homers, each of them remembering their own unique version of such tales, until they were forgotten into the inert volumes of written literature.

Socrates’ fears aren’t without merit. Just as the ability to Google anything at any moment has made contemporary minds atrophied with relaxation, so too does literacy have an effect on recall. With no need to be skilled in remembering massive amounts of information, reading and writing made our minds surprisingly porous. From the Celtic fringe of Britain to the Indus Valley, from the Australian Outback to the Great Plains of North America, ethnographers recount the massive amounts of information which pre-literate peoples were capable of retaining. Poets, priests, and shamans were able to memorize (and adapt when needed) long passages by deft manipulation of rhetorical trope and mnemonic device. When literacy was introduced in such places, there was a marked decline in people’s ability to memorize. For example, writing in the journal Australian Geographer, linguist Nick Reid explains that the oral literature of the aboriginal Narrangga people contains narrative details which demonstrate an accurate understanding of the geography of the Yorke Peninsula some 12,500 years ago, before melting glaciers irrevocably altered the coastline. Three hundred generations of Narrangga have memorized and told tales of marshy lagoons which no longer exist, an uninterrupted chain of recitation going back an astounding thirteen millennia. Today, if every single copy of Jonathan Franzen’s The Corrections and David Foster Wallace’s Infinite Jest were to simultaneously vanish, who among us would be able to recreate those books?

It turns out that some folks have been able to train their minds to store massive amounts of language. Among pious Muslims, people designated as Hafiz have long been revered for their ability to memorize the 114 surahs of the holy Quran. Allah’s words are thus written into the heart of the reverential soul, so that language becomes as much a part of a person as the air which fills lungs or the blood which flows in veins. “One does not have to read long in Muslim texts,” writes William Graham in Beyond the Written Word: Oral Aspects of Scripture in the History of Religion, “to discover how the ring of the qur’anic text cadences the thinking, writing, and speaking of those who live with and by the Qur’an.” In the Islamic world, the memorization not just of some Longfellow or a few fragments of T.S. Eliot, but of an entire book, is still accomplished to a surprising degree. Then there are those who through some mysterious cognitive gift (or curse, depending on perspective) possess eidetic memory, and have the ability to commit entire swaths of text to retrieval without the need for mnemonic devices or formulas. C.S. Lewis could supposedly quote from memory any particular line of John Milton’s Paradise Lost that he was asked about; similar claims have been made about the critic Harold Bloom.

Prodigious recall need not be the purview of otherworldly savants alone, as people have used methods similar to those of a Hafiz or a Narrangga to consume a book. Evangelical minister Tom Meyer, also known as the “Bible Memory Man,” has memorized twenty books of scripture, while actor John Basinger used his stage skills to memorize all twelve books of Paradise Lost, with an analysis some two decades later demonstrating that he was still able to recite the epic with some 88% accuracy. As Lois Parshley elaborates in Nautilus, Basinger used personal associations of physical movement and spatial location to “deep encode” the poem; Milton, he says, is a “cathedral I carry around in my mind… a place that I can enter and walk around at will.” No other type of art is like this — you can remember what a painting looks like, you can envision a sculpture, but only music and literature can be preserved and carried with you, and the former requires skills beyond memorization. As a scholar I’ve been published in Milton Studies, but if Basinger and I were marooned on an island somewhere, or trapped in the unforgiving desert, only he would be in actual possession of Paradise Lost, while I sadly sputtered half-remembered epigrams about justifying the ways of God to man.

Basinger, who claims that he still loses his car keys all the time, was able to memorize twelve books of Milton by associating certain lines with particular movements, so that the thrust of an elbow may be man’s first disobedience, the kick of a leg being better to rule in hell than to serve in heaven. There is a commonsensical wisdom in understanding that memory has always been encoded in the body, so that our legs and arms think as surely as our brains do. Walter Ong explains in Orality and Literacy that “Bodily activity beyond mere vocalization is not… contrived in oral communication, but is natural and even inevitable.” Right now, I’m composing while stooped over, in the servile position of desk sitting, with pain in my back and a crick in my neck, but for the ancient bards of oral cultures the unspooling of literature would have been done through a sweep of the arms or the trot of a leg, motion and memory connected in a walk. Paradise Lost as committed to memory by Basinger was also a “cathedral,” a place that he could go to, and this is one of the most venerable means of memorizing massive amounts of writing. During the Renaissance, itinerant humanists used to teach the ars memoriae, a set of practical skills designed to hone memory. Chief among these tutors was the sixteenth-century Italian occultist, defrocked Dominican, and heretic Giordano Bruno, who took as students King Henry III of France and the Holy Roman Emperor Rudolf II (later he’d be burnt at the stake in Rome’s Campo de’ Fiori, though for unrelated reasons).

Bruno used many different methodologies, including mnemonics, associations, and repetitions, but his preferred approach was something called the method of loci. “The first step was to imprint on the memory a series of loci or places,” writes Dame Frances Yates in The Art of Memory. “In order to form a series of places in memory… a building is to be remembered, as spacious and varied a one as possible, the forecourt, the living room, bedrooms, and parlors, not omitting statues and other ornaments with which the rooms are decorated.” In a strategy dating back to Cicero and Quintilian, Bruno taught that the “images by which the speech is to be remembered… are then placed in imagination on the memorial places which have been memorized in the building,” so that “as soon as the memory… requires to be revived, all these places are visited in turn and the various deposits demanded of their custodians.” Bruno had his students build in their minds what are called “memory palaces,” architectural imaginings whereby a line of prose may be associated with an opulent oriental rug, a stanza of poetry with a blue Venetian vase upon a mantle, an entire chapter with a stone finishing room in some chateau; the candlesticks, fireplace kindling, cutlery, and tapestries each hinged to their own fragment of language, so that recall can be accessed through a simple stroll in the castle of your mind.

It all sounds very esoteric, but it actually works. Even today, competitive memorization enthusiasts (this is a real thing) use the same tricks that Bruno taught. Science journalist Joshua Foer recounts how these very same methods were instrumental in his winning the 2006 USA Memory Championship, storing poems in his mind by associating them with places as varied as Camden Yards and the East Wing of the National Gallery of Art, so that he “carved each building up into loci that would serve as cubbyholes for my memories.” The method of loci is older than Bruno, than even Cicero and Quintilian, and from Camden Yards and the National Gallery to Stonehenge and the Nazca Lines, spatial organization has been a powerful tool. Archeologist Lynne Kelly claims that many Neolithic structures actually functioned as means for oral cultures to remember text, arguing in Knowledge and Power in Prehistoric Societies: Orality, Memory, and the Transmission of Culture that “Circles or lines of stones or posts, ditches or mounds enclosing open space… serve as memory theaters beautifully.”

Literature is simultaneously vehicle, medium, preserver, and occasionally betrayer of memory. Just as our own recollections are more mosaic than mirror (gathered from small pieces that we’ve assembled as a narrative with varying degrees of success), so too does writing impose order on one thing after another. Far more than memorized lines, or associating stanzas with rooms, or any mnemonic trick, memory is the ether of identity, but it is fickle, changing, indeterminate, and unreliable. Fiction and non-fiction, poetry and prose, drama and essay — all are built with bricks of memory, but with a foundation set on wet sand. Memory is the half-recalled melody of a Mister Rogers’ Neighborhood song played for your son though you last heard it decades ago; it’s the way that a certain laundry detergent smells like Glasgow in the fall and a particular deodorant like Boston in the cool summer; how the crack of the bat at PNC Park brings you back to Three Rivers Stadium, and Jagged Little Pill always exists in 1995. And memory is also what we forget. Our identities are simply an accumulation of memories — some the defining moments of our lives, some half-present and only to be retrieved later, and some constructed after the fact.

“And once again I had recognized the taste of the crumb of madeleine soaked in her decoction of lime-flowers which my aunt used to give me,” Marcel Proust writes in the most iconic scene of Remembrance of Things Past, and “immediately the old gray house upon the street, where her room was, rose up like the scenery of a theater.” If novels are a means of excavating the foundations of memory, then Proust’s magnum opus is more associated with how the deep recesses of the mind operate than perhaps any other fiction. A Proustian madeleine, signifying all the ways in which sensory experiences trigger visceral, almost hallucinatory memories, has become a mainstay, even while most have never read Remembrance of Things Past. So universal is the phenomenon, the way in which the taste of Dr. Pepper can propel you back to your grandmother’s house, or Paul Simon’s Graceland can place you on the Pennsylvania Turnpike, that Proust’s madeleine has become the totem of how memories remain preserved in tastes, sounds, smells. Incidentally, the olfactory bulb of the brain, which processes odors, sits close to the hippocampus, where memories are consolidated, so that Proust’s madeleine has a neurological basis. Your madeleine need not be a delicately crumbed French cookie dissolving in tea; it could just as easily be a Gray’s Papaya waterdog, a Pat’s cheesesteak, or a Primanti Brothers sandwich (all of those work for me). Proust’s understanding of memory is sophisticated, for while we may humor ourselves into thinking that our experiences are recalled with verisimilitude, the reality is that we shuffle and reshuffle the past, we embellish and delete, and what’s happened to us can return as easily as it’s disappeared. “The uncomfortable reality is that we remember in the same way that Proust wrote,” argues Jonah Lehrer in Proust Was a Neuroscientist. “As long as we have memories to recall, the margins of those memories are being modified to fit what we know now.”

Memory is the natural subject of all novels, since the author composes from the detritus of her own experience, but also because the form is (primarily) a genre of nostalgia, of ruminating in the past (even an ostensibly invented one). Some works are more explicitly concerned with memory, their authors reflecting on its malleability, plasticity, and endurance. Consider Tony Webster in Julian Barnes’ The Sense of an Ending, ruminating on the traumas of his school years, noting that we all live with the assumption that “memory equals events plus time. But it’s all much odder than this. Who was it said that memory is what we thought we’d forgotten? And it ought to be obvious to us that time doesn’t act as a fixative, rather as a solvent.” Amnesia is the shadow version of memory, all remembrance haunted by that which we’ve forgotten. Kazuo Ishiguro’s parable of collective amnesia The Buried Giant imagines a post-Arthurian Britannia wherein “this land had become cursed with a mist of forgetfulness,” so that it’s “queer the way the world’s forgetting people and things from only yesterday and the day before that. Like a sickness come over us all.” Jorge Luis Borges imagines the opposite scenario in his short story “Funes the Memorious,” detailing the narrator’s friendship with a fictional Uruguayan boy who after a horse-riding accident is incapable of forgetting a single detail of his life. He can remember “every crevice and every molding of the various houses.” What’s clear, despite Funes being “as monumental as bronze,” is that if remembering is the process of building a narrative for ourselves, then ironically it requires forgetting. Funes’ consciousness is nothing but hyper-detail, and with no means to cull based on significance or meaning, it all comes to him as an inchoate mass, so that he was “almost incapable of ideas of a general, Platonic sort.”

Between the cursed amnesiacs of Ishiguro and the damned hyperthymesiac of Borges are Barnes’ aging characters, who like most of us remember some things while finally forgetting most of what’s happened. Tellingly, a character like Tony Webster does something which comes closest to writing — he preserves the notable stuff and deletes the rest. Funes is like an author who can’t bring himself to edit, and the Arthurian couple of Ishiguro’s tale are those who never put pen to paper in the first place. Lehrer argues that Proust “believed that our recollections were phony. Although they felt real, they were actually elaborate fabrications,” for we are always in the process of editing and reediting our pasts, making up new narratives in a process of revision that only ends with death. This is to say that memory is basically a type of composition — it’s writing. From an assemblage of things which happen to us — anecdotes, occurrences, traumas, intimacies, dejections, ecstasies, and all the rest — we impose a certain order on the past; not that we necessarily invent memories (though that happens), but rather that we decide which memories are meaningful, we imbue them with significance, and then we structure them so that our lives take on the texture of a narrative. We’re able to say that had we not been in the Starbucks near Union Square that March day, we might never have met our partner, or that if we hadn’t slept in and missed that job interview, we’d never have stayed in Chicago. “Nothing was more central to the formation of identity than the power of memory,” writes Oliver Sacks in The River of Consciousness, “nothing more guaranteed one’s continuity as an individual,” even as he acknowledged that “memories are continually worked over and revised and that their essence, indeed, is recategorization.” We’re each the roman à clef in the picaresque of our own minds, but bit characters in the novels written by others.

A century ago, the analytical philosopher Bertrand Russell wrote in The Analysis of Mind that there is no “logical impossibility in the hypothesis that the world sprang into existence five minutes ago, exactly as it then was, with a population that ‘remembered’ a wholly unreal past.” Like most metaphysical speculation there’s something a bit sophomoric about this, though Russell admits as much when he writes that “I am not here suggesting that the non-existence of the past should be entertained as hypothesis,” only that, speaking logically, nobody can fully “disprove the hypothesis.” This is a more sophisticated version of something known as the “Omphalos Argument” — a cagey bit of philosophical bookkeeping entertained since the eighteenth century — whereby evidence of the world’s “supposed” antiquity (fossils, geological strata, etc.) was seen as a devilish hoax, so that the relative youthfulness of the world’s age could be preserved alongside biblical inerrancy (the multisyllabic Greek word means “navel,” as in Eve and Adam’s bellybuttons). The five-minute hypothesis was entertained as a means of thinking about radical skepticism, where not only all that we see, hear, smell, taste, and touch are fictions, but our collective memories are a fantasy as well. Indeed, Russell is correct in a strictly logical sense; writing this at 4:52 P.M. on April 20th, 2021, I cannot rely on any outside evidence, or my own memories, or your memories, to deductively and conclusively prove with complete certainty that the universe wasn’t created at 4:47 P.M. on April 20th, 2021 (or by whatever calendar our manipulative robot-alien overlords count the hours, I suppose).

Where such a grotesque possibility errs is that it doesn’t matter in the slightest. In some ways, it’s already true; the past is no longer here and the future has yet to occur, and we’ve always been just created in this eternal present (whatever time we might ascribe to it). To remember is to narrate, to re-remember is still to narrate, and to narrate is to create meaning. Memories are who we are — the fundamental particles of individuality. Literature, then, is a type of cultural memory; a conscious thing whose neurons are words, and whose synapses are what authors do with those words. Writing is memory made manifest, a conduit for preserving our identity outside of the prison of our own skulls. There is a risk here, though. Memory fails all of us to varying degrees — some in a catastrophic way — and everyone is apt to forget most of what’s happened to them. “Memory allows you to have a sense of who you are and who you’ve been,” argues Lisa Genova in Remember: The Science of Memory and the Art of Forgetting. Those neurological conditions which “ravage the hippocampus” are particularly psychically painful, with Genova writing that “If you’ve witnessed someone stripped bare of his or her personal history by Alzheimer’s disease, you know firsthand how essential memory is to the experience of being human.” To argue that our memories are ourselves is dangerous, for what happens when our past slips away from view? Pauline didn’t suffer from Alzheimer’s, though in her last years she was afflicted by dementia. I no longer remember what she looked like, exactly, this woman alive for both Kitty Hawk and the Apollo mission. I can no longer recall what her voice sounded like. What exists once our memories are deleted from us, when our narratives have unraveled? What remains after that deletion is something called the soul. When I dream about Pauline, I see and hear her perfectly.


Elegy of the Walker


By the conclusion of Mildred Lissette Norman’s 2,200-mile hike of the Appalachian Trail in 1952—the steep snow-covered peaks of New Hampshire’s White Mountains; the autumnal cacophony of Massachusetts’ brown, orange, and red Berkshires; the verdant greens of New York’s Hudson Highlands and Pennsylvania’s Alleghenies; the misty roll of Virginia’s Blue Ridge; the lushness of North Carolina and Georgia’s Great Smoky Mountains—she would wear down the soles of her blue Sperry Top-Siders into a hatchwork of rubber threads, the rough canvas of the shoes ripping apart at the seams. “There were hills and valleys, lots of hills and valleys, in that growing up period,” Norman would recall. She had become the first woman to hike the trail in its entirety. The Top-Siders were lost to friction, but along with 28 additional pairs of shoes over the next three decades, she would also gain a new name—Peace Pilgrim. The former secretary would (legally) rechristen herself after a mystical experience somewhere in New England, convinced that she would “remain a wanderer until mankind has learned the way of peace.”

Peace Pilgrim’s mission began at the Rose Bowl Parade in 1953, gathering signatures on a petition to end the Korean War. From Pasadena she trekked over the Sierra Nevada, the hardscrabble southwest, the expansive Midwestern prairies, the roll of the Appalachians, and into the concrete forest of New York City. She gained spectators, acolytes, and detractors; there was fascination with this 46-year-old woman, wearing a simple blue tunic emblazoned in white capital letters with “Walking Coast to Coast for Peace,” her greying hair kept up in a bun and her pockets containing only a collapsible toothbrush, a comb, and a ballpoint pen. By the time she died in 1981, she had traversed the United States seven times. “After I had walked almost all night,” she recalled in one of the interviews posthumously collected into Peace Pilgrim: Her Life and Work in Her Own Words, “I came out into a clearing where the moonlight was shining down… That night I experienced the complete willingness, without any reservations, to give my life to something beyond myself.” It was the same inclination that compelled Abraham to walk into Canaan, penitents to trace Spain’s Camino de Santiago, and the whirling Mevlevi dervishes to traipse through the Afghan bush. It was an inclination toward God.

Something about the plodding of one foot after another, the syncopation mimicking the regularity of our heartbeat, the single-minded determination to get from point A to point B (wherever those mythic locations happen to be) gives walking the particular enchantments that only the most universal of human activities can have. Whether a stroll, jog, hike, run, saunter, plod, trek, march, parade, patrol, ramble, constitutional, wander, perambulation, or just plain walk, the universal action of moving left foot-right foot-left foot-right foot marks humanity indelibly, so common that it seemingly warrants little comment if you’re not a podiatrist. But when it comes to the subject, there are as many narratives as there are individual routes, for as Robert Macfarlane writes in The Old Ways: A Journey on Foot, “a walk is only a step away from a story, and every path tells.” Loath we should be to let such an ostensibly basic act pass without some consideration.

Rebecca Solnit writes in Wanderlust: A History of Walking that “Like eating or breathing, [walking] can be invested with wildly different cultural meanings, from the erotic to the spiritual, from the revolutionary to the artistic.” Walking is leisure and punishment, introspection and exploration, supplication and meditation, even composition. A tool for getting lost, both literally and figuratively, and for fully inhabiting our being, walking can empty out our selfhood; it is a mechanism for transmuting a noun into a verb, for transforming the walker into the walking. When a person has pushed themselves so that their heart pumps like a piston, so that they feel the sour burn of blisters and the chafing of denim, so that breathing’s rapidity is the only focus, then there is something akin to pure consciousness (or possibly I’m just fat). And of course, there is all that you can simply observe with that consciousness, unhampered by a screen, so that walking is “powerful and fundamental,” as Cheryl Strayed writes in her bestseller Wild: From Lost to Found on the Pacific Crest Trail, an account of hiking thousands of miles following the death of her mother and learning “what it was like to walk for miles with no reason other than to witness the accumulation of trees and meadows, mountains and deserts, streams and rocks, rivers and grasses, sunrises and sunsets… It seemed to me it had always felt like this to be a human in the wild.”

Maybe that sense of being has always attended locomotion, ever since a family of Australopithecus pressed their calloused heels into the cooling volcanic ash of Tanzania’s Laetoli some 3.7 million years ago. Discovered in 1976 by Mary Leakey’s team, the preserved footprints were the earliest example of hominid bipedalism. Two adults and a child, the traces suggested parents strolling with a toddler, as if they were Adam and Eve with either Cain or Abel. The Laetoli footprints are smaller than those of a modern human, but they lack the divergent toe of other primates, and they indicate that whoever left them moved from the heel of the foot to the ball, like most of us do. Crucially, there were no knuckle impressions left, so they didn’t move in the manner that chimpanzees and gorillas do. “Mary’s footprint trail was graphically clear,” explains Virginia Morell in Ancestral Passions: The Leakey Family and the Quest for Humankind’s Beginnings, “the early hominoids stood tall and walked as easily on two legs as any Homo sapiens today… it was apparently this upright stance, rather than enlarged crania, that first separated these creatures from other primates.”

The adults were a little under five feet tall and under 100 pounds, covered in downy brown fur, with sloping brow and overbit jaw, with the face of an ape but the uncanny eyes of a human, the upright walking itself transforming them into the latter. Walking preceded words, the ambulation perhaps necessitating the speaking. Australopithecus remained among the pleasant green plains of east Africa, but with the evolution of anatomically modern humans in the area that is now Kenya, Tanzania, and Ethiopia, walking became the engine by which people disseminated through the world. Meandering was humanity’s insurance; as Nicholas Wade writes in Before the Dawn: Recovering the Lost History of Our Ancestors, as little as 50,000 years ago the “ancestral human population, the first to possess the power of fully articulate modern speech, may have numbered only 5,000 people, confined to a homeland in northeast Africa.” In such small numbers, and in such a circumscribed area, humanity was prisoner to circumstance, where an errant volcano, drought, or epidemic could have easily consigned us to oblivion. Walking as far as we could was our salvation.

Humans would walk out of Africa into Asia, and perhaps by a combination of simple boats and swimming down the Indonesian coast into Australia, across the Bering Strait and into North America, and over the Panamanian isthmus into South America, with distant islands like Madagascar, New Zealand, and Iceland waiting for sailing technology to ferry people to their shores millennia after we left the shade of the Serengeti’s bulbous baobab trees. We think of our ancestors as living in a small world, but theirs was an expansive realm, all the more so since it wasn’t espied through a screen. They were partner to burrowing meerkats peeking over the dry scrub of the Kalahari, to nesting barn owls overlooking the crusted, moss-covered bark of the ancient Ciminian Forest, to the curious giant softshell tortoises of the Yangtze River. To walk is to be partner to the natural world; it is to fully inhabit being an embodied self. Choosing to be a pedestrian today is to reject the diabolic speed of both automobile and computer. Macfarlane writes in The Wild Places that nature is “invaluable to us precisely because… [it is] uncompromisingly different… you are made briefly aware of a world at work around and beside our own, a world operating in patterns and purposes that you do not share.” Sojourn into a world so foreign was the birthright of the first humans, and it still is today, if you choose it.

All continents, albeit mostly separated by unsettlingly vast oceans, are in some form or another connected by thin strips of land here or there, something to the advantage of the Scottish Victorian explorer Sir George Thompson, who walked from Canada to Western Europe via Siberia. More recently there was the English explorer George Meegan, who from 1977 to 1983 endeavored to walk from Patagonia to the northern tip of North America, which involved inching up South America’s Pacific coast, crossing the Darien Gap into Central America, circling the Gulf Coast and walking up the Atlantic shore, following the Canadian border, and then walking from the Yukon into Alaska. Meegan’s expedition covered 19,019 miles, the longest recorded uninterrupted walk. Driven by the same nervous propulsion that possibly compelled that first generation to leave home, Meegan explains in The Longest Walk: The Record of Our World’s First Crossing of the Entire Americas that “once the idea seemed to be a real possibility, once I thought I could do it, I had to do it.” Along the way Meegan wore out 12 pairs of hiking boots, got stabbed once, and met peanut-farmin’ Jimmy Carter at his Georgia homestead.

Meegan’s route was that of the first Americans, albeit accomplished in reverse. The most recent large landmass to be settled, the Americas presented those earliest walkers with a verdant expanse, for as Craig Childs describes the Paleolithic continent in Atlas of a Lost World: Travels in Ice Age America, the land east of the Bering Strait was a “mosaic of rivers and grasslands… horizons continuing on as if constantly giving birth to themselves—mammoths, Pleistocene horses, and giant bears strung out as far as the eye could see. It must have seemed as if there was no end, the generosity of this planet unimaginable.” It’s a venerable (and dangerous) tradition to see America as an unspoiled Paradise, but it’s not without its justifications, one that I’ve been tempted to embrace during my own errands into the wilderness. Raised not far from Pittsburgh’s Frick Park, a 644-acre preserve set within the eastern edge of the city, a bramble of moss-covered rocky creeks and surprisingly steep ravines, a constructed forest primeval meant to look as it did when the Iroquois lived there, I too could convince myself that I was a settler upon the frontier. Like all sojourns into the woods, my strolls in Frick Park couldn’t help but have a bit of the mythic about them, especially at dusk.

One day when I was a freshman, a friend and I took a stack of cheap, pocket-sized, rough orange-covered New Testaments which the Gideons, standing the requisite constitutionally mandated distance from our high school, had been assiduously handing out as part of an ill-considered attempt to convert our fellow students. In a fit of adolescent blasphemy, we went to a Frick Park path and walked through the cooling October forest as twilight fell, ripping the cheap pages from the bibles and letting them fall like crisp leaves to the woods’ floor, or perhaps inadvertently as a trail of Eden’s apple seeds. There is no such thing as blasphemy unless you already assent to the numinous, and even as our heretical stroll turned God on his head, a different pilgrim had once roamed woods like these. During the Second Great Awakening, when revivals of strange fervency and singular belief burnt down the Appalachian edge, a pious adherent of the mystical Swedenborgian faith named John Chapman was celebrated for traipsing through Pennsylvania, Ohio, Indiana, and Illinois. Briefly a resident of the settlement of Grant’s Hill in what is today downtown Pittsburgh, several miles west of Frick Park, Chapman made it his mission to spread the gospel of the New Church, along with the planting of orchards. Posterity remembers him as Johnny Appleseed.

Folk memory has Johnny Appleseed fixed in a particular (and peculiar) way: the bearded frontiersman, wearing a rough brown burlap coffee sack, a copper pot on his head, and a trundled bag over his shoulder as the barefoot yeoman plants apples across the wide expanse of the West. I’d wager there is a strong possibility you thought he was as apocryphal as John Henry or Paul Bunyan, but Chapman was definitely real; a Protestant St. Francis of whom it was said that he walked with a tamed wolf and that due to his creaturely benevolence even the mosquitoes would spare him their sting. Extreme walkers become aspects of nature, their souls like the migratory birds that trace lines over the earth’s curvature. Johnny Appleseed’s walking was a declaration of common ownership over the enormity of this land. Sometime in the 1840s, Chapman found himself listening to the outdoor sermonizing of a fire-and-brimstone Methodist preacher in Mansfield, Ohio. “Where is the primitive Christian, clad in coarse raiment, walking barefoot to Jerusalem?” the minister implored the crowd, judging them for their materialism, frivolity, and immorality. Finally, a heretofore silent Johnny Appleseed, grown tired of the uncharitable harangue, ascended the speaker’s platform and hiked one grime-covered, bunion-encrusted, and blistered black foot underneath the preacher’s nose. “Here’s your primitive Christian,” he supposedly declared. Even Johnny Appleseed’s gospel was one of walking.

“John Chapman’s appearance at the minister’s stump,” writes William Kerrigan in Johnny Appleseed and the American Orchard: A Cultural History, made the horticulturalist a “walking manifestation of a rejection of materialism.” Not just figuratively a walking embodiment of such spirituality, but literally a walking incarnation of it. The regularity of putting one foot after the other has the rhythm of the fingering of rosary beads or the turning of prayer wheels; both intimately physical and yet paradoxically a means of transcending our bodies. “Walking, ideally, is a state in which the mind, the body, and the world are aligned,” writes Solnit, “as though they were three characters finally in conversation together, three notes suddenly making a chord.” Hence walking as religious devotion, from the Aboriginal Australian on walkabout amidst the burnt ochre Outback, to the murmuring pilgrim tracing the labyrinth underneath the stone flying buttresses of Chartres Cathedral, to the hajji walking over hot sands towards Mecca, to the Orthodox Jew graced with the gift of deliberateness as she walks to shul on Shabbat. Contemplative, meditative, and restorative, walking can, religiously speaking, also be penitential. In 2009 the Irish Augustinian Fr. Michael Mernagh walked from Cork to Dublin on a pilgrimage of atonement, undertaken single-handedly in penitence for the Church’s shameful silence regarding child sexual abuse. It was not just a pilgrimage but a protest, with Fr. Mernagh saying of the rot infecting the Church that the “more I have walked the more I feel it is widespread beyond our comprehension.” Atonement is uncomfortable, painful even. As pleasant as a leisurely stroll can be, a penitential hike should strain the lungs and burn the muscles. If penitence isn’t freely taken, however, then it’s no longer penitence. And if there’s no reason for contrition, then it’s something else: punishment. Or torture.

“The drop outs began,” recalled Lt. Col. William E. Dyess. “It seemed that a great many of the prisoners reached the end of their endurance at about the same time. They went down by twos and threes. Usually, they made an effort to rise. I never can forget their groans and strangled breathing as they tried to get up. Some succeeded.” Many didn’t. A Texas Air Force officer with movie-star good looks, Dyess had been fighting with the 21st Pursuit Squadron in the Philippines when the Japanese Imperial Army invaded in 1942. Along with some 80,000 fellow American and Filipino troops, he was marched the 70 miles from Mariveles to Camp O’Donnell. Denied food and water and forced to walk beneath the jungle sun in temperatures well over 100 degrees, an estimated 26,000 prisoners—a third of those captured—perished in that scorching April of 1942.

Bayonet and heat, bullet and sun, all goaded the men to put one foot in front of the other until many of them couldn’t anymore. Transferred from the maggot- and filth-infested Camp O’Donnell, where malaria and dengue fever took even more Filipinos and Americans, Dyess was able to escape and make it back to American lines. While convalescing in White Sulphur Springs, W.Va., Dyess narrated his account—the first eyewitness American testimony to the Bataan Death March—to Chicago Tribune writer Charles Leavelle. The military prohibited its release, and Leavelle would not see the publication of Bataan Death March: A Survivor’s Account until 1944. Dyess never read it; he had died a year before. His P-38G-10-LO Lightning lost an engine during takeoff at a Glendale, Calif., airport, and rather than risk civilian casualties by abandoning the plane, Dyess crashed it into a vacant lot, so that his life would be taken by flying rather than by walking.

Walking can remind us that we’re alive, so that it’s all the more obscene when such a human act is turned against us, when the pleasure of exertion turns into the horror of exhaustion, the gentle burn in muscles transformed into spasms, breathing mutated into sputtering. Bataan’s nightmare, among several, was that it was walking that couldn’t stop, that wasn’t allowed to stop. “It is the intense pain that destroys a person’s self and world,” writes philosopher Elaine Scarry in The Body in Pain: The Making and Unmaking of the World, “a destruction experienced spatially as either the contraction of the universe down to the immediate vicinity of the body or as the body swelling to fill the entire universe.” Torture reminds us that we’re reducible to bodies; it particularizes and universalizes our pain. With some irony, walking does something similar, the exertion of moving legs and swinging arms, our wide-ranging mobility, announcing us as citizens of the universe. Hence the hellish irony of Bataan, or of the 2,200 miles from Georgia to Oklahoma that more than 100,000 Cherokee, Muscogee, Seminole, Chickasaw, and Choctaw were forced to walk by the U.S. federal government between 1830 and 1850, the January 1945 40-mile march of 56,000 Auschwitz prisoners to the Loslau train station in sub-zero temperatures, or the 2.5 million residents of Phnom Penh, Cambodia, forced to evacuate into the surrounding countryside by the Khmer Rouge in 1975. Such walks are hell: prisoners followed by guards with guns and German shepherds, over the hard, dark ground.

Harriet Tubman’s walks were also in the winter, feet trying to gain uncertain purchase upon frozen bramble, stumbling over cold ground and slick, snow-covered brown leaves, and she too was pursued by men with rifles and dogs. She covered distances similar to those of the forced marches, but Tubman was headed to another destination, and that has made all the difference. Crossing the Mason-Dixon line in 1849, Tubman recalled that “I looked at my hands to see if I was the same person. There was such glory over everything; the sun came like gold through the trees, and over the fields, and I felt like I was in Heaven.” But she wasn’t in Heaven, she was in Pennsylvania. For Tubman, and for the seventy enslaved people whom she liberated on thirteen daring missions back into Maryland, walking was arduous, walking was frightening, walking was dangerous, but more than anything walking was the price of freedom.

Tubman would rightly come to be known as the Moses of her people (another prodigious walker), descending into the antebellum South like Christ harrowing Hell. The network of safe houses and sympathetic abolitionists who shepherded the enslaved out of Maryland, Virginia, and North Carolina into Pennsylvania, New England, and Canada, who quartered the enslaved in cold, dusty, cracked root cellars and hidden passageways, used multiple means of transportation. People hid in the backs of wagons underneath moldering produce, they availed themselves of steamships, and sometimes the Underground Railroad was a literal railroad. One enterprising man named Henry Box Brown even mailed himself from Richmond to Philadelphia, the same year Tubman arrived in the Quaker City. But if the Underground Railroad was anything, it was mostly a process of putting one foot before the other on the long walk north.

Familiar with the ebbs and flows of the brackish Chesapeake as it lapped upon the western shores of Dorchester County, Tubman was able to interpret mossy rocks and trees to orient herself, navigating by the Big Dipper and Polaris. Her preferred time of travel was naturally at night, and winter was the best season to abscond, the silence and cold of the season deployed as camouflage. Dorchester is a scant 150 miles from Philadelphia, but those even further south, in South Carolina, Georgia, even Mississippi, would also walk to freedom. Eric Foner explains in Gateway to Freedom: The Hidden History of the Underground Railroad that “Even those who initially escaped by other means ended up having to walk significant distances.” Those nights on the Underground Railroad must have been terrifying: the resounding barks of Cuban hounds straining at slave catchers’ leashes, the metallic taste of fear sitting in mouths, bile rising up in throats. Yet what portals of momentary grace and beauty were there, those intimations of the sought-after freedom? To see the graceful free passage of a red-tailed hawk over the Green Ridge, the bramble thickets along the cacophonous Great Falls of the cloudy Potomac, the luminescence of a blue moon reflected on a patch of thick ice in the Ohio River?

During that same decade, the French dictator Napoleon III was, through ambitious city planning, inventing an entirely new category of walker: the peripatetic urban wanderer. Only a few months after Tubman arrived in Philadelphia, and four thousand miles across the ocean, something new in the annals of human experience would open at Paris’s Au Coin de la Rue: a department store. For centuries, walking was simply a means of getting from one place to another; from home to the market, from the market to church. With something like the department store, or the glass-covered merchant streets known as arcades, people were enticed not just to walk somewhere, but rather to walk everywhere. Such were the beginnings of the category of perambulator known as the flâneur, a word that is untranslatable, but carries connotations of wandering, idling, loafing, sauntering.

Being a flâneur means simply walking without purpose other than to observe: strolling down Le Havre Boulevard and eyeing the window displays of fashions woven in cashmere and mohair at Printemps, espying the bakeries of Montmartre laying out macarons and pain au chocolat, passing the diners in the outdoor brasseries of the Left Bank eating coq au vin. Before the city planner Georges-Eugène Haussmann’s radical Second Empire reforms, Paris was a crowded, fetid, confusing, and disorganized assemblage of crooked and narrow cobblestoned streets and dilapidated half-timbered houses. Afterwards it became a metropolis of wide, magnificent boulevards, parks, squares, and museums. Most distinctively, there were the astounding 56,573 gas lamps that had been assembled by 1870, lit by a legion of allumeurs at dusk, so that people could walk at night. If Haussmann, and Napoleon III, were responsible for the arrival of the flâneur, it was because the city finally had things worth seeing. For the privileged flâneur, to walk wasn’t the means to acquire freedom: to walk was freedom.

“For the perfect flâneur,” writes poet Charles Baudelaire in 1863, “it is an immense joy to set up house in the heart of the multitude, amid the ebb and flow of movement, in the midst of the fugitive and the infinite… we might liken him to a mirror as vast as the crowd itself; or to a kaleidoscope gifted with consciousness.” Every great city’s pedestrian-minded public thoroughfares—the Avenue des Champs-Élysées and Cromwell Road; Fifth Avenue and Sunset Boulevard—are the rightful territory of the universal flâneur. Writers like Baudelaire compared the idyll of city walking to that other 19th-century innovation, photography; the flâneur existed inside a living daguerreotype, as if they had entered the hazy atmosphere of an impressionist painting, the gas lamps illuminating the drizzly fog of a Parisian evening. For the 20th-century German philosopher Walter Benjamin, who analyzed the activity in his uncompleted magnum opus The Arcades Project, the flâneur was the living symbol of modernity: the “crowd was the veil from behind which the familiar city as phantasmagoria beckoned to the flâneur.”

I often played the role of flâneur during the two years my wife and I lived in Manhattan in a bit of much-coveted rent-controlled bliss. When your estate is 200 square feet, there’s not much choice but to be a flâneur, and so we occupied ourselves with night strolls through the electric city, becoming familiar with the breathing and perspiring of the metropolis. Each avenue has its esprit de place: stately residential York, commercial First and Second with their banks and storefronts, imperial Madison with its regal countenance, Park with its aura of old money, barely reformed Lexington with its intimations of past seediness. In the city at night, we availed ourselves of the intricate pyrotechnic window displays of Barneys and Bloomingdale’s, of the bohemian leisure of the Strand’s stalls in front of Central Park, of the linoleum cavern underneath the 59th Street Bridge, and of course Grand Central Station, the United States’ least disappointing public space.

Despite gentrification, rising inequity, and now the pandemic, New York still amazingly functions according to what the geographer Edward Soja describes in Thirdspace as a place where “everything comes together… subjectivity and objectivity, the abstract and the concrete, the real and the imagined, the knowable and the unimaginable.” Yet Manhattan is perhaps more a nature preserve for the flâneur, as various economic and social forces over the past few decades have conspired to make our species extinct. The automobile would seem to be a natural predator of the type, and yet even in the deepest environs of the pedestrian-unfriendly suburbs, the (now largely closed) American shopping mall fulfilled much the same function as Baudelaire’s arcades: to stroll, to see, to be seen. A new threat has emerged in the form of Amazon, which portends the end of the brick-and-mortar establishment, with the coronavirus perhaps the final death of the flâneur. If that type of walker was birthed by the industrial revolution, then it now appears that late capitalism is his demise, undone by our new tyrant Jeff Bezos.

The rights of the flâneur were never equally distributed: the classic accounts make scant mention of a walker needing to be hyperaware of her surroundings, of needing to carry keys in her fist, or of having to arm herself with mace. While it’s not an entirely unknown word, flâneuse is a much rarer term, and it’s clear that the independence and assumed safety that pedestrian exploration implies are more often than not configured as masculine. Women have, of course, been just as prodigious in mapping the urban space with their feet as men have, with Lauren Elkin joking in Flâneuse: Women Walk the City in Paris, New York, Tokyo, Venice, and London that many accounts assume a “penis were a requisite walking appendage, like a cane.” She provides a necessary corrective to the male-heavy history of the flâneur, while also acknowledging that the risks are different for women. Describing the anonymity that such walking requires, Elkin writes that “We would love to be invisible the way a man is. We’re not the ones to make ourselves visible… it’s the gaze of the flâneur that makes the woman who would join his ranks too visible to slip by unnoticed.”

As a means of addressing this inequity, which denies more than half the world’s population safe passage through public spaces, the activist movement Take Back the Night held its first march in Philadelphia after the 1975 murder of microbiologist Susan Alexander Speeth as she was walking home. Take Back the Night used one of the most venerable of protest strategies—the march—as a means of expressing solidarity, security, defiance, and rage. Andrea Dworkin stated the issue succinctly in her treatise “The Night and Danger,” explaining that “Women are often told to be extra careful and take precautions when going out at night… So when women struggle for freedom, we must start at the beginning by fighting for freedom of movement… We must recognize that freedom of movement is a precondition for everything else.” Often beginning with a candlelight vigil, the marches let participants do exactly that which they’re so often prevented from doing: walking freely at night. Too often, paeans to walking penned by men wax rhapsodic about the freedom of the flâneur but forget how gendered the simple act of walking is. Dworkin’s point is that women never forget it.

Few visuals are quite as powerful as seeing thousands of women and men moving with intentionality through a public space, hoisting placards and signs, chanting slogans, and reminding the powers that be what mass mobilization looks like. There is a debate to be had about the efficacy of protest, but at its absolute most charged, a protest seems like it can change the world: thousands of feet walking as one, every marcher a small cell in a mighty Leviathan. In that uncharacteristically warm February of 2003, I joined the 5,000 activists who marched through the Pittsburgh neighborhood of Oakland against the impending war in Iraq. There were the old hippies wearing t-shirts against the Vietnam War, the slightly drugged-out-looking college-aged Nader voters, Muslim women in vermillion hijabs and men in olive keffiyehs, the Catholic Workers and the Jews for Palestine, the slightly menacing balaclava-wearing anarchists, and of course your well-meaning liberals such as myself.

We marched past Carnegie Mellon’s frat row, where young Republicans jeered us with cans of Milwaukee’s Best, through the brutalist concrete caverns of the University of Pittsburgh’s campus, under the watchful golem that was the towering gothic Cathedral of Learning, and up to the Fifth Avenue headquarters of CMU’s Software Engineering Institute, a soulless mirrored cube reflecting the granite gargoyles, blackened by decades of steel mill exhaust, perched on St. Paul’s Cathedral across the street. Supposedly both the SEI and the adjacent RAND Institute had DoD contracts, developing software that would be used for drone strikes and smart bombs. With righteous (and accurately placed) indignation, the incensed crowd chanted, and we felt as a singular being. On that same day, in 650 cities around the world, 11 million others marched in history’s largest global protest. It felt as if by walking we’d stop the invasion. Reader, we did not stop the invasion.

Despite those failures, the experience is indicative of how walking alters consciousness, not just in a political sense but in a personal one as well (though those things are not easily extricated). There is even a methodology for examining how walking alters our subjectivity, a discipline with the lofty and vaguely threatening name of “psychogeography.” The theorist Guy Debord saw the practice as a means of re-enchanting space and place, developing a concept called the dérive, which translates from the French as “drifting,” whereby participants “drop their usual motives for movement and action, their relations, their work and leisure activities, and let themselves be drawn by the attractions of the terrain and the encounters they find there,” as he is quoted in the Situationist International Anthology. Practicing something like a hyper-attuned version of flânerie, psychogeographers perceived familiar streets, squares, and cities from an entirely different perspective. Other psychogeographical activities included tracing out words by the route a walker takes through the city, or mapping smells and sounds. The whole thing has an anarchic sensibility about it, but with the whimsy of the Dadaists, and it is just as enthused with praxis as with theory.

For example, in his travelogue Psychogeography, the Anglo-American novelist Will Self journeys from JFK to Manhattan on foot: sneakers crunching over refuse alongside the Van Wyck, through the metropolitan detritus of those scrubby brown patches that populate the null-voids between somewhere and somewhere else. Nothing can really compare to entering New York on foot, as Self did. It’s fundamentally different from arriving in a cab driving underneath the iconic steel girders of the Manhattan Bridge, or being ferried into the Parthenon that is Grand Central, or even disembarking from a Peter Pan bus in the grody cavern of Port Authority. Walking is a “means of dissolving the mechanized matrix which compresses the space-time continuum,” Self writes, with the walker acting as “an insurgent against the contemporary world, an ambulatory time traveler.” For the psychogeographers, how we move is how we think, so that if we wish to change the latter, we must first alter the former.

So it would seem. Writing in the 18th century, Jean-Jacques Rousseau remarked in The Confessions that “I can only meditate when I’m walking… my mind only works with my legs.” In his autobiography Ecce Homo, Friedrich Nietzsche enjoined, “Sit as little as possible; do not believe any idea that was not born in the open air and of free movement…. All prejudices emanate from the bowels.” Meanwhile, his contemporary Søren Kierkegaard wrote that “I have walked myself into my best thoughts.” The most celebrated of the walking philosophers, Immanuel Kant, was so regular in his daily constitutionals across Königsberg’s bridges that merchants set their watches by him. Wallace Stevens famously composed his poems as he stomped across the antiseptic landscape of Hartford, Conn. He walked as scansion, his wingtips pounding out iambs and trochees with the wisdom that understands verse is as much of the body as it is of the mind, so that “Perhaps/The truth depends on a walk.” Walking is a type of consciousness, a type of thinking. Walking is a variety of reading, the landscape unspooling as the most shining of verse, so that every green leaf framed against a church’s gothic tower in a dying steel town, both glowing with an inner light in the luminescence of the golden hour, is the most perfect of poems, seen only by the one who gives it the time to be considered.

Image Credit: SnappyGoat