Nothing Outside the Text


Let’s pretend that you work in a windowless office on Madison, or Lex, or Park in the spring of 1954, and you’re hurrying to Grand Central to avoid the rush back to New Rochelle, or Hempstead, or Weehawken. Walking through the Main Concourse with its Beaux-Arts arches and its bronzed clock atop the information booth, its cosmic green ceiling fresco depicting the constellations slowly stained by the accumulated cigarette smoke, you decide to purchase an egg-salad sandwich from a vendor near the display with its ticking symphony of arrivals and departures. You glance down at a stack of slender magazines held together with thick twine. The cover illustration is of a buxom brunette wearing a yellow pa’u skirt and hauling an unconscious astronaut in a silver spacesuit through a tropical forest while fur-covered extraterrestrials look on between the palm trees. It’s entitled Universe Science Fiction. And if you were the sort of Manhattan worker who, after clocking eight hours at the Seagram Building or the Pan Am Building, settles into the commuter train’s seat—after loosening his black-knit tie and lighting a Lucky Strike while watching Long Island go by—ready to be immersed in tales of space explorers and time travelers, then perhaps you parted with a quarter so that Universe Science Fiction’s cheap print would smudge your gray flannel suit. The sort of reading you’d want to cocoon yourself in, a universe to yourself with nothing outside the text. As advertised, you find a story of just a few hundred words entitled “The Immortal Bard,” by a writer named Isaac Asimov.

Reprinted three years later in Earth Is Room Enough, “The Immortal Bard” is set among sherry-quaffing, tweedy university faculty at their Christmas party, where a boozed-up physicist named Phineas Welch corners the English professor Scott Robertson and explains how he’s invented a method of “temporal transference” able to propel historical figures into the present. Welch resurrects Archimedes, Galileo, and Isaac Newton, but “They couldn’t get used to our way of life… They got terribly lonely and frightened. I had to send them back.” Despite their genius, their thought wasn’t universal, and so Welch conjures William Shakespeare, believing him to be “someone who knew people well enough to be able to live with them centuries away from his own time.” Robertson humors the physicist, treating such claims with a humanist’s disdain, true to C.P. Snow’s observation in The Two Cultures and the Scientific Revolution that “at one pole we have the literary intellectuals, at the other scientists… Between the two a gulf of mutual incomprehension.” Welch explains that the Bard was surprised that he was still taught and studied; after all, he wrote Hamlet in a few weeks, just polishing up an old plot. But when introduced to literary criticism, Shakespeare can’t comprehend any of it. “God ha’ mercy! What cannot be racked from words in five centuries? One could wring, methinks, a flood from a damp clout!” So Welch enrolls him in a Shakespeare course, and suddenly Robertson begins to fear that this story isn’t just a delusion, for he remembers the bald man with a strange brogue who had been his student. “I had to send him back,” Welch declares, because our most flexible and universal of minds had been humiliated. The physicist downs his cocktail and mutters, “you poor simpleton, you flunked him.”

“The Immortal Bard” doesn’t contain several of the details that I include—no sherry, no tweed (though there are cocktails). We have no sense of the characters’ appearances; Welch’s clothes are briefly described, but Robertson is a total blank. A prolific author, penning over 1,000 words a day, Asimov had by his death in 1992 published more than 500 books across all categories of the Dewey Decimal System (including Asimov’s Guide to Shakespeare). Skeletal parsimony was Asimov’s idiom; in his essay “On Style” from Who Done It?, coedited with Alice Laurence, he described his prose as “short words and simple sentence structure,” bemoaning that this characteristic “grates on people who like things that are poetic, weighty, complex, and, above all, obscure.” Had his magisterial Foundation been about driving an ambulance in the First World War rather than a galactic empire spanning tens of thousands of light years, it’d be the subject of hundreds of dissertations. If his short story “Nightfall” had been written in Spanish, then he’d be Jorge Luis Borges; had Asimov’s “The Last Question” originally been in Italian, then he’d be Italo Calvino (American critics respect fantasy in an accent, but then they call it “magical realism”). As it was, critical evaluation was more lackluster, with the editors of 1981’s Dictionary of Literary Biography claiming that since Asimov’s stories “clearly state what they mean in unambiguous language [they] are… difficult for a scholar to deal with because there is little to be interpreted.”

Asimov’s dig comes into sharper focus once you know that he admitted “The Immortal Bard” was revenge on those professors who’d rankled him by misinterpreting stories—his and others’. A castigation of the discipline as practiced in 1954, which meant a group that had dominated the study of literature for three decades—the New Critics. With their rather uninspired name, the New Critics—including I.A. Richards, John Crowe Ransom, Cleanth Brooks, Allen Tate, Robert Penn Warren, William Empson, and a poet of some renown named T.S. Eliot (among others)—so thoroughly revolutionized how literature is studied that explaining why they’re important is like explaining air to a bird. From Yale, Cambridge, and Kenyon, the New Criticism would spread, trickling down through the rest of the educational system. If an English teacher asked you to analyze metaphors in Shakespeare’s “Sonnet 12”—that was because of the New Critics. If an AP instructor asked you to examine symbolism in F. Scott Fitzgerald’s The Great Gatsby—that was because of the New Critics. If a college professor made you perform a rhetorical analysis of Virginia Woolf’s Mrs. Dalloway—that was because of the New Critics. Most of all, if anyone ever made you conduct a “close reading,” it is the New Critics who are to blame.

According to the New Critics, their job wasn’t an aesthetic one, parsing what was beautiful about literature or how it moved people (the de facto approach in Victorian classrooms, and still the norm in popular criticism), but an analytical one. The task of the critic was scientific—to understand how literature worked, and to be as rigorous, objective, and meticulous as possible. That meant bringing nothing to the text but the text. Neither the critic’s concerns—nor the author’s—mattered more than the placement of a comma, the connotation of a particular word, the length of a sentence. It’s true that they often unfairly denigrated worthy writing because their detailed explications only lent themselves to certain texts. Poetry was elevated above prose; the metaphysical over the Romantic; the “literary” over genre. But if the New Critics were snobbish in their preferences, they also weren’t wrong that there was utility in the text’s authority. W.K. Wimsatt and Monroe Beardsley argued in a 1946 issue of The Sewanee Review that the “design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art.” In a more introspective interview, Asimov admitted that what inspired “The Immortal Bard” was his inadequacy at answering audience questions about his own writing. Asimov was right that he deserved more attention from academics, but wrong in assuming that he’d know more than they did. The real kicker of the story might be that Shakespeare actually earned his failing grade.

When New Critics are alluded to in popular culture, it’s as anal-retentive killjoys. Maybe the most salient (and unfair) drumming the New Critics received was in Peter Weir’s (mis)beloved 1989 film Dead Poets Society, featuring Robin Williams’s excellent portrayal of John Keating, an English teacher at elite Welton Academy in 1959. Keating arrives at the rarefied school urging the boys towards carpe diem, confidently telling them that the “powerful play goes on, and you may contribute a verse.” With vitality and passion, Keating introduces the students to Tennyson, Whitman, and Thoreau, and the audience comprehends that this abundantly conservative curriculum is actually an act of daring, at least when contrasted to the New Critical orthodoxy that had previously stultified the children of millionaires. On his first day, wearing corduroy with leather arm patches, Keating asks a student to recite from their approved textbook by Professor J. Evans Pritchard. In a monotone, the student reads “If the poem’s score for perfection is plotted on the horizontal of a graph and its importance is plotted on the vertical, then calculating the total area of the poem yields the measure of its greatness.” We are to understand that once the cold scalpel of analysis cuts the warm flesh of the poem—if it wasn’t already dead—then it certainly perished thereafter. Keating pronounces the argument to be “Excrement,” and demands his students rip the page out, so that a film that defends the humanities depicts the destruction of books. “But poetry, beauty, romance, love, these are what we stay alive for,” Keating tells his charges, and how could dusty Dr. Pritchard and his Cartesian coordinate plane compete?

The fictional Pritchard’s essay is an allusion to Brooks and Warren’s influential 1938 Understanding Poetry, where they write that “Poetry gives us knowledge… of ourselves in relation to the world of experience, and to that world considered… in terms of human purposes and values.” Not soaring in its rhetoric, but not the cartoon from Dead Poets Society either, though also notably not what Keating advocated. He finds room for poetry, beauty, romance, and love, but neglects truth. What Keating champions isn’t criticism, but appreciation, and while the former requires rigor and objectivity, the latter only needs enjoyment. Appreciation in and of itself is fine—but it doesn’t necessarily ask anything of us either. Kevin Dettmar, in his defenestration of the movie in The Atlantic, writes that “passion alone, divorced from the thrilling intellectual work of real analysis, is empty, even dangerous.” Pontificating in front of his captive audience, Keating recites (and misinterprets) poems from Robert Frost and Percy Shelley, demanding that “When you read, don’t just consider what the author thinks, consider what you think.” Eminently sensible, for on the surface an exhortation towards critical thought and independence seems reasonable, certainly when reading an editorial, or a column, or a policy proposal—but this is poetry.

His invocation is diametrically opposed to another New Critical principle, also defined by Wimsatt and Beardsley in The Sewanee Review in 1949, and further explicated in their book The Verbal Icon: Studies in the Meaning of Poetry, published the same year as Asimov’s story. Wimsatt and Beardsley say that genuine criticism doesn’t “talk of tears, prickles, or other physiological symptoms, of feeling angry, joyful, hot, cold, or intense, or of vaguer states of emotional disturbance, but of shades of distinction and relation between objects of emotion.” In other words, it’s not all about you. Being concerned with the poem’s effect tells us everything about the reader but little about the poem. Such a declaration might seem arid, sterile, and inert—especially compared to Keating’s energy—and yet there is paradoxically more life in The Verbal Icon. “Boys, you must strive to find your own voice,” Keating tells a room full of the children of wealth, power, and privilege, who will spend the whole 20th century ensuring that nobody has the option not to hear them. Rather than sounding the triumphant horn of independence, this is mere farting on the bugle of egoism. Poetry’s actual power is that it demands we shut up and listen. Wimsatt and Beardsley aren’t asking the reader not to be changed by a poem—it’s the opposite. They’re demanding that we don’t make an idol of our relative, contingent, arbitrary reactions. A poem is a profoundly alien place, a foreign place, a strange place. We do not go there to meet ourselves; we go there to meet something that doesn’t even have a face. Keating treats poems like mirrors, but they’re windows.

With austerity and sternness, New Criticism was an approach that, with sola Scriptura exactitude, acknowledged nothing above, below, behind, or beyond the text. Equal parts mathematician and mystic, the New Critic deems objectivity the preeminent goal, for in the novel, or poem, or play properly interpreted she has entered a room separate, an empire of pure alterity. An emphasis on objectivity doesn’t entail singularity of interpretation, for though the New Critics believed in right and wrong readings, good and bad interpretations, they reveled in nothing as much as ambiguity and paradox. But works aren’t infinite. If a text can be many things, it also can’t mean everything. What such an approach breaks is the tyranny of relativism, the cult of “I feel” that defined conservative Victorian criticism and, ironically, some contemporary therapeutic manifestations as well. New Criticism drew from the French tradition of explication de texte, the rigorous parsing of grammar, syntax, diction, punctuation, imagery, and narrative. Not only did they supplant Victorian aesthetic criticism’s woolliness, their method was estimably democratic (despite their sometimes-conservative political inclinations, especially among the movement known as the Southern Agrarians). Democratic because none of the habituated knowledge of the upper class—the mores of Eton or Phillips Exeter, summering at Bath or Newport, proper dress from Harrods or Brooks Brothers—now made a difference in appreciating Tennyson or Byron. Upper-class codes were replaced with the severe, rigorous, logical skill of understanding Tennyson or Byron, with no recourse to where you came from or who you were, but only the words themselves. Appreciation is about taste, discernment, and breeding—it’s about acculturation. Analysis? That’s about poetry.

From that flurry of early talent came Richards’s 1924 Principles of Literary Criticism and 1929 Practical Criticism, Empson’s 1930 Seven Types of Ambiguity, Brooks and Warren’s 1938 Understanding Poetry, Brooks’s classic 1947 The Well Wrought Urn: Studies in the Structure of Poetry, and Wimsatt and Beardsley’s 1954 The Verbal Icon, as well as several essays of Ransom and Eliot. The New Critics did nothing less than upend literature’s study by focusing on words, words, words (as Hamlet would say). “A book is a machine to think with,” Richards wrote in Principles of Literary Criticism, and you may disdain that as chilly, but machines do for us that which we can’t do for ourselves. Reading yourself into a poem is as fallacious as sitting in a parked car and thinking that will get you to Stop & Shop. Lest you think that Richards is sterile, he also affirmed that “Poetry is capable of saving us,” and that’s not in spite of its being a machine, but because of it. They sanctified literature by cordoning it off and making it a universe unto itself, while understanding that its ideal rigidity is like absolute zero—an abstraction, since readerly investment by necessity always lets a little heat in. In practice, the job of the critic is profound in its prosaicness. Vivian Bearing, a fictional English professor in Margaret Edson’s harrowing and beautiful play W;t, which contains the most engaging dramatization of close reading ever committed to stage or screen, describes the purpose of criticism as not to reaffirm whatever people want to believe, but rather that by reading in an “uncompromising way, one learns something from [the] poem, wouldn’t you say?”

There have been assaults upon this bastion over the last several decades, yet even while that mélange of neo-orthodoxies that became ascendant in the ’70s and ’80s, when English professor was still a paying job, is sometimes interpreted as dethroning the New Critics—the structuralists and post-structuralists, the New Historicists and the Marxists, the Queer Theorists and the post-colonial theorists—their success was an ironic confirmation of the staying power of Wimsatt, Beardsley, Brooks, Warren, Richards, Empson, and so on. “Like all schools of criticism, the New Critics have been derided by their successors,” writes William Logan in his foreword to Garrick Davis’s Praising It New: The Best of the New Criticism, “but they retain an extraordinary influence on the daily practice of criticism.” After all, when a post-structuralist writes about binary oppositions, there are shades of Empson’s paradox and ambiguity; when Roland Barthes wrote in 1967 that “the reader is without history, biography, psychology,” there is a restatement of Wimsatt and Beardsley on affect; and that he was writing in an essay called “The Death of the Author” is the ultimate confirmation of their case against intention. And Jacques Derrida’s deconstruction, that much maligned and misinterpreted word, bane to conservatives and balm to radicals? It’s nothing more than a different type of close reading—a hyper-attenuated and pure form of it—where pages could be produced on a single comma in James Joyce’s Ulysses. We are still their children.

Though far from an unequivocal celebrant of the New Critics, Terry Eagleton writes in Literary Theory: An Introduction that close reading provided “a valuable antidote to aestheticist chit-chat,” explaining that the method does “more than insist on due attentiveness to the text. It inescapably suggests an attention to… the ‘words on the page.'” Close reading is sometimes slandered as brutal vivisection, but it’s really a manner of possession. In the sifting through of diction and syntax, grammar and punctuation, image and figuration, there are pearls. Take a sterling master—Helen Vendler. Here she is examining Shakespeare’s Sonnet 73, where he writes “In me thou see’st the glowing of such fire/That on the ashes of his youth doth lie.” She notes that the narrator across the lyric “Defines himself not by contrast but by continuity with his earlier state. He is the glowing—a positive word, unlike ruin or fade—of a fire.” Or Vendler on a line in Emily Dickinson’s poem 1779. The poet writes that “To make a prairie it takes a clover and one bee,” and the critic writes “Why ‘prairie’ instead of ‘garden’ or ‘meadow?’ Because only ‘prairie’ rhymes with ‘bee’ and ‘revery,’ and because ‘prairie’ is a distinctly American word.” Or Vendler once again, on Walt Whitman, in her book Poets Thinking: Pope, Whitman, Dickinson, Yeats. In Leaves of Grass he begins, “I celebrate myself, and sing myself,” and she observes that “The smallest parallels… come two to a line… When the parallels grow more complex, each requires a whole line, and we come near to the psalmic parallel, so often imitated by [the poet], in which the second verse adds something to the substance of the first.” And those are just examples from Helen Vendler.

When I queried literary studies Twitter about their favorite close readings—to which they responded generously and enthusiastically—I was directed towards Erich Auerbach’s Dante: Poet of the Secular World (to which I’d add Mimesis: The Representation of Reality in Western Literature); Marjorie Garber’s reading of Robert Lowell’s poem “For the Union Dead” in Field Work: Sites in Literary and Cultural Studies; the interpretation of Herman Melville’s Billy Budd in The Barbara Johnson Reader: The Surprise of Otherness; Shoshana Felman’s paper on Henry James titled “Turning the Screw of Interpretation” in Yale French Studies; Susan Howe’s My Emily Dickinson; Randall Jarrell writing about Robert Frost’s “Home Burial” in The Third Book of Criticism; Olga Springer’s Ambiguity in Charlotte Brontë’s Villette; Northrop Frye’s Fearful Symmetry: A Study of William Blake; Vladimir Nabokov on Charles Dickens and Gustave Flaubert in Lectures on Literature; Ian Watt’s The Rise of the Novel: Studies in Defoe, Richardson, and Fielding; Nina Baym on Nathaniel Hawthorne in Novels, Readers, Reviewers: Responses to Fiction in Antebellum America; Minrose Gwin on The Sound and the Fury in The Feminine and Faulkner; Edward Said on Jane Austen’s Mansfield Park in Culture and Imperialism; Christopher Ricks’s Milton’s Grand Style (to which I’d add Dylan’s Visions of Sin, which refers not to Thomas but Bob), and so on, and so on, and so on. If looking to analyze my previous gargantuan sentence, just note that I organized said critics by no schema, save to observe how such a diversity includes the old and young, the dead and alive, the traditional and the radical, all speaking to the vitality of something with as stuffy a name as close reading. Risking sentimentality, I’d add other exemplary participants—all of those anonymous graduate students parsing the sentence, all of those undergraduates scanning the line, and every dedicated reader offering attentiveness to the words themselves. That’s all close reading really is.

It would be naïve to assume that such a practice offers a way out of our current imbroglio. However, when everyone has an opinion but nobody has done the reading, when an article title alone is enough to justify judgment, and when criticism tells us more about the critic than the writing, then perhaps slow, methodical, humble close reading might provide more than just explication. Interpretations are multifaceted, but they are not relative, and for all of its otherworldliness, close reading is built on evidence. Poorly done close reading is merely fan fiction. There is something profound in acknowledging that neither the reader nor the author is preeminent, but that the text is rather the thing. It doesn’t serve to affirm what you already know, but rather to instruct you in something new. To not read yourself into a poem, or a novel, or a play, but to truly encounter another mind—not that of the author, but of literature itself—is as close to religion as the modern age countenances. Close reading is the most demonstrative way to experience that writing and reading are their own dimension.

Let’s pretend that you’re a gig worker, and while waiting to drive for Uber or Seamless, you scroll through an article entitled “Nothing Outside the Text.” It begins in the second person, inserting the reader into the narrative. The author invents a mid-century office worker who is traveling home. Place names are used as signifiers; the author mentions “New Rochelle,” “Hempstead,” and “Weehawken,” respectively in Westchester County, Long Island, and New Jersey, gesturing towards how the city’s workers radiate outward. Sensory details are emphasized—the character buys an “egg-salad sandwich,” and we’re told that he stands near the “display with its ticking symphony,” the last two words unifying the mechanical with the artistic—with the first having an explosive connotation—yet there is an emphasis on regularity, harmony, design. We are given a description of a science fiction magazine the man buys, and are told that he settles into his train seat where the magazine’s “cheap print… smudge[s]” his “gray flannel suit,” possibly an allusion to the 1955 Sloan Wilson novel of that name. This character is outwardly conformist, but his desire to be “immersed in tales of space explorers and time travelers” signals something far richer about his inner life. Finally, if you’re this modern gig worker reading about that past office worker, you might note that the latter is engaging in the “sort of reading you’d want to cocoon yourself in, a universe to yourself with nothing outside the text.” And in close reading, whether you’re you or me, the past reader or the present, the real or imagined, all that the text ever demands of us—no more and no less—is to enter into that universe on its own terms. For we have always been, if we’re anything, citizens of this text.


This Isn’t the Essay’s Title


“The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” —F. Scott Fitzgerald, “The Crack-Up” (1936)

“You’re so vain, you probably think this song is about you.” —Carly Simon, “You’re So Vain” (1972)

On a December morning in 1947, when three fellows at Princeton’s Institute for Advanced Study set out for the federal courthouse in Trenton, it had been decided that the job of making sure that the brilliant but naive logician Kurt Gödel didn’t say something intemperate at his citizenship hearing would fall to Albert Einstein. The economist Oskar Morgenstern would drive, Einstein rode shotgun, and a nervous Gödel sat in the back. With squibs of low winter light, both wave and particle, dappled across the rattling windows of Morgenstern’s car, Einstein turned back and asked, “Now, Gödel, are you really well prepared for this examination?” There had been no doubt that the philosopher had adequately studied, but whether it was prudent for him to be fully honest was another issue. Less than two centuries before, the signatories of the U.S. Constitution had supposedly crafted a document defined by separation of powers and coequal government, checks and balances, action and reaction. “The science of politics,” wrote Alexander Hamilton in “Federalist No. 9,” “has received great improvement,” though as Gödel discovered, clearly not perfection. With a completism that only a Teutonic logician was capable of, Gödel had carefully read the foundational documents of American political theory—he’d pored over the Federalist Papers and the Constitution—and he’d made an alarming discovery.

It’s believed that while studying Article V, the portion that details the process of amendment, Gödel realized that there was no safeguard against that article itself being amended. Theoretically, a sufficiently powerful political movement with legislative and executive authority could rapidly amend the articles of amendment so that a potential demagogue would be able to rule by fiat, all while such tyranny was perfectly constitutional. A paradox at the heart of the Constitution—something that supposedly guaranteed democracy having coiled within it rank authoritarianism. All three men driving to Trenton had a keen awareness of tyranny; all were refugees from Nazi Europe; all had found safe haven on the pristine streets of suburban Princeton. After the Anschluss, Gödel was a stateless man, and though raised Protestant he was suspect to the Nazis and forced to emigrate. Gödel, with his wife, departed Vienna by the Trans-Siberian railroad, crossed from Japan to San Francisco, and then took the remainder of his sojourn by train to Princeton. His path had been arduous and he’d earned America, so when Gödel found a paradox at the heart of the Constitution, his desire to rectify it was born from patriotic duty. At the hearing, the judge asked Gödel how it felt to become a citizen of a nation where it was impossible for the government to fall into anti-democratic tyranny. But it could, Gödel told him, and “I can prove it.” Apocryphally, Einstein kicked the logician’s chair and ended that syllogism.

Born in Austria-Hungary, a citizen of Czechoslovakia, Austria, Germany, and finally the United States, Gödel was a man whose very self-definition was mired in incompleteness, contradiction, and unknowability. Having once parsed logical positivism among luminaries such as Rudolf Carnap and Moritz Schlick, enjoying apfelstrudel and espresso at the Café Reichsrat on Rathausplatz while they discussed the philosophy of mathematics, Gödel now found himself eating apple pie and drinking weak coffee in the Yankee Doodle Tap Room on Nassau Street—and he was grateful. Gone were the elegant Viennese wedding-cake homes of the Ringstrasse, replaced with Jersey’s clapboard colonials; no more would Gödel debate logic among the rococo resplendence of the University of Vienna, but at Princeton he was at least across the hall from Einstein. “The Institute was to be a new kind of research center,” writes Ed Regis in Who Got Einstein’s Office?: Eccentricity and Genius at the Institute for Advanced Study. “It would have no students, no teachers, and no classes,” the only responsibility being pure thought, so that its fellows could be wholly devoted to theory. Its director J. Robert Oppenheimer (of Manhattan Project fame) called it an “intellectual hotel”; physicist Richard Feynman was less charitable, referring to it as a “lovely house by the woods” for “poor bastards” no longer capable of keeping up. Regardless, it was to be Gödel’s final home, and there was something to that.

Seventeen years before his trip to Trenton, it was at the Café Reichsrat that he presented the discovery with which he’d forever be inextricably linked—Gödel’s Incompleteness Theorems. In 1930 Gödel irrevocably altered mathematics by demonstrating that the dream of completism that had dogged deduction since antiquity was only a mirage. “Any consistent formal system,” argues Gödel in his first theorem, “is incomplete… there are statements of the language… which can neither be proved nor disproved.” In other words, no consistent set of axioms rich enough to express arithmetic can prove every true statement within its own system—the rationalist dream of a unified, self-evidently provable mathematics is only so much fantasy. Math, it turns out, will never be depleted, since there can never be a solution to all mathematical problems. In Gödel’s formulation, a system must either sometimes produce falsehoods, or it must sometimes generate unprovable truths, but it can never consistently render only completely provable truths. As the cognitive scientist Douglas Hofstadter explained in his countercultural classic Gödel, Escher, Bach: An Eternal Golden Braid, “Relying on words to lead you to the truth is like relying on an incomplete formal system to lead you to the truth. A formal system will give you some truth, but… a formal system, no matter how powerful—cannot lead to all truths.” In retrospect, the smug certainties of American exceptionalism should have been no match for Gödel, whose scalpel-like mind had already eviscerated mathematics, philosophy, and logic, to say nothing of some dusty parchment once argued over in Philadelphia.

His theorems rest on a variation of what’s known as the “Liar’s Paradox,” which asks what the logical status of a proposition such as “This statement is false” might be. If that sentence is telling the truth, then it must be false, but if it’s false, then it must be true, ad infinitum, in an endless loop. For Gödel, that proposition is amended to “This sentence is not provable,” and his reasoning demonstrates that a sufficiently powerful formal system can’t settle that proposition: if the statement could be proved, it would be false and the system inconsistent; if it can’t be proved, then it’s a truth forever beyond proof—another grueling loop. As with the Constitution and its paeans to democracy, so must mathematics be rendered perennially useful while still falling short of perfection. The elusiveness of certainty bedeviled Gödel throughout his life; a famously paranoid man, the assassination of his friend Schlick by a Nazi student in 1936 pushed the logician into a scrupulous anxiety. After the death of his best friend Einstein in 1955 he became increasingly isolated. “Gödel’s sense of intellectual exile deepened,” explains Rebecca Goldstein in Incompleteness: The Proof and Paradox of Kurt Gödel. “The young man in the dapper white suit shriveled into an emaciated man, entombed in a heavy overcoat and scarf even in New Jersey’s hot humid summers, seeing plots everywhere… His profound isolation, even alienation, from his peers provided fertile soil for that rationality run amuck which is paranoia.” When his beloved wife fell ill in 1977, Gödel quit eating since she could no longer prepare his meals. The ever-logical man whose entire career had demonstrated the fallibility of rationality had concluded that only his wife could be trusted not to poison his food, and so when she was unable to cook, he properly reasoned (by the axioms that were defined) that it made more sense to simply quit eating. When he died, Gödel weighed only 65 pounds.
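For the programmatically inclined, the liar’s loop can even be rendered literally. What follows is a minimal sketch of my own devising—no such program appears in Gödel or Hofstadter—showing that the liar sentence’s single rule, self-negation, can never settle on a stable truth value:

```python
# A minimal illustrative sketch (not Gödel's formalism): the liar sentence
# "This statement is false" asserts its own negation, so evaluating it
# under an assumed truth value just flips that value forever.

def liar_step(assumed: bool) -> bool:
    """Apply the sentence's own rule: 'I am false' means negation."""
    return not assumed

value = True
for step in range(6):
    print(f"step {step}: the sentence is {value}")
    value = liar_step(value)

# The output alternates True, False, True, ... without settling, because
# liar_step(v) == v has no boolean solution -- the endless loop the essay
# describes, rendered as code.
```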

Gödel’s thought was enmeshed in that orphan of logic that we call paradox. As was Einstein’s, that man who converted time into space and space into time, who explained how energy and mass were the same thing so that (much to his horror) the apocalyptic false dawn of Hiroshima was the result. Physics in the 20th century had cast off the intuitive coolness of classical mechanics, discovering that contradiction studded the foundation of reality. There was Werner Heisenberg with his uncertainty principle, by which a particle’s position and momentum can never both be exactly known; Louis de Broglie and the strange combination of wave and particle that described matter as much as light; Niels Bohr, who understood electrons as if they were smeared across space; and the collapsing wave functions of Erwin Schrödinger, for whom it could be imagined that a hypothetical feline was capable of being simultaneously alive and dead. Science journalist John Gribbin explains in Schrödinger’s Kittens and the Search for Reality: Solving the Quantum Mysteries that contemporary physics is defined by “paradoxical phenomena as photons (particles of light) that can be in two places at the same time, atoms that go two ways at once… [and how] time stands still for a particle moving at light speed.” Western thought has long prized logical consistency, but physics in the 20th century abolished all of that in glorious absurdity, and from those contradictions emerged modernity—the digital revolution, semiconductors, nuclear power, all built on paradox.

The keystone of classical logic is the so-called “Law of Non-Contradiction.” Simply put, something cannot both be and not be what it happens to be simultaneously, or if symbolic logic is your jam: ¬(p ∧ ¬p), and I promise you that’s the only formula you will see in this essay. Aristotle said that between two contradictory statements one must be correct and the other false—“it will not be possible to be and not to be the same thing,” he writes in The Metaphysics—but the anarchic potential of the paradox greedily desires both truth and its antithesis. Again, in the 17th century the philosopher Gottfried Wilhelm Leibniz tried to succinctly ward off contradiction in his New Essays on Human Understanding when he declared, “Every judgement is either true or false,” and yet paradoxes fill the history of metaphysics like landmines studded across the Western Front. Paradox is the great counter-melody of logic—it is the question of whether an omnipotent God could will Himself unable to do something, and it’s the eye-straining M.C. Escher lithograph “Waterfall” with its intersecting Penrose triangles showing a stream cascading from an impossible trough. Paradox is the White Queen’s declaration in Lewis Carroll’s Through the Looking-Glass that “sometimes I’ve believed as many as six impossible things before breakfast,” and the Church Father Tertullian’s creedal statement that “I believe it because it is absurd.” The cracked shadow logic of our intellectual tradition, paradox is confident though denounced by philosophers as sham-faced; it is troublesome and not going anywhere. When a statement is made synonymous with its opposite, then traditional notions of propriety are dispelled and the fun can begin. “But one must not think ill of the paradox,” writes Søren Kierkegaard in Philosophical Fragments, “for the paradox is the passion of thought, and the thinker without the paradox is like the lover without passion: a mediocre fellow.”

As a concept, it may have found its intellectual origin on the sunbaked, dusty, scrubby, hilly countryside of Crete. The mythic homeland of the Minotaur, who is man and beast, human and bull, a walking, thinking, raging horned paradox covered in cowhide and imprisoned within the labyrinth. Epimenides, an itinerant philosopher some seven centuries before Christ, supposedly said that “All Cretans are liars” (St. Paul actually quotes this assertion in his epistle to Titus). A version of the aforementioned Liar’s Paradox thus ensues. If Epimenides is telling the truth then he is lying, and if he is lying then he is telling the truth. This class of paradoxes has multiple variations (in the Middle Ages they were known as “insolubles”—the unsolvable). For example, consider two sentences vertically arranged; the upper one reads “The statement below is true” and the lower says “The statement above is false,” and again the reader is caught in a maddening feedback loop. Martin Gardner, who for several decades penned the delightful “Mathematical Games” column in Scientific American, asks in Aha! Gotcha: Paradoxes to Puzzle and Delight, “Why does this form of the paradox, in which a sentence talks about itself, make the paradox clearer? Because it eliminates all ambiguity over whether a liar always lies and a truth-teller always tells the truth.” The paradox is a function of language, and in that way is the cousin to tautology, save that the former describes propositions that are necessarily both true and false, while the latter describes those that are always merely true.

Some intrinsic meaning is elusive in all of this, so that it would be easy to reject the whole business as rank stupidity, but paradoxes provide a crucial service. In paradox, we experience the breakdown of language and of literalism. Whether paradoxes are glitches in how we arrange our words or symptoms of something more intrinsic, they signify a null-space where the regular ways of thinking, of understanding, of writing, no longer hold. Few crafters of the form are as synonymous with paradox as the fifth-century BCE philosopher Zeno of Elea. Consider his famed dichotomy paradox, wherein Zeno concludes that motion itself must be impossible, since the movement from point A to point B always necessitates a halving of distance, forever (and so the destination itself can never be reached)—though, as the sketch after this paragraph suggests, those infinite halvings quietly sum to a finite whole. Or his celebrated arrow paradox, wherein Aristotle explains in Physics that “If everything when it occupies an equal space is at rest at that instant of time, and if that which is in location is always occupying such a space at any moment, the flying arrow is therefore motionless at that instant of time and at the next instant of time.” And yet the arrow still moves. Roy Sorensen explains in A Brief History of the Paradox that the form “developed from the riddles of Greek folklore” (as with the Sphinx’s famous query in Sophocles’s Oedipus Rex), so that words have always mediated these conundrums, while Anthony Gottlieb writes in The Dream of Reason: A History of Philosophy from the Greeks to the Renaissance that “ingenious paradoxes… try to discredit commonsense views by demonstrating that they lead to unacceptable consequences,” in a gambit as rhetorical as it is analytical. Often connected primarily with mathematics and philosophy, paradox is fundamentally a literary genre, and one ironically (or paradoxically?) associated with the failure of language itself. All of the great authors of paradox—the pre-Socratics, Zen masters, Jesus Christ—were at their core storytellers; they were writers. Words stretched to incomprehension and narratives unspooled are their fundamental medium. Epimenides’s utterance triggers a collapse of meaning, but where the literal perishes there is room made for the figurative. Paradox is the mother of poetry.
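And for those who prefer arithmetic to metaphysics, here is a companion sketch—again my own illustration, nothing Zeno or his commentators wrote—of why the dichotomy’s infinitely many halvings never add up to more than the single journey they compose:

```python
# A numeric sketch of the dichotomy paradox: sum the successive halvings
# of a unit distance and watch the total approach (but never exceed) 1.

total = 0.0
remaining = 1.0  # the full distance from point A to point B

for halving in range(1, 31):
    step = remaining / 2   # Zeno's next halving
    total += step
    remaining -= step

print(f"after 30 halvings: {total:.10f}")  # -> 0.9999999991
# The series 1/2 + 1/4 + 1/8 + ... converges to exactly 1, which is why
# infinitely many sub-journeys can still compose one finite trip.
```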

I’d venture that the contradictions of life are the subject of all great literature, but paradoxes appear in more obvious forms, too. “There was only one catch and that was Catch-22,” writes Joseph Heller. The titular regulation of Heller’s Catch-22 concerns the mental state of American pilots fighting in the Mediterranean during the Second World War: a pilot who asks to be grounded because of mental infirmity has, by the very request, demonstrated his own sanity—since anyone who would want to fly must clearly be insane—so that it’s impossible to avoid fighting. The captain was “moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle.” Because politics is often the collective social function of reductio ad absurdum, political novels make particularly adept use of paradox. George Orwell did something similar in his celebrated (and oft-misinterpreted) novel of dystopian horror 1984, wherein the state apparatus trumpets certain commandments, such as “War is peace. /Freedom is slavery. /Ignorance is strength.” Perhaps such dialectics are the (non-Marxist) socialist Orwell’s parody of Hegelian double-speak, a mockery of that supposed engine of human progress that goes through thesis-antithesis-synthesis. Within paradox there is a certain freedom, the ability to understand that contradiction is an attribute of our complex experience, but when statements are also defined as their opposite, meaning itself can be the casualty. Paradox understood as a means to enlightenment bestows anarchic freedom; paradox understood as an end unto itself is nihilism.

Political absurdities are born out of the inanity of rhetoric and the severity of regulation, but paradox can entangle not just society but the fabric of reality as well. Science fiction is naturally adept at examining the snarls of existential paradox, with time travel a favored theme. Paul Nahin explains in Time Machines: Time Travel in Physics, Metaphysics, and Science Fiction that temporal paradoxes are derived from the simple question of “What might happen if a time traveler changed the past?” This might seem an issue of entirely hermetic concern, save that in contemporary physics neither general relativity nor quantum mechanics precludes time travel (indeed, certain interpretations of those theories downright necessitate it). So even the idea of being able to move freely through past, present, and future has implications for how reality is constituted, whether or not we happen to be the ones stepping out of the tesseract. “The classic change-the-past paradox is, of course, the so-called grandfather paradox,” writes Nahin, explaining that it “poses the question of what happens if an assassin goes back in time and murders his grandfather before his (the time-travelling murderer’s) own father is born.” The grandfather’s murder requires a murderer, but for that murderer to be born, the grandfather must not be murdered—in which case the murderer lives to travel back and kill his ancestor, and again we’re in a strange loop.

Variations exist as far back as the golden age of the pulps, appearing in magazines like Amazing Stories as early as 1929. More recently, Ray Bradbury explored the paradox in “A Sound of Thunder,” where he is explicit about the paradoxical implication that any travel to the past will alter the future in baroque ways, with a 21st-century tourist accidentally killing a butterfly in the Cretaceous, leading to the election of an openly fascistic U.S. president millions of years into the future (though the divergence of parallel universes is often proffered as a means of avoiding such implications). In Bradbury’s estimation, every single thing in history, every event, every incident, is “an exquisite thing,” so that even the death of a butterfly is a “small thing that could upset balances and knock down a line of small dominoes and then big dominoes and then gigantic dominoes all down the years across Time.” This conundrum need not only be phrased in patricidal terms, for what all temporal paradoxes have at their core is an issue of causality—if we imagine that time progresses from past through future, then what happens when those terms get all mixed up? How can we possibly understand a past that’s influenced by a future that in turn has been affected by the past?

Again, this is no issue of scholastic quibbling, for though we experience time as moving forward like one of Zeno’s arrows, the physics itself tells us that past, present, and future are constituted in entirely stranger ways. One version of the grandfather paradox involves, rather than grisly murder, the transfer of information from the future to the past; for example, in Tim Powers’s novel The Anubis Gates, a time traveler is stranded in the early 19th century. The character realizes that “I could invent things—the light bulb, the internal combustion engine… flush toilets.” But he abandons this hubris, for “any such tampering might cancel the trip I got here by, or even the circumstances under which my mother and father met.” Many readers will perhaps be aware of temporal paradoxes from Robert Zemeckis’s Back to the Future film trilogy (which, whatever the films lack in patricide, they make up for in Oedipal sentiments), notably a scene in which Marty McFly inadvertently introduces Chuck Berry to his own song “Johnny B. Goode.” Ignoring the troubling implication that a suburban white teenager had to somehow teach the Black inventor of rock ‘n’ roll his own music, Back to the Future presents a classic temporal paradox—if McFly first heard “Johnny B. Goode” on Berry’s records, and Berry first heard the song from McFly, then whence did the song actually come? (Perhaps from God.)

St. Augustine asks in the Confessions, “What is time, then? If nobody asks me, I know; but if I were desirous to explain it to one who should ask me, I plainly do not know.” Paradox sprouts from the fertile soil of our own incomprehension, and to its benefit there is virtually nothing that humans truly understand, at least not fully. Time is the oddest thing of all, if we honestly confront the enormity of it. I’m continually surprised that I can’t easily walk into 1992 as if it were a room in my house. No surprise then that time and space are so often explored in the literature of paradox. Oxymoron and irony are the milquetoast cousins of paradox, but poetry at its most polished, pristine, and adamantine elevates contradiction into an almost religious principle. Among the 17th-century poets who worked in the shadow of John Donne, paradox was often a central aspect of what critics have called a “metaphysical conceit.” These brilliant, crystalline rhetorical turns are often like Zeno’s paradoxes rendered into verse, expanding and compressing time and space with a dialectical glee. An example comes from the good Dr. Donne, master of both enigma and the erotic, who in his poem “The Good-Morrow” imagined two lovers who have made “one little room an everywhere.” The narrator and the beloved’s bed-chamber—perhaps there is heavy wooden paneling on the wall and a canopy bed near a fireplace burning green wood, a full moon shining through the mottled crown glass window—is as if a singularity where north, south, east, and west, past, present, and future, are all collapsed into a point. Even more obvious is Donne in “The Paradox,” wherein he writes that “Once I loved and died; and am now become/Mine epitaph and tomb;/Here dead men speak their last, and so do I,” the talking corpse its own absurdity made flesh.

So taken were the 20th-century scholars known as the New Critics with the ingenuity of metaphysical conceits that Cleanth Brooks would argue in his classic The Well Wrought Urn: Studies in the Structure of Poetry that the “language of poetry is the language of paradox.” Donne, Andrew Marvell, George Herbert, and Henry Vaughan used paradox as a theme and a subject—but to write poetry itself is paradoxical. To write fiction is paradoxical. Even to write nonfiction is paradoxical. To write at all is paradoxical. A similar sentiment concerning the representational arts is conveyed in the Belgian surrealist painter René Magritte’s much-parodied 1929 work “The Treachery of Images.” Magritte presents an almost absurdly recognizable smoking pipe, polished to a totemistic brown sheen with a shiny black mouthpiece, so plainly obvious that it might as well be from an advertisement, and beneath it he writes in cursive script “Ceci n’est pas une pipe”—“This is not a pipe.” A seemingly blatant contradiction, for what could the words possibly relate to other than the picture directly above them? But as Magritte told an interviewer, “if I had written on my picture ‘This is a pipe,’ I’d have been lying!” For you see, Magritte’s image is not a pipe; it is an image of a pipe. Like Zeno’s paradoxes, what may initially seem to be simple-minded contrarianism, a type of existential trolling if you will, belies a more subtle observation. The philosopher Michel Foucault writes in his slender volume This Is Not a Pipe that “Contradiction could exist only between two statements,” but that in the painting “there is clearly but one, and it cannot be contradictory because the subject of the propositions is a simple demonstrative.” According to Foucault, the picture, though self-referential, is not a paradox in the logical sense of the word. And yet there is an obvious contradiction between the viewer’s experience of the painting and the reality that they’ve not looked upon some carefully carved and polished pipe, but rather only brown and black oil applied to stretched canvas.

This, then, is the “treachery” of which Magritte speaks, the paradox that is gestated within that gulf where meaning resides, a valley strung between the-thing-in-itself and the way in which we represent the-thing-in-itself. Writing is in some ways even more treacherous than painting, for at least Magritte’s picture looks like a pipe—apart, perhaps, from some calligraphic art, literature appears as nothing so much as abstract squiggles. Moby-Dick is not a whale and Jay Gatsby is not a man. They are less than a picture of a pipe, for we have not even images of them, only ink-stained books, and the abject abstraction of mere letters. And yet the paradox is that from that nothingness is generated the most sumptuous something; just as the illusion of painting can trick one into the experience of the concrete, so does the more bizarre phenomenon of the literary imagination make you hallucinate characters that are generated from the non-figurative alphabet. From this essay, if I’ve done even a somewhat adequate job, you’ve hopefully been able to envision Gödel and Einstein bundled into a car on the road to Trenton, windows frosted with nervous breath and laughter, the sun rising over the wooded Pine Barrens—or to imagine John and Anne Donne bundled together under an exquisite blanket of red and yellow and blue and green, the heavy oak door of their chamber closed tight against the English frost—but of course you’ve seen no such thing. You’ve only skimmed through your phone while sitting on the toilet, or toggled back and forth between open tabs on your laptop. Literature is paradoxical because it necessitates the invention of entire realities out of the basest nothing; the treachery of representation is that “This is not a pipe” is a principle that applies to absolutely all of the written word, and yet when we read a novel or a poem we can smell the burning tobacco.

All of literature is a great enigma, a riddle, a paradox. What the Zen masters of Japanese Buddhism call a koan. Religion is too often maligned for being haunted by the hobgoblin straw-man of consistency, and yet the only real faith is one mired in contradiction, and few practices embrace paradox quite like Zen. Central to Zen is the breaking down of the dualities that separate all of us from absolute being, the distinction between the I and the not-I. As a means to do this, Zen masters deploy the enigmatic stories, puzzles, sayings, and paradoxes of the koan, with the goal of forcing the initiate toward the para-logical, a catalyst for the instantaneous enlightenment known as satori. The form is sometimes reduced to the “What is the sound of one hand clapping?” variety of puzzle (though that is indeed a venerable koan), but the monk and master D.T. Suzuki explains in An Introduction to Zen Buddhism that these apparently “paradoxical statements are not artificialities contrived to hide themselves behind a screen of obscurity; but simply because the human tongue is not an adequate organ for expressing the deepest truth of Zen, the latter cannot be made the subject of logical exposition; they are to be experienced in the inmost soul when they become for the first time intelligible.” A classic koan, attributed to the ninth-century Chinese monk Linji Yixuan, famously says “If you meet the Buddha, kill him.” Linji’s point is similar to Magritte’s—“This is not the Buddha.” It’s a warning against falling into the trap of representation, an injunction to resist the treachery of images, and yet the paradox is that the only way we have of communicating is through the fallible, inexact medium of words. Zen is the only religion whose purpose is to overcome religion, and everything else for that matter. It asks us to use its paradoxes as a ladder by which we can climb toward ultimate being—and then we’re to kick that ladder over. In its own strange way, literature is the ultimate koan, all of these novels and plays, poems and essays, all words, words, words meaning nothing and signifying everything, gesturing towards a Truth beyond truth, and yet nothing but artfully arranged lies (and even less than that, simply arrayed squiggles on a screen). To read is to court a type of enlightenment, of transcendence, and not just because of the questions literature raises, but because of literature’s very existence in the first place.

Humans are themselves the greatest of paradoxes: someone who is kind can harbor flashes of rage, the cruelest of people are capable of genuine empathy, our greatest pains often lead to salvation, and we’re sometimes condemned by that which we love. In a famous 1817 letter to his brothers, the English Romantic poet John Keats extolled the most sublime of literature’s abilities—to dwell in “uncertainties, mysteries, doubts, without any irritable reaching after fact and reason,” a quality that he called “negative capability.” There’s an irony in our present’s abandonment of nuance, for ours is a paradoxical epoch through and through—an era of unparalleled technological superiority and appalling barbarity, of instantaneous knowledge and virtually no wisdom. A Manichean age as well—one that valorizes consistency above all other virtues, though it is that most suburban of values—yet Keats understood that if we’re to give any credit to literature, and for that matter any credit to people, we must be comfortable with complexity and contradiction. Negative capability is what separates the moral from the merely didactic. In all of our baroque complexity, paradox is the operative mode of literature, the only rhetorical gambit commensurate with displaying the full spectrum of what it means to be a human. We are all such glorious enigmas—creatures of finite dimension and infinite worth. None of us deserve grace, and yet all of us are worthy of it, a moral paradox that makes us beautiful not in spite of its cankered reality, but because of it. The greatest of paradoxes is that within that contradictory form, there is the possibility of genuine freedom—of liberation.


Circles of the Damned


Maybe during this broiling summer you’ve seen the footage—in one striking video, women and men stand dazed on a boat sailing away from the Greek island of Evia, watching as ochre flames consume their homes in the otherwise dark night. Similar hellish scenes are unfolding in Algeria, Tunisia, and Libya, as well as in Turkey and Spain. Siberia, an unlikely place for such a conflagration, is currently experiencing the largest wildfire in recorded history, joined by large portions of Canada. As California burns, the global nature of our immolation is underscored by horrific news around the world, a demonstration of the conclusion of the United Nations’ Intergovernmental Panel on Climate Change that such disasters are “unequivocally” anthropogenic, with the authors signaling a “code red” for the continuation of civilization. On Facebook, the mayor of a town in Calabria mourned that “We are losing our history, our identity is turning to ashes, our soul is burning,” and though he was writing specifically about the fires raging in southern Italy, it’s a diagnosis for a dying world as well.

Seven centuries ago, another Italian wrote in The Divine Comedy, “Abandon all hope ye who enter here,” which seems just as applicable in 2021 as it did in 1321. That exiled Florentine had similar visions of conflagration, describing how “Above that plain of sand, distended flakes/of fire showered down; their fall was slow –/as snow descends on alps when no wind blows… when fires fell, /intact and to the ground.” This September sees the 700th anniversary of both the completion of The Divine Comedy and the death of its author Dante Alighieri. But despite the chasms of history that separate us, his writing about how “the never-ending heat descended” holds a striking resonance. In our supposedly secular age, the singe of the inferno feels hotter now that we’ve pushed our planet to the verge of apocalyptic collapse. Dante, you must understand, is ever applicable in our years of plague and despair, tyranny and treachery.

People are familiar with The Divine Comedy's tropes even if they're unfamiliar with Dante, because all of them—the flames and sulphur, the mutilations and the shrieks, the circles of the damned and the punishments fitted to the sin, the descent into subterranean perdition and the demonic cacophony—find their origin with him. Neither Reformation nor revolution has dispelled the noxious fumes of the inferno. There must be a distinction between the triumphalist claim that Dante says something vital about the human condition, and the objective fact that in some ways Dante actually invented the human condition (or a version of it). When watching the videos of people escaping from Evia, it took me several minutes to understand what it was that I was looking at, and yet those nightmares have long existed in our culture, as Dante gave us a potent vocabulary to describe Hell centuries ago. For even to deny Hell is to deny something first completely imagined by a medieval Florentine.

Son of a prominent family, enmeshed in conflicts between the papacy and the Holy Roman Empire, a respected though ultimately exiled citizen, Dante resided far from the shores of modernity, though the broad contours of his poem, with its visceral conjurations of the afterlife, are worth repeating. "Midway upon the journey of our life/I found myself within a forest dark," Dante famously begins. At the age of 35, he descends into the underworld, guided by the ancient Roman poet Virgil and sustained by thoughts of his platonic love for the lady Beatrice. Inferno constitutes only the first third of The Divine Comedy—subsequent sections consider Purgatory and Heaven—yet that it is the most read speaks to something cursedly intrinsic in us. The poet descends like Orpheus, Ulysses, and Christ before him deep into the underworld, journeying through nine concentric circles, each more brutal than the previous. Perdition is a space of "sighs and lamentations and loud cries" filled with "Strange utterances, horrible pronouncements,/accents of anger, words of suffering,/and voices shrill and faint, and beating hands," its denizens buffeted "forever through that turbid, timeless air,/like sand that eddies when a whirlwind swirls." Cosmology is indistinguishable from ethics, so that each circle is dedicated to particular sins: the second circle is reserved for crimes of lust, the third for those of gluttony, the fourth for greed, the wrathful reside in the fifth circle, the sixth is the domain of the heretics, the seventh is for the violent, all those guilty of fraud live in the eighth, and at the very bottom that first rebel Satan is eternally punished alongside all traitors.

Though The Divine Comedy couldn't help but reflect the concerns of Dante's century, he formulated a poetics of damnation so tangible and disturbing that it's still the measure of hellishness, wherein he "saw one/Rent from the chin to where one breaks wind./Between his legs were hanging down his entrails;/His heart was visible, and the dismal sack/That makes excrement of what is eaten." Lest it be assumed that this is simply sadism, Dante is cognizant of how gluttony, envy, lust, wrath, sloth, covetousness, and pride could just as easily reserve him a space. Which is part of his genius; Dante doesn't just describe Hell, which in its intensity provides an unparalleled expression of pain, but he also manifests a poetry of justice, where he's willing to implicate himself (even while placing several of his own enemies within the circles of the damned).

No doubt the tortures meted out—being boiled alive for all eternity, forever swept up in a whirlwind, or masticated within the freezing mouth of Satan—are monstrous. The poet doesn't disagree—often he expresses empathy for the condemned. But the disquiet that we and our fellow moderns might feel is in part born out of a broad theological shift that occurred over the centuries in how people thought about sin. During the Reformation, both Catholics and Protestants began to move the model of sin away from the Seven Deadly Sins, and towards the more straightforward Ten Commandments. For sure there was nothing new about the Decalogue, and the Seven Deadly Sins haven't exactly gone anywhere, but what took hold—even subconsciously—was a sense that sins could be reduced to a list of literal injunctions. Don't commit adultery, don't steal, don't murder. Because we often think of sin as simply a matter of broken rules, the psychological acuity of Dante can be obscured. But the Seven Deadly Sins are rather more complicated—we all have to eat, but when does it become gluttony? We all have to rest, but when is that sloth?

An interpretative brilliance of the Seven Deadly Sins is that they explain how an excess of otherwise necessary human impulses can pervert us. Every human must eat; most desire physical love; we all need the regeneration of rest—but when we slide into gluttony, lust, sloth, and so on, it can feel as if we're sliding into the slime that Dante describes. More than a crime, sin is a mental state that causes pain—both within the person who is guilty and in those who suffer because of that person's actions. In Dante's portrayal of the stomach-dropping, queasy, never-ending uncertainty of the lives of the damned, the poet conveys a bit of their inner predicament. The Divine Comedy isn't some punitive manual, a puritan's little book of punishments. Rather than a catalogue of what tortures match which crimes, Dante's book expresses what sin feels like. Historian Jeffrey Burton Russell writes in Lucifer: The Devil in the Middle Ages how in hell "we are weighted down by sin and stupidity… we sink downward and inward… narrowly confined and stuffy, our eyes gummed shut and our vision turned within ourselves, drawn down, heavy, closed off from reality, bound by ourselves to ourselves, shut in and shut off… angry, hating, and isolated."

If such pain were only experienced by the guilty, that would be one thing, but sin affects all within the human community who suffer as a result of pride, greed, wrath, and so on. There is a reason why the Seven Deadly Sins are what they are. In a world of finite resources, to valorize the self above all others is to take food from the mouths of the hungry, to hoard wealth that could be distributed to the needy, to claim vengeance as one's own when it is properly the purview of society, to enjoy recreation upon the fruits of somebody else's labor, to reduce another human being to a mere body, to covet more than you need, or to see yourself as primary and all others as expendable. Metaphor is the poem's currency, and what's more real than the intricacies of how organs are pulled from orifices is how sin—that disconnect between the divine and humanity—is experienced. You don't need to believe in a literal hell—I don't—to see what's radical in Dante's vision. What Inferno offers isn't just grotesque descriptions, increasingly familiar though they may be on our warming planet, but also a model of thinking about responsibility to each other in a connected world.

Such is the feeling of the anonymous collective known as The Invisible Committee, whose anarchist manifesto—a surprise hit when published in France—opined that "No bonds are innocent" in our capitalist era, for "We are already situated within the collapse of a civilization," structuring their tract around the circles of Dante's inferno. Since its composition, The Divine Comedy has run like a molten vein through culture both rarefied and popular; from being considered by T.S. Eliot, Samuel Beckett, Primo Levi, and Derek Walcott, to being referenced in horror films, comic books, and rock music. Allusion is one thing, but what we can see with our eyes is another—as novelist Francine Prose writes in The Guardian, those images of people fleeing from Greek wildfires are "as if Dante filmed the Inferno on his iPhone." For centuries artists have mined Inferno for raw materials, but now in the sweltering days of the Anthropocene we are enacting it. To note that our present appears as a place where Hell has been pulled up from the bowels of the earth is a superficial observation, for though Dante presciently gives us a sense of what perdition feels like, he crucially also provided a means to identify the wicked.

Denizens of the nine circles were condemned because they worshiped the self over everybody else; now the rugged individualism that is the heretical ethos of our age has made man-made apocalypse probable. ExxonMobil drills for petroleum in the heating Arctic and the apocalyptic QAnon cult proliferates across the empty chambers of Facebook and Twitter; civil wars are fought in the Congo over the tin, tungsten, and gold in the circuit boards of the Android you're reading this essay with; children in Vietnam and Malaysia sew the clothing we buy at The Gap and Old Navy; and the simplest request for people to wear masks so as to protect the vulnerable goes unheeded in the name of "freedom," as our American Midas Jeff Bezos barely flies to outer space while workers in Amazon warehouses are forced to piss in bottles rather than be granted breaks. Responsibility for our predicament is unequally distributed: those in the lowest circle are the ones who belched out carbon dioxide for profit knowing full well the effects, those who promoted a culture of expendable consumerism and valorized the rich at the expense of the poor. Late capitalism's operative commandment is to pretend that all seven of the deadly sins are virtues. Literary scholar R.W.B. Lewis describes the "Dantean principle that individuals cannot lead a truly good life unless they belong to a good society," which means that all of us are in a lot of trouble. Right now, the future looks a lot less like paradise and more like inferno. Dante writes that "He listens well who takes notes." Time to pay attention.

Bonus Link:—Is There a Poet Laureate of the Anthropocene?

Image Credit: Wikipedia

Who’s There?: Every Story Is a Ghost Story

-

“One need not be a chamber to be haunted.” —Emily Dickinson

Drown Memorial Hall was only a decade old when it was converted into a field hospital for students stricken with the flu in the autumn of 1918. A stolid, grey building of three stories and a basement, Drown Hall sits half-way up South Mountain, where it looks over the Lehigh Valley to the federal portico of the white-washed Moravian Central Church across the river, and the hulking, rusting ruins of Bethlehem Steel a few blocks away. Composed of stone the choppy texture of the north Atlantic in the hour before a squall, with yellow windows buffeted by mountain hawks and grey Pennsylvania skies. Built in honor of Lehigh University's fourth president, a mustachioed Victorian chemistry professor, Drown was intended as a facility for leisure, exercise, and socialization, housing (among other luxuries) bowling alleys and chess rooms. Catherine Drinker Bowen enthused in her 1924 History of Lehigh University that Drown exuded "dignity and, at the same time, a certain at-home-ness to every function held there," that the building "carries with it a flavor and spice which makes the hotel or country club hospitality seem thin, flat and unprofitable." If Drown was a monument to youthful exuberance, innocent pluck, and boyish charm, then by the height of the pandemic it had become a cenotaph to cytokine storms. Only a few months after basketballs and Chuck Taylors skidded across its gymnasium floor, those same men would lie on cots hoping not to succumb to the illness. Twelve men would die of the influenza in Drown.

After its stint as a hospital, Drown would return to being a student center, then the Business Department, and by the turn of our century the English Department. It was in that final purpose that I got to know Drown a decade ago, when I was working on my PhD. Toward the end of my first year, I had to go to my office in the dusk after-hours, when lurid orange light breaks through the cragged and twisted branches of still leafless trees in the cold spring, looking like nothing so much as jaundiced fingers twisting the black bars of a broken cage, or the spindly embers of a church's burnt roof, fires still crackling through the collapsing wood. I had to print a seminar paper for a class on 19th-century literature, and then quickly adjourn to my preferred bar. When I keyed into the locked building, it was empty, silent save for the eerie neon hum of the never-used vending machines and the unnatural pooling luminescence of perennially flickering fluorescent lights in the stairwells at either end of the central hall. While in a basement computer lab, I suddenly heard a burst of noise upstairs travel from one end of the hall and rapidly progress towards the other—the unmistakable sound of young men dribbling a basketball. Telling myself that it must be the young children of one of the department's professors, I shakily ascended. As soon as I got to the top the noise ceased. The lights were out. The building was still empty. Never has an obese man rolled down that hill quite as quickly as I did in the spring of 2011.

There are several rational explanations—students studying in one of the classrooms even after security would have otherwise locked up. Or perhaps the sound did come from some faculty kids (though to my knowledge nobody was raising adolescents at that time). Maybe there was something settling strangely, concrete shifting oddly or water rushing quickly through a pipe (as if I didn't know the difference between a basketball game and a toilet flushing). Though depleted of all explanations, I know what I heard and what it sounded like, and I still have no idea what it was. Nor is this the only ghost story that I could recount—there was the autumn of 2003 when, walking back at 2 a.m. after the close of the library at Washington and Jefferson College, feet unsteady on slicked brown leaves blanketing the frosted sidewalk, I noted an unnatural purple light emanating from a half-basement window of Macmillan Hall, built in 1793 (having been the encampment of Alexander Hamilton during the Whiskey Rebellion) and the oldest university building west of the Alleghenies. A few seconds after observing the shining, I heard a high-pitched, unnatural, inhuman banshee scream—some kind of poltergeist cry—and being substantially thinner in that year I was able to book it quickly back to my dorm. Or in 2007, while I was living in Scotland, when I toured the cavernous subterranean vaults underneath the South Bridge between the Old and New towns of Edinburgh, and I saw a young chav, who decided to make obscene hand gestures within a cairn that the tour guide assured us had "evil trapped within it," later break down as if he were being assaulted by some unseen specter. Then there was the antebellum farmhouse in the Shenandoah Valley that an ex-girlfriend lived in, one room being so perennially cold and eerie that nobody who visited ever wanted to spend more than a few minutes in it. A haunted space in a haunted land, where something more elemental than intellect screams at you that something cruel happened there.

Paranormal tales are popular, even among those who'd never deign to believe in something like a poltergeist, because they speak to the ineffable that we feel in those arm-hair-raised, scalp-shrinking, goose-bumped moments when we can't quite fully explain what we felt, or heard, or saw. I might not actually believe in ghosts, but when I hear the dribbling of a basketball down an empty and dark hall, I'm not going to stick around to find out what it is. Nobody is ever fully a skeptic when they're alone in a haunted house. Count me on the side of science journalist Mary Roach, who in Spook: Science Tackles the Afterlife writes that "I guess I believe that not everything we humans encounter in our lives can be neatly and convincingly tucked away inside the orderly cabinetry of science. Certainly, most things can… but not all. I believe in the possibility of something more." Haunting is by definition ambiguous—if with any certainty we could say that the supernatural was real it would, I suppose, simply be the natural.

Franco-Bulgarian philosopher Tzvetan Todorov formulated a critical model of the supernatural in his study The Fantastic: A Structural Approach to a Literary Genre, in which he argued that stories about unseen realms could be divided between the "uncanny" and the "marvelous." The former are narrative elements that can ultimately be explained rationally, i.e., supernatural plot points that prove to be dreams, hallucinations, drug trips, hoaxes, illusions, or anything unmasked by the Scooby-Doo gang. The latter are things that are actually occult, supernatural, divine. When it's unclear whether a given incident in a story is uncanny or marvelous, then it's in that in-between space of the fantastic, which is where any honestly recounted ghostly experience must be categorized. "The fantastic is that hesitation experienced by a person who knows only the laws of nature, confronting an apparently supernatural event," writes Todorov, and that is a succinct description of my Drown anomaly. Were it simply uncanny, then I suppose my spectral fears would have been assuaged if upon my ascent I had found a group of living young men playing impromptu pick-up basketball. For that experience to be marvelous, I'd have to know beyond any doubt that what I heard were actual spirits. As it is, it's the uncertainty of the whole event—the strange, spooky, surreal ambiguity—that makes the incident fantastic. "What I'm after is proof," writes Roach. "Or evidence, anyway—evidence that some form of disembodied consciousness persists when the body closes up shop. Or doesn't persist." I've got no proof or disproof either, only the distant memory of sweaty palms and a racing heart.

Ghosts may haunt chambers, but they also haunt books; they might float through the halls of Drown, but they even more fully possess the books that line that building's shelves. Traditional ghosts animate literature, from the canon to the penny dreadful, including what the Victorian critic Matthew Arnold grandiosely termed the "best which has been thought and said" as well as lurid paperbacks with their garish covers. We're so obsessed with something seen just beyond the field of vision, something that vibrates at a frequency human ears can't quite detect—from medieval Danish courts to the Overlook Hotel, Hill House to the bedroom of Ebenezer Scrooge—that we're perhaps liable to wonder if there is something to ghostly existence. After all, places are haunted, lives are haunted, stories are haunted. Such is the nature of ghosts; we may overlook their presence, their flitting and meandering through the pages of our canonical literature, but they're there all the same (for a place can be haunted whether you notice it or not).

How often do you forget that the greatest work in the language is basically a ghost story? William Shakespeare's Hamlet is fundamentally a good old-fashioned yarn about a haunted house (in addition to being a revenge tragedy and pirate tale). The famed soliloquy of the Danish prince dominates our cultural imagination, but the most cutting bit of poetry is the eerie line that begins the play: "Who's there?" Like any good supernatural tale, Hamlet begins in confusion and disorientation, as the sentry Bernardo, arriving on Elsinore's ramparts where he and Marcellus will soon espy the silent ghost of the murdered king, utters the shaky two-word interrogative. Can you imagine being in the audience, sometime around 1600 when it was a widespread belief that there are more things in heaven and earth than can be dreamt of in our philosophies, and hearing the quivering question asked in the darkness, faces illuminated by tallow candle, the sense that there is something just beyond our experience that has come from beyond? The status of Hamlet's ghost is ambiguous; some critics have interpreted the specter as a product of the prince's madness, others claim that the spirit is clearly real. Such uncertainty speaks to what's fantastic about the ghost, as ambiguity haunts the play. Notice that Bernardo doesn't ask "What's there?" His question is phrased towards a personality with agency, even as the immaterial spirit of Hamlet's dead father exists in some shadow-realm between life and death.

A ghost's status was no trifling issue—it got to the core of salvation and damnation. Protestants didn't believe that souls could wander the earth; the dead would either be rewarded in heaven or punished in hell, so any ghost must necessarily be a demon. Yet Shakespeare's play seems to make clear that Hamlet's father has indeed returned, perhaps as an inhabitant of that way station known as purgatory, that antechamber to eternity whereby the ghost can ascend from the Bardo to skulk around Elsinore for the space of a prologue. Of course, when Shakespeare wrote Hamlet, ostensibly good Protestant that he was, he should have held no faith in purgatory, that abode of ghosts being in large part what caused Luther to nail his theses to the door of Wittenberg's Castle Church. When the final Thirty-Nine Articles of the Church of England were ratified in 1571 (three decades before the play's premiere), it was Article 22 that declared belief in purgatory a "thing vainly invented, and grounded upon no warranty of scripture; but rather repugnant to the word of God." According to Stephen Greenblatt's argument in Hamlet in Purgatory, the ghost isn't merely Hamlet's father, but also a haunting from the not-so-distant Catholic past, which the official settlement had supposedly stripped away along with rood screens and bejeweled altars. Elsinore's haunting is not just that of King Hamlet's ghost, but also of those past remnants that the reformers were unable to completely bury. Greenblatt writes that for Shakespeare purgatory "was a piece of poetry" drawn from a "cultural artery" whereby the author had released a "startling rush of vital energy." There is a different set of ambiguities at play in Hamlet, not least of which is how this "spirit of health or goblin damned" is to be situated between orthodoxy and heresy. In asking "who" the ghost is, Bernardo is simultaneously asking what it is, where it comes from, and how such a thing can exist. So simple, so understated, so arresting is the first line of Hamlet that I'm apt to say that Bernardo's question is the great concern of all supernatural literature, if not all literature. Within Hamlet there is a tension between the idea of survival and extinction, for though the prince calls death the "undiscovered country from whose bourn no traveler returns," he himself must know that's not quite right. After all, his own father came back from the dead (a role that Shakespeare is said to have played himself).

Shakespeare's ghoul is less ambiguous than those of Charles Dickens, for the ghost of Jacob Marley who visits Ebenezer Scrooge in A Christmas Carol is accused of simply being an "undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of underdone potato. There's more of gravy than of grave about you, whatever you are!" Note that the querying nature of the final clause ends in an exclamation rather than a question mark, and there's no asking who somebody is, only what they are. Because A Christmas Carol has been filtered through George C. Scott, Patrick Stewart, Bill Murray, and the Muppets, there is a tendency to forget just how terrifying Dickens's novella actually is. The ultimately repentant Scrooge and his visitations from a trinity of moralistic specters offer up visions of justice that are gothic in their capacity to unsettle. The neuter sprite that is the Ghost of Christmas Past with its holly and its summer flowers; the hail-fellow-well-met Bacchus that is the Ghost of Christmas Present; and the grim memento mori visage of the reaper who is the Ghost of Christmas Yet to Come (not to mention Marley padlocked and in fetters). Dickens is in the mold of Dante, for whom haunting is its own form of retribution, a means of purging us of our iniquities and allowing for redemption. Andrew Smith writes in The Ghost Story 1840-1920: A Cultural History that "Dickens's major contribution to the development of the ghost story lies in how he employs allegory in order to encode wider issues relating to history, money, and identity." Morality itself—the awesome responsibility impinging on us every second of existence—can be a haunting. The literal haunting of Scrooge is more uncertain, for perhaps the ghosts are gestated from madness, hallucination, nightmare, or, as he initially said, indigestion. Dickens's ghosts are ambiguous—as they always must be—but the didactic sentiment of A Christmas Carol can't be.

Nothing is more haunting than history, especially a wicked one, and few tales are as cruel as that of the United States. Gild the national narrative all that we want, American triumphalism is a psychological coping mechanism. This country, born out of colonialism, genocide, slavery, is a massive haunted burial ground, and we all know that graveyards are where ghosts dwell. As Leslie Fiedler explained in Love and Death in the American Novel, the nation itself is a "gothic fiction, nonrealistic and negative, sadist and melodramatic—a literature of darkness and the grotesque." America is haunted by the weight of its injustice; on this continent are the traces of the Pequot and Abenaki, Mohawk and Mohegan, Apache and Navajo whom the settlers murdered; in this country are the filaments of women and men held in bondage for three centuries, and from every tree hang those murdered by our American monstrosity. That so many Americans willfully turn away from this—the Faustian bargain that demands acquiescence—speaks not to the absence of haunting; to the contrary, it speaks of how we live among the possessed still, a nation of demoniacs. William Faulkner's observation in Requiem for a Nun that "The past is never dead. It's not even past" isn't any less accurate for being so omnipresent. No author conveyed the sheer depth and extent of American haunting quite like Toni Morrison, who for all that she accomplished must also be categorized among the greatest authors of ghost stories. To ascribe such a genre to a novel like Morrison's Beloved is to acknowledge that the most accurate depictions of our national trauma have to be horror stories if they're to tell the truth. "Anything dead coming back to life hurts"—there's both wisdom and warning in Morrison's adage.

Beloved's plot is as chilling as an autumnal wind off of the Ohio River, the story of the formerly enslaved woman Sethe, whose Cincinnati home is haunted by the ghost of her murdered child, sacrificed in the years before the Civil War to keep the child from being returned to bondage in Kentucky. Canonical literature and mythology have explored the cruel incomprehension of infanticide—think of Euripides's Medea—but Sethe's not irrational desire to send her "babies where they would be safe" is why Beloved's tragedy is so difficult to contemplate. When a mysterious young woman named Beloved arrives, Sethe becomes convinced that the girl is the spirit of her murdered child. Ann Hostetler, in her essay from the collection Toni Morrison: Memory and Meaning, writes that Beloved's ghost "was disconcerting to many readers who expected some form of social or historical realism as they encountered the book for the first time." She argues, however, that the "representation of history as the return of the repressed…is also a modernist strategy," whereby "loss, betrayal, and trauma is something that must be exorcized from the psyche before healing can take place." Just because we might believe ourselves to be done with ghosts doesn't mean that ghosts are done with us. Phantoms so often function as the allegorical because whether or not specters are real, haunting very much is. We're haunted by the past, we're haunted by trauma, we're haunted by history. "This is not a story to pass on," Morrison writes, but she understands better than anyone that we can't help but pass it on.

Historical trauma is more occluded in Stephen King's The Shining, though that hasn't stopped exegetes from interpreting the novel about homicide in an off-season Colorado resort as being concerned with the dispossession of Native Americans, particularly in the version of the story as rendered by director Stanley Kubrick. The Overlook Hotel is built upon a Native American burial ground, and Navajo and Apache wall hangings are scattered throughout the resort. Such conjectures about The Shining are explored with delighted aplomb in director Rodney Ascher's documentary Room 237 (named after the most haunted place in the Overlook), but as a literary critical question, there's much that's problematic in asking what any given novel, or poem, or movie is actually about; an analysis is on much firmer ground when we're concerned with how a text works (a notably different issue). So, without discounting the hypothesis that The Shining is concerned with the genocide of indigenous Americans, the narrative itself tells a more straightforward story of haunting, as a bevy of spirits drive blocked, recovering alcoholic writer and aspiring family destroyer Jack Torrance insane. Kubrick's adaptation is iconic—Jack Nicholson as Torrance (with whom he shares a first name) breaking through a door with an axe; his son, young Danny Torrance, escaping through a nightmarish, frozen hedge maze; the crone in room 237 coming out of the bathtub; the blood pouring from the elevators; the ghostly roaring '20s speakeasy with its chilling bartender, and whatever the man in the bear costume was. Also, the twins.

Still, it could be observed that the only substantiated supernatural phenomenon is the titular "shining" that afflicts both Danny and the Overlook's gentle cook, Dick Hallorann. "A lot of folks, they got a little bit of shine to them," Dick explains. "They don't even know it. But they always seem to show up with flowers when their wives are feelin blue with the monthlies, they do good on school tests they don't even study for, they got a good idea how people are feelin as soon as they walk into a room." As hyper-empathy, the shining makes it possible that Danny is merely privy to a variety of psychotic breaks his father is having rather than those visions being actually real. While Jack descends further and further into madness, the status of the spectral beings' existence is ambiguous (a point not everyone agrees on, however). It's been noted that in the film, the appearance of a ghost is always accompanied by that of a mirror, so that The Shining's hauntings are really manifestations of Jack's fractured psyche. Narrowly violating my own warning concerning the question of "about," I'll note how much of The Shining is concerned with Jack's alcoholism, the ghostly bartender a psychic avatar of all that the writer has refused to face. Not just one of the greatest ghost stories of the 20th century, The Shining is also one of the great novels of addiction, an exploration of how we can be possessed by our own deficiencies. Mirrors can be just as haunted as houses. Notably, when King wrote The Shining, he was in the midst of his own full-blown alcoholism, so strung out that he barely remembers writing some of his doorstopper novels (he's now been sober for more than 30 years). As he notes in The Shining, "We sometimes need to create unreal monsters and bogies to stand in for all the things we fear in our real lives."

If Hamlet, A Christmas Carol, Beloved, and The Shining report on apparitions, then there are novels, plays, and poems that are imagined by their creators as themselves being chambers haunted by something in between life and death. Such a conceit offers an even more clear-eyed assessment of what's so unsettling about literature—this medium, this force, this power—capable of affecting what we think, and see, and say as if we ourselves were possessed. As a trope, haunted books literalize a profound truth about the written word, and uneasily push us towards acknowledging the innate spookiness of language, where the simplest of declarations is synonymous with incantation. Robert W. Chambers's collection of short stories The King in Yellow conjures one of the most terrifying examples of a haunted text, wherein an imaginary play that shares the title of the book is capable of driving its audience to pure madness. "Strange is the night where black stars rise,/And strange moons circle through the skies," reads verse from the play; innocuous, if eerie, though it's in the subtlety that the demons get you. Chambers would go on to influence H.P. Lovecraft, who conceived of his own haunted book in the form of the celebrated grimoire the Necronomicon, which he explains was "Composed by Abdul Alhazred, a mad poet of Sanaa in Yemen, who was said to have flourished during the period of the Ommiade caliphs, circa 700 A.D.," and who rendered into his secret book the dark knowledge of the elder gods who were responsible for his being "seized by an invisible monster in broad daylight and devoured horribly before a large number of fright-frozen witnesses." Despite the Necronomicon's fictionality, a multitude of occultists have claimed over the years that Lovecraft's haunted volume is based in reality (you can buy said books online).

Then there are the works themselves which are haunted. Shakespeare's Macbeth is the most notorious example, its themes of witchcraft long lending it an infernal air, with superstitious directors and actors calling it the "Scottish play" in lieu of its actual title, lest some of the spells within bring ruin to a production. Similar rumors dog Shakespeare's contemporary Christopher Marlowe and his play Doctor Faustus, with a tradition holding that the incantations offered upon the stage summoned actual demons. With less notoriety, the tradition of "book curses" was a foolproof way to guard against the theft of the written word—a practice that dates back as far as the Babylonians, but that reached its apogee during the Middle Ages, when scribes would affix to manuscript colophons warnings about what should befall an unscrupulous thief. "Whoever steals this Book of Prayer/May he be ripped apart by swine,/His heart splintered, this I swear,/And his body dragged along the Rhine," writes Simon Vostre in his 1502 Book of Hours. To curse a book is perhaps different from a stereotypical haunting, yet both of these phenomena, not-of-this-world as they are, assume disembodied consciousness as manifest among otherwise inert matter; the curse is a way of projecting yourself and your influence beyond your demise. It's to make yourself a ghost, and it worked in keeping those books intact. "If you ripped out a page, you were going to die in agony. You didn't want to take the chance," Marc Drogin dryly notes in Anathema! Medieval Scribes and the History of Book Curses.

Not all writing is cursed, but surely all of it is haunted. Literature is a catacomb of past readers, past writers, past books. Traces of those who are responsible for creation linger among the words on a page; Shakespeare can't hear us, but we can still hear him (and don't ghosts wander through those estate houses upon the moors unaware that they've died?). Disenchantment has supposedly been our lot since Luther, or Newton, or Darwin chased the ghosts away, leaving behind this perfect mechanistic clockwork universe with no need for superfluous hauntings. Yet like Hamlet's father returned from a purgatory that we're not supposed to believe in, the specters are right in front of us, and we're unwilling to acknowledge them. Of all of the forms of expression that humanity has worked with—painting, music, sculpture—literature is the eeriest. Poetry and fiction are both incantation and conjuration, the spinning of specters and the invoking of ghosts; to read is very literally to listen to somebody who isn't there, and might not have been for a long while. All writing is occult, because it's the creation of something from ether, and magic is simply a way of acknowledging that—a linguistic practice, an attitude, a critical method more than a body of spells. We should be disquieted by literature; we should be unnerved. Most of all, we should be moved by the sheer incandescent amazement that such things as fiction, and poetry, and performance are real. Every single volume upon the shelves of Drown, every book in an office, every thumbed and underlined play sitting on a desk, is more haunted than that building. Reader, if you seek enchantments, turn to any printed page. If you look for a ghost, hear my voice in your own head.

Bonus Link:—Binding the Ghost: On the Physicality of Literature

Image Credit: Flickr/Kevin Dooley

The World Is All That Is the Case

- | 9

“Well, God has arrived. I met him on the 5:15 train. He has a plan to stay in Cambridge permanently.” —John Maynard Keynes in a letter to his wife describing Ludwig Wittgenstein (1929)

Somewhere along the crooked scar of the eastern front, during those acrid summer months of the Brusilov Offensive in 1916, when the Russian Empire pierced the lines of the Central Powers and perhaps more than one million men would be killed from June to September, a howitzer commander stationed with the Austrian 7th Army penned gnomic observations in a notebook, having written a year before that the "facts of the world are not the end of the matter." Among the richest men in Europe, the 27-year-old had the option to defer military service, and yet an ascetic impulse compelled Ludwig Wittgenstein into the army, even though he lacked any patriotism for the Austro-Hungarian cause. Only five years later his trench ruminations would coalesce into 1921's Tractatus Logico-Philosophicus, and the idiosyncratic contours of Wittgenstein's thinking were already obvious as he scribbled away while incendiary explosions echoed across the Polish countryside and mustard gas wafted over fields of corpses. "When my conscience upsets my equilibrium, then I am not in agreement with something. But is this? Is it the world?" he writes. Wittgenstein is celebrated and detested for this aphoristic quality, with pronouncements offered as if directly from the Sibylline grove. "Philosophy," Wittgenstein argued in the posthumously published Culture and Value, "ought really to be written only as poetic composition." In keeping with its author's sentiment, I'd claim that the Tractatus is less the greatest philosophical work of the 20th century than it is one of the most immaculate volumes of modernist poetry written in the past hundred years.

The entire first chapter is only seven sentences, and can be arranged as a stanza, read for its prosody just as easily as a logician can analyze it for rigor:

The world is all that is the case.

The world is the totality of facts, not of things.

The world is determined by the facts, and by their being all the facts.

For the totality of facts determines what is the case, and also whatever is not the case.

The facts in logical space are the world.

The world divides into facts.

Each item can be the case or not the case while everything else remains the same.

Its repetition unmistakably evokes poetry. The use of anaphora with "The world" at the beginning of the first three lines (and then again at the start of the sixth). The way in which each sentence builds to a crescendo of increasing length, starting with a simple independent clause, moving to a trio of lines composed of independent and dependent clauses, hitting a peak in the exact middle of the stanza, and then returning to independent clauses, with the final line being the second longest sentence in the poem. Then there is the diction, the reiteration of certain abstract nouns in place of concrete images—"world," "facts," "things." In Wittgenstein's thought these have definite meanings, but in a general sense they're also words that are pushed to an extreme of conceptual intensity. They are as vague as is possible, while still connoting a definite something. If Wittgenstein mentioned red wheelbarrows and black petals, it might more obviously read as poetry, but what he's doing is unique; he's building verse from the constituent atoms of meaning, using the simplest possible concepts that could be deployed. Finally, the inscrutable nature of Wittgenstein's pronouncements is what gives him such an oracular aura. If the book is confusing, that's partially the point. It's not an argument, it's a meditation, a book of poetry that exists to do away with philosophy.

Published a century ago this spring, the Tractatus is certainly one of the oddest books in the history of logic, structured in an unconventional outline of unspooling pronouncements offered without argument, as well as a demonstration of philosophy's basic emptiness, and thus the unknowability of reality. All great philosophers claim that theirs is the work that demolishes philosophy, and Wittgenstein is only different in that the Tractatus actually achieves that goal. "Most of the propositions and questions to be found in philosophical works are not false but nonsensical," writes Wittgenstein. "Consequently, we cannot give any answer to questions of this kind," where "of this kind" means all of Western philosophy. What results is either poetry transubstantiated into philosophy or philosophy converted into poetry, with the Tractatus itself a paradox, a testament to language that shows the limits of language, where "anyone who understands me eventually recognizes… [my propositions] as nonsensical… He must, so to speak, throw away the ladder after he has climbed up it." The Tractatus is a self-immolating book, a work that exists to demonstrate its own futility in existing. At its core are unanswerable questions of silence, meaninglessness, and unuttered poetry. The closest that Western philosophy has ever come to the Tao.

Of the Viennese Wittgensteins, Ludwig was raised in an atmosphere of unimaginable wealth. As a boy, the salons of the family mansions (there were 13 in the capital alone) were permeated with the music of Gustav Mahler and Johannes Brahms (performed by the composers themselves), the walls were lined with commissioned golden-shimmer paintings by Gustav Klimt, and the rocky bespoke sculptures of Auguste Rodin punctuated their courtyards. "Each of the siblings was made exceedingly rich," writes Alexander Waugh in The House of Wittgenstein (and he knows about difficult families), "but the money, to a family obsessed with social morality, brought with it many problems." Committed to utmost seriousness, dedication, and genius, the Wittgensteins were a cold family, the children forced to live up to the exacting standards of their father Karl Otto Clemens Wittgenstein. Ludwig's father was an iron magnate, the Austrian Carnegie, and the son was indulged with virtually every privilege imaginable in fin de siècle Vienna. His four brothers were to be trained for industry, and to be patrons of art, music, poetry, and philosophy, with absolutely no failure in any regard to be countenanced. Only a few generations from the shtetl, the Wittgensteins had assimilated into gentile society, most of them converting to Catholicism, along with a few odd Protestants; Ludwig's grandfather even had the middle name "Christian," as if to underscore their new position. Wittgenstein had a life-long ambivalence about his own Jewishness—even though three of his four grandparents were raised in the faith—and he had an attraction to a type of post-theological mystical Christianity, while he also claimed that his iconoclastic philosophy was "Hebraic."

Even more ironically, or perhaps uncannily, Wittgenstein was only the second most famous graduate of the Realschule in Linz; the other student was Adolf Hitler. There's a class photograph from 1905 featuring both of them when they were 16. As James Klagge notes in Wittgenstein: Biography and Philosophy, "an encounter with Wittgenstein's mind would have created resentment and confusion in someone like Hitler," while to great controversy (and on thin evidence) Kimberley Cornish in The Jew of Linz claims that the philosopher had a profound influence on the future dictator, inadvertently inspiring the latter's antisemitism. Strangely, as with many assimilated and converted Jewish families within Viennese society, a casual antisemitism prevailed among the Wittgensteins. Ludwig would even be attracted to the writings of the pseudo-philosopher Otto Weininger, who in his book Sex and Character promulgated a notoriously self-hating antisemitic and misogynistic position, deploring modernity as the "most effeminate of all ages" (the author would ultimately, in an act of Völkisch sacrifice, commit suicide in the house where Beethoven had died). When recommending the book, Wittgenstein maintained that he didn't share Weininger's views, but rather found the way the writer was so obviously wrong interesting. Jewishness was certainly not to be discussed in front of the Wittgenstein paterfamilias, nor was anything that reeked to their father of softness, gentleness, or effeminacy, including Ludwig's bisexuality, which he couldn't express until decades later. And so at the risk of indulging an armchair version of that other great Viennese vocation of psychoanalysis, Wittgenstein made the impossibility of being able to say certain things the center of his philosophy. As Brahms remembered, the family coldly acted "towards one another as if they were at court." Of the five Wittgenstein sons—Rudi drank a mixture of cyanide and milk in a Berlin cabaret in 1904, distraught over his homosexuality and his father's rejection; Kurt shot himself in the dwindling days of the Great War after his troops defied him; and Hans, the oldest and a musical prodigy, presumably drowned himself in Chesapeake Bay while on an American sojourn in 1902—only Paul and Ludwig avoided suicide. There were economic benefits to being a Wittgenstein, but little else.

Austere Ludwig—a cinema-handsome man with a personality somehow both dispassionate and intense—tried to methodically shuffle off his wealth, which had hung from his neck along with the anchor of respectability. As it was, the entire fortune would eventually be commandeered by the Nazis, but before that Wittgenstein gave his inheritance away, literally. When his father died in 1913, Wittgenstein began anonymously sending large sums of money to poets like Rainer Maria Rilke, whose observation in a 1909 lyric that "I am so afraid of people's words./They describe so distinctly everything" reads almost as a gloss on the Tractatus. With his new independence, Wittgenstein moved to a simple log cabin on a Norwegian fjord where he hoped to revolutionize logic. Attracted towards the austere, this was the same Wittgenstein who in 1923, after the Tractatus had been published, lodged above a grocer in rural Austria and worked as a schoolteacher, with the visiting philosopher Frank Ramsey describing one of the richest men in Europe as living in "one tiny room, whitewashed, containing a bed, washstand, small table and one hard chair and that is all there is room for. His evening meal which I shared last night is rather unpleasant coarse bread, butter and cocoa." Monasticism served Wittgenstein, because he'd actually accomplish that task of revolutionizing philosophy. From his trench meditations while facing down the Russians, where he carried only two books (Fyodor Dostoevsky's The Brothers Karamazov and Leo Tolstoy's The Gospel in Brief), he birthed the Tractatus, holding to Zossima's commandment that one should "Above all, avoid falsehood, every kind of falsehood." The result would be a book whose conclusions were completely true without being real. Logic pushed to the extremes of prosody.

The Tractatus was the only complete book Wittgenstein published in his lifetime, and the slender volume is composed of a series of propositions nested within one another like the layers of an onion. Its seven main propositions are asserted axiomatically, their self-evidence stated without argument or equivocation—a book not of examples or evidence, but of sentences. A Euclidean project that reads like hermetic poetry, with claims like "A logical picture of facts is a thought," which in its poetic abstraction is evocative of John Keats's "Beauty is truth, truth beauty." Part of its literary quality is the way in which these claims are presented as crystalline abstractions, appearing as timeless refugees from eternity, bolstered by recourse to nothing but themselves (indeed the Tractatus contains no quotes from other philosophers, and virtually no references). His concern was the relationship between language and reality, how logic is able (or not able) to offer a picture of the world, and his conclusions circumscribe philosophy: he solves all metaphysical problems by demonstrating that they are meaningless. "Philosophy aims at the logical clarification of thoughts," writes Wittgenstein. "Philosophy is not a body of doctrine but an activity… essentially of elucidations." All of the problems of philosophy—the paradoxes and metaphysical conundrums, the ethical imperatives and the aesthetic judgments—are pseudo-problems. Philosophy exists not to do what natural science does; it doesn't explain reality, it only clarifies language. When you come to Ludwig Wittgenstein on the road, you must kill him. The knife that you use is entitled the Tractatus, and he'll hand it to you first.

When Wittgenstein arrived at the Cambridge University office of the great British philosopher Bertrand Russell in 1911, he had no formal training. So bereft was his knowledge that Wittgenstein couldn't pronounce some of the philosophers' names correctly. Biographer Ray Monk maintains in Ludwig Wittgenstein: The Duty of Genius that his subject had never even read Aristotle (despite sharing an affinity with him). And yet a few months into their work together, Russell would declare "I love him & feel he will solve the problems that I am too old to solve." Though after the Austrian had argued that metaphysical problems are simply an issue of linguistic confusion, the elder thinker would despair that "I could not hope ever again to do fundamental work in philosophy." At the time that Wittgenstein met with Russell, he was studying aeronautical engineering at the University of Manchester, pushed into a practical field by his father. During his time there he designed several metal airplane propellers exemplary in their ingenuity, but he was unfulfilled and despondent. He had come to believe that only philosophy could cure his spiritual malaise, and so Wittgenstein spent three years at Cambridge, where he travelled in intellectual circles that included the philosopher G.E. Moore and the economist John Maynard Keynes. Finally, he was also able to live openly with a lover, a fellow student named David Pinsent. He would dedicate the Tractatus to Pinsent, his partner killed, with bitter irony, in an airplane crash while training for the war that they fought on opposite sides of. After his return to Austria, Wittgenstein worked a multitude of disparate jobs—he was a schoolteacher in the Alps, unpopular for meting out corporal punishment; he was a gardener in a monastery (and inquired about taking vows); and he was an architect of uncommon brilliance, designing a modernist masterpiece for his sister called the Haus Wittgenstein, which in its alabaster parsimony and cool rectilinear logic recalls the Bauhaus. When that building was completed in 1929, Wittgenstein finally returned to Cambridge, where he would be awarded a PhD even though he took no courses and sat for no exams. Russell had recognized that not all genius need be constrained in the seminar room. And still, after his successful defense, Wittgenstein clapped his examiners on the shoulder and said "Don't worry, I know you'll never understand it." Years later Russell would remark that he had only produced snowballs—Wittgenstein had generated avalanches.

Hubris aside, Wittgenstein was dogged by a not unfounded fear that the Tractatus would be misinterpreted, not least of all by analytical philosophers like Russell who valorized logic as holy writ. Twentieth-century philosophy has long suffered a schism between two broadly irreconcilable perspectives on what the discipline even exists to do. There are the analytical philosophers like Russell, Gottlob Frege, G.E. Moore, and so on (including, technically, Wittgenstein), who see the field as indistinguishable from logic, to the point where its practice shares more in common with mathematics than Socrates. Largely centered in the United Kingdom and the United States, analytical philosophers are, as Russell writes in his A History of Western Philosophy (the work which secured his Nobel Prize in Literature), unified in a sense of "scientific truthfulness, by which I mean the habit of basing our beliefs upon observations and inferences as impersonal…as is possible for human beings." Continental philosophy, however, is much more concerned with the traditional metaphysical and ethical questions which we associate with the discipline, asking what the proper way to live is, or how we find meaning in life. Associated with scholarly work in Europe, largely in France and Germany, a primary question for a continental philosopher like Martin Heidegger could be, as he writes in What Is Metaphysics?, "Why are there beings at all, and why not rather nothing?" To Russell that's not a question at all, it's pure nonsense. Wittgenstein would perhaps also see it as meaningless—though not at all in the same way as his advisor, and that makes all the difference. As with his interpretation of Weininger, it's how things are nonsense that's interesting.

In the city of Wittgenstein's birth, the dominant philosophical movement at the time he published the Tractatus was a gathering of analytical philosophers known as the Vienna Circle. Composed of figures like Moritz Schlick, Rudolf Carnap, and Kurt Gödel, the Vienna Circle argued something not dissimilar to what Wittgenstein had claimed, understanding metaphysical, ethical, and aesthetic conjectures as nonsense. At the core of much of their work was something called the Verification Principle, an argument that the only sensical propositions are those derived from either deduction or induction, either from mathematics or empiricism, and that everything else can be discarded (that the Verification Principle would fail its own test wasn't important). But if the Vienna Circle agreed with Wittgenstein about how much of traditional philosophy was nonsense, the latter invested that nonsense with a sublimity they were blind toward. Perhaps the earliest to misinterpret the Tractatus, of which the Vienna Circle were nonetheless avid readers, Schlick invited Wittgenstein to address them in 1926. When he arrived, rather than giving a lecture, Wittgenstein turned a chair to the wall, began to rock back and forth, and recited the verse of the Bengali poet Rabindranath Tagore. An arresting and moving scene: handsome Wittgenstein in rumpled blue shirt and tweed jacket, a man whom the American philosopher Norman Malcolm would describe as possessing a face that was "lean and brown…aquiline and strikingly beautiful, his head was covered with a curly mass of brown hair," davening as was the ritual of his ancestors, perhaps chanting Tagore's line from The Gardener that "We do not stray out of all words in the ever silent," a disposition more in keeping with the Tractatus than anything written by his audience. Carnap, to his credit, quickly realized their mistake, recalling that Wittgenstein's point of view was "much more similar to those of a creative artist than to those of a scientist; one might almost say, similar to those of a religious prophet or a seer."

Though Wittgenstein is classified as one of the most important logicians of the 20th century, his earlier thought is more poetry than philosophy; the curt, aphoristic pronouncements of the Tractatus deserve to be studied alongside Rilke and Paul Celan. Both in form and content, purpose and expression, the Tractatus is lyric verse—it's poetry that gestures beyond poetry. More than just poetry, it's a purposefully self-defeating logical argument that exonerates the poetic above the philosophical, for if the Vienna Circle thought that all of the most interesting things were nonsense, then Wittgenstein knew that faith was the very essence of being, even if we could say nothing definitive about it. As he writes in the Tractatus, "even when all possible scientific questions have been answered, the problems of life remain completely untouched. Of course, there are then no questions left, and this itself is the answer." There is something Taoist about Wittgenstein, his utterances evocative of Lao Tzu's contention in the Tao Te Ching that the "name that can be named is not the eternal name." With the Tractatus, Wittgenstein mounted the most audacious of ars poeticas, a book of verse written not in rhythm and meter but in logic, where the purpose was to show the futility of logic itself. "Whereof one cannot speak, thereof one must remain silent": its most famous line, and the entirety of the last chapter. Things must be left unsaid because they're unable to be said—but literally everything important is unable to be said. Understood as a genius, interpreted as a rationalist, treated as an enigma, and appearing as a mystic, Wittgenstein was ultimately a poet. "The limits of my language mean the limits of my world," writes Wittgenstein, and this is sometimes misunderstood as consigning everything but logic to oblivion, but the opposite is true. In a world limited by language, it is language itself that constitutes the world. Poetry, rather than logic, is all that is the case.

On Memory and Literature

-

My grandmother’s older sister Pauline Stoops, a one-room school teacher born in 1903, had lived in a homestead filled with poetry, which sat on a bluff overlooking the wide and brown Mississippi as it meandered southward through Hannibal, Missouri. Pauline’s task was to teach children aged six to seventeen in history, math, geography, and science; her students learned about the infernal compromise which admitted Missouri into the union as a slave state and they imagined when the Great Plains were a shallow and warm inland sea millions of years ago; they were taught the strange hieroglyphics of the quadratic equation and the correct whoosh of each cursive letter (and she prepared them oatmeal for lunch every day as well). Most of all, she loved teaching poetry — the gothic morbidities of Edgar Allan Poe and the sober patriotism of John Greenleaf Whittier, the aesthetic purple of Samuel Taylor Coleridge and the mathematical perfection of Shakespeare. A woman who, when I knew her, was given to extemporaneous recitations of memorized Walt Whitman. She lived in the Midwest for decades, until she followed the rest of her mother’s family eastward to Pennsylvania, her siblings having moved to Reading en masse during the Depression, tracing backwards a trail that had begun with distant relations. Still, Hannibal remained one of Pauline’s favorite places, in part because of its mythic role, this town where Mark Twain had imagined Huck Finn and Tom Sawyer playing as pirates along the riverbank.

“I had been to school most all the time and could spell and read and write just a little,” recounts the titular protagonist in The Adventures of Huckleberry Finn, “and could say the multiplication table up to six times seven is thirty-five, and I don’t reckon I could ever get any further than that if I was to live forever. I don’t take no stock in mathematics, anyway.” Huck’s estimation of poetry is slightly higher, even if he doesn’t read much. He recalls coming across some books while visiting the wealthy Grangerfords, including John Bunyan’s Pilgrim’s Progress, which was filled with statements that “was interesting, but tough,” and another entitled Friendship’s Offering that was “full of beautiful stuff and poetry; but I didn’t read the poetry.” Had Huck been enrolled in Mrs. Stoops’ classroom he would have learned verse from a slender book simply entitled One Hundred and One Poems with a Prose Supplement, compiled by anthologizer Roy Cook in 1916. When clearing out Pauline’s possessions with my grandma, we came across a 1920 edition of Cook’s volume, with favorite lines underlined and pages dog-eared, scraps of paper now a century old used as bookmarks. Cook’s anthology was incongruously printed by the Cable Piano Company of Chicago (conveniently located at the corner of Wabash and Jackson), and included advertisements for their Kingsbury and Conover models, promising student progress even for those with only “a feeble trace of musical ability,” proving that in the United States Mammon can take pilgrimage to Parnassus.

The flyleaf announced that it was “no ordinary collection,” being “convenient,” “authoritative,” and most humbly “adequate,” while emphasizing that at fifteen cents its purchase would “Save many a trip to the Public Library, or the purchase of a volume ten to twenty times its cost.” Some of the names are familiar — William Wordsworth, Alfred Lord Tennyson, Oliver Wendell Holmes, Rudyard Kipling (even if many are less than critically fashionable today). Others are decidedly less canonical — Francis William Bourdillon, Alexander Anderson, Edgar A. Guest (the last of whom wrote pablum like “You may fail, but you may conquer – / See it through!”). It goes without saying that One Hundred and One Poems with a Prose Supplement is overwhelmingly male and completely white. Regardless, there’s a charm to the book, from the antiquated Victorian sensibility to the huckster commercialism. Even more strange and moving was my grandmother’s reaction to this book bound with a brown hardcover made crooked by ten decades of heat and moisture, cold and entropy, the pages inside turned the texture of fall sweetgum and ash leaves as they drop into the Mississippi. When I mentioned Henry Wadsworth Longfellow, my grandmother (twenty years Pauline’s junior) began to perfectly recite from memory “By the shores of Gitche Gumee, / By the shining Big-Sea-Water,” going on for several stanzas of Longfellow’s distinctive percussive trochaic tetrameter. No doubt she hadn’t read “The Song of Hiawatha” in decades, perhaps half a century, and yet the rhythm and meter came back to her as if she were the one looking at the book and not me.

My grandmother’s formal education ended at Centerville High School in Mystic, Iowa in 1938; I’ve been fortunate enough to go through graduate school and receive an advanced degree in literature. Of the two of us, only she had large portions of poetry memorized; I, on the other hand, have a head that’s full of references from The Simpsons. I’d be amazed if I could recall more than a quarter of a single Holy Sonnet by John Donne, yet I have the entirety of the Steve Miller Band’s “The Joker” memorized for some reason. Certainly, I have bits and pieces here and there, “Death be not proud” or “Batter my heart three-personed God” and so on, but when it comes to making such verse part of my bones and marrow, I find that I’m rather dehydrated. Memorization was once central to pedagogy, when it was argued that committing verse to instantaneous recall was a way of preserving cultural legacies, that it trained students in rhetoric, and that it was a means of building character. Something can seem pedantic about such poetry recitation; the province of fussy antiquarians, apt to start unspooling long reams of Robert Burns or Edward Lear in an unthinking cadence, readers who properly hit the scansion but whose meaning comes out with the wrong emphasis. Still, such an estimation can’t help but leave the flavor of sour grapes on my tongue where poetry should be, and so the romanticism of the practice must be acknowledged.

Writing exists so that we don’t have to memorize, and yet there is something tremendously moving about recalling words decades after you first encountered them. Memorization’s consequence, writes Catherine Robson in Heart Beats: Everyday Life and the Memorized Poem, was that “these verses carried the potential to touch and alter the worlds of the huge numbers of people who took them to heart.” Books can burn, but as long as a poem endures in the consciousness of a person, they are in possession of a treasure. “When the topic of verse memorization is raised today,” writes Robson, “the invocation is often couched within a lament.” Now we’re all possessors of personal supercomputers that can instantly connect us to whole libraries — there can seem little sense in making iambs and trochees part of one’s soul. Now the soul has been outsourced to our smartphones, and we’ve all become cyborgs, carrying our memories in our pockets rather than our brains. But such melancholy over forgetfulness has an incredibly long history. Socrates formulated the most trenchant of those critiques, with Plato noting in the Phaedrus that his teacher had once warned that people will “cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” It’s important to consider where Socrates places literature: “within” — like heart, brain, or spleen — rather than in some dead thing discarded on inked reeds. According to Socrates, writing is idolatrous; the difference between writing and literature held in memory is the equivalent of that between a painting and reality. Though it must be observed that the only reason we care who Socrates happens to be is because Plato wrote his words down.

Poetry most evokes literature’s first role as a vehicle of memory, for the tricks of prosody – alliteration and assonance; consonance and rhyme – endured because they’re amenable to quick recall. Not only do such attributes make it possible to memorize poetry, they facilitate its composition as well. For literature wasn’t first written on papyrus but rather in the mind, and that was the medium through which it was recorded for most of its immeasurably long history. Since the invention of writing, we’ve tended to think of composition as an issue of a solitary figure committing their ideas to the eternity of paper, but the works of antiquity were a collaborative affair. Albert Lord explains in his 1960 classic The Singer of Tales that “oral epic song is narrative poetry composed in a manner evolved over many generations by singers of tales who did not know how to write; it consists of the building of metrical lines and half lines by means of formulas and formulaic expressions and of the building of songs by the use of themes.” Lord had accompanied his adviser, the folklorist and classicist Milman Parry, to the Balkans in 1933 and then again in 1935, where they recorded the oral poetry of the largely illiterate Serbo-Croatian bards. They discovered that recitations were based in “formulas” that made remembering epics not only easier, but also made their performances largely improvisational, even if the contours of a narrative remained consistent. From their observations, Parry and Lord developed the “Oral-Formulaic Theory of Composition,” arguing that pre-literate epics could be mixed and matched in a live telling, by using only a relatively small number of rhetorical tropes, the atoms of spoken literature.

Some of these formulas — phrases like “wine-dark sea” and “rosy-fingered dawn” for example — are familiar to any reader of The Iliad and The Odyssey, and the two discovered that such utterances had a history in the Balkans and the Peloponnesus going back millennia. There’s an important difference between relatively recent works like Virgil’s The Aeneid (albeit composed two millennia ago) and the epics of Homer that predate the former by at least eight centuries. When Virgil sat down to pen “I sing of arms and man,” he wasn’t actually singing. He was probably writing, while whoever it was — whether woman or man, women or men — that invoked “Sing to me of the man, Muse, the man of twists and turns” most likely did utter those words accompanied by a lyre. The Aeneid is a work of literacy, while those of Homer are of orality, which is to say they were composed through memory. Evocatively, there is some evidence that the name “Homer” isn’t a proper noun. It may be an archaic Greek verb, a rough translation being “to speak,” or better yet “to remember.” There were many homers, each of them remembering their own unique version of such tales, until they were forgotten into the inert volumes of written literature.

Socrates’ fears aren’t without merit. Just as the ability to Google anything at any moment has left contemporary minds atrophied with relaxation, so too does literacy have an effect on recall. With no need to be skilled in remembering massive amounts of information, reading and writing made our minds surprisingly porous. From the Celtic fringe of Britain to the Indus Valley, from the Australian Outback to the Great Plains of North America, ethnographers recount the massive amounts of information which pre-literate peoples were capable of retaining. Poets, priests, and shamans were able to memorize (and adapt when needed) long passages by deft manipulation of rhetorical trope and mnemonic device. Wherever literacy was introduced, there was a marked decline in people’s ability to memorize. For example, writing in the journal Australian Geographer, linguist Nick Reid explains that the oral literature of the aboriginal Narrangga people contains narrative details which demonstrate an accurate understanding of the geography of the Yorke Peninsula some 12,500 years ago, before melting glaciers irrevocably altered the coastline. Three hundred generations of Narrangga have memorized and told tales of marshy lagoons which no longer exist, an uninterrupted chain of recitation going back an astounding thirteen millennia. Today, if every single copy of Jonathan Franzen’s The Corrections and David Foster Wallace’s Infinite Jest were to simultaneously vanish, who among us would be able to recreate those books?

It turns out that some folks have been able to train their minds to store massive amounts of language. Among pious Muslims, people designated as Hafiz have long been revered for their ability to memorize the 114 surahs of the holy Quran. Allah’s words are thus written into the heart of the reverential soul, so that language becomes as much a part of a person as the air which fills lungs or the blood which flows in veins. “One does not have to read long in Muslim texts,” writes William Graham in Beyond the Written Word: Oral Aspects of Scripture in the History of Religion, “to discover how the ring of the qur’anic text cadences the thinking, writing, and speaking of those who live with and by the Qur’an.” In the Islamic world, the memorization not just of some Longfellow or a few fragments of T.S. Eliot but of an entire book is still accomplished to a surprising degree. Then there are those who through some mysterious cognitive gift (or curse depending on perspective) possess eidetic memory, and have the ability to commit entire swaths of text to retrieval without the need for mnemonic devices or formulas. C.S. Lewis could supposedly quote from memory any particular line of John Milton’s Paradise Lost that he was asked about; similar claims have been made about critic Harold Bloom.

Prodigious recall need not only be the purview of otherworldly savants, as people have used methods similar to those of a Hafiz or a Narrangga bard to consume a book. Evangelical minister Tom Meyer, also known as the “Bible Memory Man,” has memorized twenty books of scripture, while actor John Basinger used his stage skills to memorize all twelve books of Paradise Lost, more than ten thousand lines, with an analysis some two decades later demonstrating that he was still able to recite the epic with some 88% accuracy. As elaborated on by Lois Parshley in Nautilus, Basinger used personal associations of physical movement and spatial location to “deep encode” the poem; she quotes him as saying that Milton is a “cathedral I carry around in my mind… a place that I can enter and walk around at will.” No other type of art is like this — you can remember what a painting looks like, you can envision a sculpture, but only music and literature can be preserved and carried with you, and the former requires skills beyond memorization. As a scholar I’ve been published in Milton Studies, but if Basinger and I were marooned on an island somewhere, or trapped in the unforgiving desert, only he would be in actual possession of Paradise Lost, while I sadly sputtered half-remembered epigrams about justifying the ways of God to man.

Basinger, who claims that he still loses his car keys all the time, was able to memorize twelve books of Milton by associating certain lines with particular movements, so that the thrust of an elbow may be man’s first disobedience, the kick of a leg being better to rule in hell than to serve in heaven. There is a commonsensical wisdom in understanding that memory has always been encoded in the body, so that our legs and arms think as surely as our brains do. Walter Ong explains in Orality and Literacy that “Bodily activity beyond mere vocalization is not… contrived in oral communication, but is natural and even inevitable.” Right now, I’m composing while stooped over, in the servile position of desk sitting, with pain in my back and a crick in my neck, but for the ancient bards of oral cultures the unspooling of literature would have been done through a sweep of the arms or the trot of a leg, motion and memory being connected in a walk. Paradise Lost as committed to memory by Basinger was also a “cathedral,” a place that he could go to, and this is one of the most venerable means of being able to memorize massive amounts of writing. During the Renaissance, itinerant humanists used to teach the ars memoriae, a set of practical skills designed to hone memory. Chief among these tutors was the sixteenth-century Italian occultist, defrocked Dominican, and heretic Giordano Bruno, who took as students King Henry III of France and the Holy Roman Emperor Rudolf II (later he’d be burnt at the stake in Rome’s Campo de’ Fiori, though for unrelated reasons).

Bruno used many different methodologies, including mnemonics, associations, and repetitions, but his preferred approach was something called the method of loci. “The first step was to imprint on the memory a series of loci or places,” writes Dame Frances Yates in The Art of Memory. “In order to form a series of places in memory… a building is to be remembered, as spacious and varied a one as possible, the forecourt, the living room, bedrooms, and parlors, not omitting statues and other ornaments with which the rooms are decorated.” In a strategy dating back to Cicero and Quintilian, Bruno taught that the “images by which the speech is to be remembered… are then placed in imagination on the memorial places which have been memorized in the building,” so that “as soon as the memory… requires to be revived, all these places are visited in turn and the various deposits demanded of their custodians.” Bruno had his students build in their minds what are called “memory palaces,” architectural imaginings whereby a line of prose may be associated with an opulent oriental rug, a stanza of poetry with a blue Venetian vase upon a mantle, an entire chapter with a stone finishing room in some chateau; the candlesticks, fireplace kindling, cutlery, and tapestries each hinged to their own fragment of language, so that recall can be accessed through a simple stroll in the castle of your mind.

It all sounds very esoteric, but it actually works. Even today, competitive memorization enthusiasts (this is a real thing) use the same tricks that Bruno taught. Science journalist Joshua Foer recounts how these very same methods were instrumental in his winning the 2006 USA Memory Championship, storing poems in his mind by associating them with places as varied as Camden Yards and the East Wing of the National Gallery of Art, so that he “carved each building up into loci that would serve as cubbyholes for my memories.” The method of loci is older than Bruno, than even Cicero and Quintilian, and from Camden Yards and the National Gallery to Stonehenge and the Nazca Lines, spatial organization has been a powerful tool. Archeologist Lynne Kelly claims that many Neolithic structures actually functioned as means for oral cultures to remember text, arguing in Knowledge and Power in Prehistoric Societies: Orality, Memory, and the Transmission of Culture that “Circles or lines of stones or posts, ditches or mounds enclosing open space… serve as memory theaters beautifully.”

Literature is simultaneously vehicle, medium, preserver, and occasionally betrayer of memory. Just as our own recollections are more mosaic than mirror (gathered from small pieces that we’ve assembled as a narrative with varying degrees of success), so too does writing impose order on one thing after another. Far more than memorized lines, or associating stanzas with rooms, or any mnemonic trick, memory is the ether of identity, but it is fickle, changing, indeterminate, and unreliable. Fiction and non-fiction, poetry and prose, drama and essay — all are built with bricks of memory, but with a foundation set on wet sand. Memory is the half-recalled melody of a Mister Rogers’ Neighborhood song played for your son though you last heard it decades ago; it’s the way that a certain laundry detergent smells like Glasgow in the fall and a particular deodorant like Boston in the cool summer; how the crack of the bat at PNC Park brings you back to Three Rivers Stadium, and Jagged Little Pill always exists in 1995. And memory is also what we forget. Our identities are simply an accumulation of memories — some the defining moments of our lives, some of them half-present and only to be retrieved later, and some constructed after the fact.

“And once again I had recognized the taste of the crumb of madeleine soaked in her decoction of lime flowers which my aunt used to give me,” Marcel Proust writes in the most iconic scene of Remembrance of Things Past, and “immediately the old gray house upon the street, where her room was, rose up like the scenery of a theater.” If novels are a means of excavating the foundations of memory, then Proust’s magnum opus is possibly more associated with how the deep recesses of the mind operate than any other fiction. A Proustian madeleine, signifying all the ways in which sensory experiences trigger visceral, almost hallucinatory memories, has become a cultural mainstay, even while most of us have never read Remembrance of Things Past. So universal is the phenomenon, the way in which the taste of Dr. Pepper can propel you back to your grandmother’s house, or Paul Simon’s Graceland can place you on the Pennsylvania Turnpike, that Proust’s madeleine has become the totem of how memories remain preserved in tastes, sounds, smells. Incidentally, the olfactory bulb of the brain, which processes odors, sits close to the hippocampus, where memories are consolidated, so that Proust’s madeleine is as much a matter of neuroanatomy as of art. Your madeleine need not be a delicately crumbed French cookie dissolving in tea; it could just as easily be a Gray’s Papaya waterdog, a Pat’s cheesesteak, or a Primanti Brothers’ sandwich (all of those work for me). Proust’s understanding of memory is sophisticated, for while we may humor ourselves into thinking that our experiences are recalled with verisimilitude, the reality is that we shuffle and reshuffle the past, we embellish and delete, and what’s happened to us can return as easily as it’s disappeared. “The uncomfortable reality is that we remember in the same way that Proust wrote,” argues Jonah Lehrer in Proust Was a Neuroscientist. “As long as we have memories to recall, the margins of those memories are being modified to fit what we know now.”

Memory is the natural subject of all novels, not only because the author composes from the detritus of her own experience, but also because the form is (primarily) a genre of nostalgia, of ruminating in the past (even an ostensibly invented one). Some works are more explicitly concerned with memory than others, their authors reflecting on its malleability, plasticity, and endurance. Consider Tony Webster in Julian Barnes’ The Sense of an Ending, ruminating on the traumas of his school years, noting that we all live with the assumption that “memory equals events plus time. But it’s all much odder than this. Who was it said that memory is what we thought we’d forgotten? And it ought to be obvious to us that time doesn’t act as a fixative, rather as a solvent.” Amnesia is the shadow version of memory, all remembrance haunted by that which we’ve forgotten. Kazuo Ishiguro’s parable of collective amnesia The Buried Giant imagines a post-Arthurian Britannia wherein “this land had become cursed with a mist of forgetfulness,” so that it’s “queer the way the world’s forgetting people and things from only yesterday and the day before that. Like a sickness come over us all.” Jorge Luis Borges imagines the opposite scenario in his short story “Funes the Memorious,” detailing his friendship with a fictional Uruguayan boy who after a horse-riding accident is incapable of forgetting a single detail of his life. He can remember “every crevice and every molding of the various houses.” What’s clear, despite Funes being “as monumental as bronze,” is that if remembering is the process of building a narrative for ourselves, then ironically it requires forgetting. Funes’ consciousness is nothing but hyper-detail, and with no means to cull based on significance or meaning, it all comes to him as an inchoate mass, so that he was “almost incapable of ideas of a general, Platonic sort.”

Between the cursed amnesiacs of Ishiguro and the damned hyperthymesiac of Borges are Barnes’ aging characters, who like most of us remember some things, while finally forgetting most of what’s happened. Tellingly, a character like Tony Webster does something which comes the closest to writing — he preserves the notable stuff and deletes the rest. Funes is like an author who can’t bring himself to edit, and the Arthurian couple of Ishiguro’s tale are those who never put pen to paper in the first place. Lehrer argues that Proust “believed that our recollections were phony. Although they felt real, they were actually elaborate fabrications,” for we are always in the process of editing and reediting our pasts, making up new narratives in a process of revision that only ends with death. This is to say that memory is basically a type of composition — it’s writing. From an assemblage of things which happen to us — anecdotes, occurrences, traumas, intimacies, dejections, ecstasies, and all the rest — we impose a certain order on the past; not that we necessarily invent memories (though that happens), but rather that we decide which memories are meaningful, we imbue them with significance, and then we structure them so that our lives take on the texture of a narrative. We’re able to say that had we not been in the Starbucks near Union Square that March day, we might never have met our partner, or that if we hadn’t slept in and missed that job interview, we’d never have stayed in Chicago. “Nothing was more central to the formation of identity than the power of memory,” writes Oliver Sacks in The River of Consciousness, “nothing more guaranteed one’s continuity as an individual,” even as “memories are continually worked over and revised and that their essence, indeed, is recategorization.” We’re all romans à clef in the picaresques of our own minds, but bit characters in the novels written by others.

A century ago, the analytical philosopher Bertrand Russell wrote in The Analysis of Mind that there is no “logical impossibility in the hypothesis that the world sprang into existence five minutes ago, exactly as it then was, with a population that ‘remembered’ a wholly unreal past.” Like most metaphysical speculation there’s something a bit sophomoric about this, though Russell admits as much when he writes that “I am not here suggesting that the non-existence of the past should be entertained as hypothesis,” only that speaking logically nobody can fully “disprove the hypothesis.” This is a more sophisticated version of something known as the “Omphalos Argument” — a cagey bit of philosophical book-keeping entertained since the eighteenth century — whereby evidence of the world’s “supposed” antiquity (fossils, geological strata, etc.) was seen as a devilish hoax, and thus the relative youthfulness of the world’s age could be preserved alongside biblical inerrancy (the multisyllabic Greek word means “navel,” as in Eve and Adam’s bellybutton). The five-minute hypothesis was entertained as a means of thinking about radical skepticism, where not only all that we see, hear, smell, taste, and touch are fictions, but our collective memories are a fantasy as well. Indeed, Russell is correct in a strictly logical sense; I’m writing this at 4:52 P.M. on April 20th, 2021, and there is no way that I can rely on any outside evidence, or my own memories, or your memories, to deductively and conclusively prove with complete certainty that the universe wasn’t created at 4:47 P.M. on April 20th, 2021 (or by whatever calendar our manipulative robot-alien overlords count the hours, I suppose).

Where such a grotesque possibility errs is that it doesn’t matter in the slightest. In some ways, it’s already true; the past is no longer here and the future has yet to occur, and we’ve always been just created in this eternal present (whatever time we might ascribe to it). To remember is to narrate, to re-remember is still to narrate, and to narrate is to create meaning. Memories are who we are — the fundamental particles of individuality. Literature, then, is a type of cultural memory: a conscious thing whose neurons are words, and whose synapses are what authors do with those words. Writing is memory made manifest, a conduit for preserving our identity outside of the prison of our own skulls. A risk lurks here, though. Memory fails all of us to varying degrees — some in a catastrophic way — and everyone is apt to forget most of what’s happened to them. “Memory allows you to have a sense of who you are and who you’ve been,” argues Lisa Genova in Remember: The Science of Memory and the Art of Forgetting. Those neurological conditions which “ravage the hippocampus” are particularly psychically painful, with Genova writing that “If you’ve witnessed someone stripped bare of his or her personal history by Alzheimer’s disease, you know firsthand how essential memory is to the experience of being human.” To argue that our memories are ourselves is dangerous, for what happens when our past slips away from view? Pauline didn’t suffer from Alzheimer’s, though in her last years she was afflicted by dementia. I no longer remember what she looked like, exactly, this woman alive for both Kitty Hawk and the Apollo missions. I can no longer recall what her voice sounded like. What exists once our memories are deleted from us, when our narratives have unraveled? What remains after that deletion is something called the soul. When I dream about Pauline, I see and hear her perfectly.


Elegy of the Walker

-

By the conclusion of Mildred Lissette Norman’s 2,200-mile hike of the Appalachian Trail in 1952—the steep snow-covered peaks of New Hampshire’s White Mountains; the autumnal cacophony of Massachusetts’ brown, orange, and red Berkshires; the verdant greens of New York’s Hudson Highlands and Pennsylvania’s Kittatinny Ridge; the misty roll of Virginia’s Blue Ridge; the lushness of North Carolina and Georgia’s Great Smoky Mountains—she had worn down the soles of her blue Sperry Topsiders into a hatchwork of rubber threads, the rough canvas of the shoes ripping apart at the seams. “There were hills and valleys, lots of hills and valleys, in that growing up period,” Norman would recall of the journey that made her the first woman to hike the trail in its entirety in a single season. The Topsiders were lost to friction, as would be 28 additional pairs of shoes over the next three decades, but she would also gain a new name—Peace Pilgrim. The former secretary would (legally) rechristen herself after a mystical experience somewhere in New England, convinced that she would “remain a wanderer until mankind has learned the way of peace.”

Peace Pilgrim’s mission began at the Rose Bowl Parade in 1953, gathering signatures on a petition to end the Korean War. From Pasadena she trekked over the Sierra Nevada, the hardscrabble southwest, the expansive Midwestern prairies, the roll of the Appalachians, and into the concrete forest of New York City. She gained spectators, acolytes, and detractors; there was fascination with this 46-year-old woman, wearing a simple blue tunic emblazoned in white capital letters with “Walking Coast to Coast for Peace,” her greying hair kept up in a bun and her pockets containing only a collapsible toothbrush, a comb, and a ballpoint pen. By the time she died in 1981, she had traversed the United States seven times. “After I had walked almost all night,” she recalled in one of the interviews posthumously collected into Peace Pilgrim: Her Life and Work in Her Own Words, “I came out into a clearing where the moonlight was shining down… That night I experienced the complete willingness, without any reservations, to give my life to something beyond myself.” It was the same inclination that compelled Abraham to walk into Canaan, penitents to trace Spain’s Camino de Santiago, or the whirling Mevlevi dervishes to traipse through the Afghan bush. It was an inclination toward God.

Something about the plodding of one foot after another, the syncopation mimicking the regularity of our heartbeat, the single-minded determination to get from point A to point B (wherever those mythic locations are going to be), gives walking the particular enchantments that only the most universal of human activities can have. Whether a stroll, jog, hike, run, saunter, plod, trek, march, parade, patrol, ramble, constitutional, wander, perambulation, or just plain walk, the universal action of moving left foot-right foot-left foot-right foot marks humanity indelibly, so common that it seemingly warrants little comment if you’re not a podiatrist. But when it comes to the subject, there are as many narratives as there are individual routes, for as Robert Macfarlane writes in The Old Ways: A Journey on Foot, “a walk is only a step away from a story, and every path tells.” Loath we should be to let such an ostensibly basic act pass without some consideration.

Rebecca Solnit writes in Wanderlust: A History of Walking that “Like eating or breathing, [walking] can be invested with wildly different cultural meanings, from the erotic to the spiritual, from the revolutionary to the artistic.” Walking is leisure and punishment, introspection and exploration, supplication and meditation, even composition. As a tool for getting lost, both literally and figuratively, and for fully inhabiting our being, walking can empty out our selfhood. A mechanism for transmuting a noun into a verb, or transforming the walker into the walking. When a person has pushed themselves so that their heart pumps like a piston, so that they feel the sour burn of blisters and the chafing of denim, so that breathing’s rapidity is the only focus, then there is something akin to pure consciousness (or possibly I’m just fat). And of course, there is all that you can simply observe with that consciousness, unhampered by screen, so that walking is “powerful and fundamental,” as Cheryl Strayed writes in her bestseller Wild: From Lost to Found on the Pacific Crest Trail, an account of how Strayed hiked thousands of miles following the death of her mother, learning “what it was like to walk for miles with no reason other than to witness the accumulation of trees and meadows, mountains and deserts, streams and rocks, rivers and grasses, sunrises and sunsets… It seemed to me it had always felt like this to be a human in the wild.”

Maybe that sense of being has always attended locomotion, ever since a family of Australopithecus pressed their calloused heels into the cooling volcanic ash of Laetoli in Tanzania some 3.7 million years ago. Discovered in 1976 by Mary Leakey’s team, the preserved footprints were the earliest example of hominin bipedalism. Two adults and a child, the traces suggest parents strolling with a toddler, as if they were Adam and Eve with either Cain or Abel. The Laetoli footprints are smaller than those of a modern human, but they lack the divergent toe of other primates, and they indicate that whoever left them moved from the heel of their feet to the ball, as most of us do. Crucially, there were no knuckle impressions left, so they didn’t move in the manner that chimpanzees and gorillas do. “Mary’s footprint trail was graphically clear,” explains Virginia Morell in Ancestral Passions: The Leakey Family and the Quest for Humankind’s Beginnings, “the early hominoids stood tall and walked as easily on two legs as any Homo sapiens today… it was apparently this upright stance, rather than enlarged crania, that first separated these creatures from other primates.”

The adults were a little under five feet tall and under 100 pounds, covered in downy brown fur, with sloping brow and an overbite, with the face of an ape but the uncanny eyes of a human, the upright walking itself transforming them into the latter. Walking preceded words, the ambulation perhaps necessitating the speaking. Australopithecus remained among the pleasant green plains of east Africa, but with the evolution of anatomically modern humans in the area that is now Kenya, Tanzania, and Ethiopia, walking became the engine by which people disseminated through the world. Meandering was humanity’s insurance; as Nicholas Wade writes in Before the Dawn: Recovering the Lost History of Our Ancestors, as little as 50,000 years ago the “ancestral human population, the first to possess the power of fully articulate modern speech, may have numbered only 5,000 people, confined to a homeland in northeast Africa.” In such small numbers, and in such a circumscribed area, humanity was prisoner to circumstance, where an errant volcano, drought, or epidemic could have easily consigned us to oblivion. Walking as far as we could was our salvation.

Humans would walk out of Africa into Asia, and perhaps by combination of simple boat and swimming down the Indonesian coast into Australia, across the Bering Strait and into North America, and over the Panamanian isthmus and into South America, with distant islands like Madagascar, New Zealand, and Iceland waiting for sailing technology to ferry people to their shores millennia after we left the shade of the Serengeti’s bulbous baobab trees. We think of our ancestors as living in a small world, but theirs was an expansive realm, all the more so since it wasn’t espied through a screen. They were partner to the burrowing meerkats peeking over the dry scrub of the Kalahari, to the nesting barn owls overlooking the crusting, moss-covered bark of the ancient Ciminian Forest, to the curious giant softshell tortoises of the Yangtze River. To walk is to be partner to the natural world; it is to fully inhabit being an embodied self. Choosing to be a pedestrian today is to reject the diabolic speed of both automobile and computer. Macfarlane writes in The Wild Places that nature is “invaluable to us precisely because… [it is] uncompromisingly different… you are made briefly aware of a world at work around and beside our own, a world operating in patterns and purposes that you do not share.” Sojourning into a world so foreign was the birthright of the first humans, and it still is today, if you choose it.

All continents, albeit mostly separated by unsettlingly vast oceans, are in some form or another connected by thin strips of land here or there, something to the advantage of the Scottish Victorian explorer Sir George Thompson, who walked from Canada to Western Europe via Siberia. More recently there was the English explorer George Meegan, who from 1977 to 1983 endeavored to walk from Patagonia to the northern tip of North America, which involved inching up South America’s Pacific coast, crossing the Darien Gap into Central America, circling the Gulf Coast and walking up the Atlantic shore, following the Canadian border, and then walking from the Yukon into Alaska. Meegan’s expedition covered 19,019 miles, the longest recorded uninterrupted walk. Impelled by the nervous propulsion that possibly compelled that first generation to leave home, Meegan explains in The Longest Walk: The Record of our World’s First Crossing of the Entire Americas that “once the idea seemed to be a real possibility, once I thought I could do it, I had to do it.” Along the way Meegan wore out 12 pairs of hiking boots, got stabbed once, and met peanut-farmin’ Jimmy Carter at his Georgia homestead.

Meegan’s route was that of the first Americans, albeit accomplished in reverse. The most recent large landmass to be settled, the Americas offered those earliest walkers a verdant expanse, for as Craig Childs describes the Paleolithic continent in Atlas of a Lost World: Travels in Ice Age America, the land east of the Bering Strait was a “mosaic of rivers and grasslands… horizons continuing on as if constantly giving birth to themselves—mammoths, Pleistocene horses, and giant bears strung out as far as the eye could see. It must have seemed as if there was no end, the generosity of this planet unimaginable.” It’s a venerable (and dangerous) tradition to see America as an unspoiled Paradise, but it’s not without its justifications, and it’s one that I’ve been tempted to embrace during my own errands into the wilderness. Raised not far from Pittsburgh’s Frick Park, a 644-acre preserve set within the eastern edge of the city, a bramble of moss-covered rocky creeks and surprisingly steep ravines, a constructed forest primeval meant to look as it did when the Iroquois lived there, I too could convince myself that I was a settler upon the frontier. Like all sojourns into the woods, my strolls in Frick Park couldn’t help but have a bit of the mythic about them, especially at dusk.

One day when I was a freshman, a friend and I took a stack of cheap, pocket-sized, rough orange-covered New Testaments which the Gideons, who were standing the requisite constitutionally mandated distance from our high school, had been assiduously handing out as part of an ill-considered attempt to convert our fellow students. In a fit of adolescent blasphemy, we went to a Frick Park path and walked through the cooling October forest as twilight fell, ripping the cheap pages from the bibles and letting them fall like crisp leaves to the woods’ floor, or maybe inadvertently as a trail of Eden’s apple seeds. There’s no such thing as blasphemy unless you already assent to the numinous, and even as our heretical stroll turned God on his head, a different pilgrim had once roamed woods like these. During the Second Great Awakening, when revivals of strange fervency and singular belief burned along the Appalachian edge, a pious adherent of the mystical Swedenborgian faith named John Chapman was celebrated for traipsing through Pennsylvania, Ohio, Indiana, and Illinois. Briefly a resident of the settlement of Grant’s Hill in what is today downtown Pittsburgh, several miles west of Frick Park, Chapman made it his mission to spread the gospel of the New Church, along with the planting of orchards. Posterity remembers him as Johnny Appleseed.

Folk memory has Johnny Appleseed fixed in a particular (and peculiar) way: the bearded frontiersman, wearing a rough brown burlap coffee sack, a copper pot on his head, and a trundled bag over his shoulder as the barefoot yeoman plants apples across the wide expanse of the West. I’d wager there is a strong possibility you thought he was as apocryphal as John Henry or Paul Bunyan, but Chapman was definitely real; a Protestant St. Francis of whom it was said that he walked with a tamed wolf and that due to his creaturely benevolence even the mosquitoes would spare him their sting. Extreme walkers become aspects of nature, their souls as if the migratory birds that trace lines over the earth’s curvature. Johnny Appleseed’s walking was a declaration of common ownership over the enormity of this land. Sometime in the 1840s, Chapman found himself listening to the outdoor sermonizing of a fire-and-brimstone Methodist preacher in Mansfield, Ohio. “Where is the primitive Christian, clad in coarse raiment, walking barefoot to Jerusalem?” the minister implored the crowd, judging them for their materialism, frivolity, and immorality. Finally, a heretofore silent Johnny Appleseed, grown tired of the uncharitable harangue, ascended the speaker’s platform and hiked one grime-covered, bunion-encrusted, and blistered black foot underneath the preacher’s nose. “Here’s your primitive Christian,” he supposedly declared. Even Johnny Appleseed’s gospel was of walking.

“John Chapman’s appearance at the minister’s stump,” writes William Kerrigan in Johnny Appleseed and the American Orchard: A Cultural History, made the horticulturalist a “walking manifestation of a rejection of materialism.” Not just figuratively a walking embodiment of such spirituality, but literally a walking incarnation of it. The regularity of putting one foot after the other has the rhythm of the fingering of rosary beads or the turning of prayer wheels; both intimately physical and yet paradoxically a means of transcending our bodies. “Walking, ideally, is a state in which the mind, the body, and the world are aligned,” writes Solnit, “as though they were three characters finally in conversation together, three notes suddenly making a chord.” Hence walking as religious devotion, from the Australian aborigine on a walkabout amidst the burnt ochre Outback, to the murmuring pilgrim tracing the labyrinth underneath the stone flying buttresses of Chartres Cathedral, to the hajji walking over hot sands towards Mecca, to the Orthodox Jew graced with the gift of deliberateness as she walks to shul on Shabbat. Contemplative, meditative, and restorative, walking can, religiously speaking, also be penitential. In 2009 the Irish Augustinian Fr. Michael Mernagh walked from Cork to Dublin on a pilgrimage of atonement, undertaken single-handedly in penitence for the Church’s shameful silence regarding child sexual abuse. Not just a pilgrimage, but a protest, with Fr. Mernagh saying of the rot infecting the Church that the “more I have walked the more I feel it is widespread beyond our comprehension.” Atonement is uncomfortable, painful even. As pleasant as a leisurely stroll can be, a penitential hike should strain the lungs, burn the muscles. If penitence isn’t freely taken, however, then it’s no longer penitence. And if there’s no reason for contrition, then it’s something else—punishment. Or torture.

“The drop outs began,” recalled Lt. Col. William E. Dyess. “It seemed that a great many of the prisoners reached the end of their endurance at about the same time. They went down by twos and threes. Usually, they made an effort to rise. I never can forget their groans and strangled breathing as they tried to get up. Some succeeded.” Many didn’t. A Texan Army Air Forces officer with movie-star good looks, Dyess had been fighting with the 21st Pursuit Squadron in the Philippines when the Japanese Imperial Army invaded in 1942. Along with some 80,000 fellow American and Filipino troops, he would be marched the 70 miles from Mariveles to Camp O’Donnell. Denied food and water, and forced to walk in the heat of the jungle sun, with temperatures that went well over 100 degrees, an estimated 26,000 prisoners—a third of those captured—perished in that scorching April of 1942.

Bayonet and heat, bullet and sun, all goaded the men to put one foot in front of the other until many of them couldn’t any more. Transferred to the maggot- and filth-infested Camp O’Donnell, where malaria and dengue fever took even more Filipinos and Americans, Dyess was eventually able to escape and make it back to American lines. While convalescing in White Sulphur Springs, W.Va., Dyess narrated his account—the first eyewitness American testimony to the Bataan Death March—to Chicago Tribune writer Charles Leavelle. The military initially prohibited its release, but Leavelle would finally see it published as Bataan Death March: A Survivor’s Account in 1944. Dyess never read it; he had died a year before. His P-38G-10-LO Lightning lost an engine during takeoff at a Glendale, Calif., airport, and rather than risk civilian casualties by abandoning the plane, Dyess crashed it into a vacant lot, so that his life would be taken by flying rather than by walking.

Walking can remind us that we’re alive, so that it’s all the more obscene when such a human act is turned against us, when the pleasure of exertion turns into the horror of exhaustion, the gentle burn in muscles transformed into spasms, breathing mutated into sputtering. Bataan’s nightmare, one among several, was that it was walking that couldn’t stop, wasn’t allowed to stop. “It is the intense pain that destroys a person’s self and world,” writes philosopher Elaine Scarry in The Body in Pain: The Making and Unmaking of the World, “a destruction experienced spatially as either the contraction of the universe down to the immediate vicinity of the body or as the body swelling to fill the entire universe.” Torture reminds us that we’re reducible to bodies; it particularizes and universalizes our pain. With some irony, walking does something similar, the exertion of moving legs and swinging arms, our wide-ranging mobility, announcing us as citizens of the universe. Hence the hellish irony of Bataan, or the 2,200 miles from Georgia to Oklahoma that more than 100,000 Cherokee, Muscogee, Seminole, Chickasaw, and Choctaw were forced to walk by the U.S. federal government between 1830 and 1850, the January 1945 40-mile march of 56,000 Auschwitz prisoners to the Loslau train station in sub-zero temperatures, or the 2.5 million residents of Phnom Penh, Cambodia, forced to evacuate into the surrounding countryside by the Khmer Rouge in 1975. These walks were hell, prisoners followed by guards with guns and German shepherds, over the hard, dark ground.

Harriet Tubman’s walks were also in the winter, feet trying to gain uncertain purchase upon frozen bramble, stumbling over cold ground and slick, snow-covered brown leaves, and she too was pursued by men with rifles and dogs. She covered distances similar to those who were forced to march, but Tubman was headed to another destination, and that has made all the difference. Crossing the Mason-Dixon line in 1849, Tubman recalled that “I looked at my hands to see if I was the same person. There was such glory over everything; the sun came like gold through the trees, and over the fields, and I felt like I was in Heaven.” But she wasn’t in Heaven, she was in Pennsylvania. For Tubman, and for the seventy enslaved people whom she liberated on thirteen daring missions back into Maryland, walking was arduous, walking was frightening, walking was dangerous, but more than anything walking was the price of freedom.

Tubman would rightly come to be known as the Moses of her people (another prodigious walker), descending into the antebellum South like Christ harrowing Hell. The network of safe-houses and sympathetic abolitionists who shepherded the enslaved out of Maryland, and Virginia, and North Carolina into Pennsylvania, and New England and Canada, who quartered the enslaved in cold, dusty, cracked root cellars and hidden passageways, used multiple means of transportation. People hid in the backs of wagons underneath moldering produce, they availed themselves of steamships, and sometimes the Underground Railroad was a literal railroad. One enterprising man named Henry Box Brown even mailed himself from Richmond to Philadelphia, the same year Tubman arrived in the Quaker City. But if the Underground Railroad was anything, it was mostly a process of putting one foot before the other on the long walk to the north.

Familiar with the ebbs and flows of the brackish Chesapeake as it lapped upon the western shores of Dorchester County, Tubman was able to interpret mossy rocks and trees to orient herself, navigating by the Big Dipper and Polaris. Her preferred time of travel was naturally at night, and winter was the best season in which to abscond, deploying the silence and cold of the season as camouflage. Dorchester is a scant 150 miles from Philadelphia, but those even further south – South Carolina, Georgia, even Mississippi – would also walk to freedom. Eric Foner explains in Gateway to Freedom: The Hidden History of the Underground Railroad that “Even those who initially escaped by other means ended up having to walk significant distances.” Those nights on the Underground Railroad must have been terrifying. Hearing the resounding barks of Cuban hounds straining at slave catchers’ leashes, the metallic taste of fear sitting in mouths, bile rising up in throats. Yet what portals of momentary grace and beauty were there, those intimations of the sought-after freedom? To see the graceful free passage of a red-tailed hawk over the Green Ridge, the bramble thickets along the cacophonous Great Falls of the cloudy Potomac, the luminescence of a blue moon reflected on a patch of thick ice in the Ohio River?

During that same decade, the French dictator Napoleon III was, through ambitious city planning, inventing an entirely new category of walker – the peripatetic urban wanderer. Only a few months after Tubman arrived in Philadelphia, and four thousand miles across the ocean, something new in the annals of human experience would open at Paris’ Au Coin de la Rue – a department store. For centuries, walking was simply a means of getting from one place to another; from home to the market, from market to the church. With the department store, or the glass-covered merchant streets known as arcades, people were enticed not just to walk somewhere, but rather to walk everywhere. Such were the beginnings of the category of perambulator known as the flâneur, a word that is untranslatable, but which carries connotations of wandering, idling, loafing, sauntering.

Being a flâneur means simply walking without purpose other than to observe; strolling down Le Havre Boulevard and eyeing the window displays of fashions woven in cashmere and mohair at Printemps, espying the bakeries of Montmartre laying out macarons and pain au chocolat, passing the diners in the outdoor brasseries of the Left Bank eating coq au vin. Before the public planner Georges-Eugène Haussmann’s radical Second Empire reforms, Paris was a crowded, fetid, confusing, and disorganized assemblage of crooked and narrow cobblestoned streets and dilapidated half-timbered houses. Afterwards it became a metropolis of wide, magnificent boulevards, parks, squares, and museums. Most distinctively, there were the astounding 56,573 gas lamps that had been assembled by 1870, lit by a legion of allumeurs at dusk, so that people could walk at night. If Haussmann – and Napoleon III – were responsible for the arrival of the flâneur, then it was because the city finally had things worth seeing. For the privileged flâneur, to walk wasn’t the means to acquire freedom – to walk was freedom.

“For the perfect flâneur,” writes poet Charles Baudelaire in 1863, “it is an immense joy to set up house in the heart of the multitude, amid the ebb and flow of movement, in the midst of the fugitive and the infinite… we might liken him to a mirror as vast as the crowd itself; or to a kaleidoscope gifted with consciousness.” Every great city’s pedestrian-minded public thoroughfares—the Avenue des Champs-Élysées and Cromwell Road; Fifth Avenue and Sunset Boulevard—are the rightful territory of the universal flâneur. Writers like Baudelaire compared the idyll of city walking to that other 19th-century innovation, photography; the flâneur existed inside a living daguerreotype, as if they had entered the hazy atmosphere of an impressionist painting, the gas lamps illuminating the drizzly fog of a Parisian evening. For the 20th-century German philosopher Walter Benjamin, who analyzed the activity in his uncompleted magnum opus The Arcades Project, the flâneur was the living symbol of modernity; he wrote that the “crowd was the veil from behind which the familiar city as phantasmagoria beckoned to the flaneur.”

I often played the role of flâneur during the two years my wife and I lived in Manhattan in a bit of much-coveted rent-controlled bliss. When your estate is 200 square feet, there’s not much choice but to be a flâneur, and so we occupied ourselves with night strolls through the electric city, becoming familiar with the breathing and perspiring of the metropolis. Each avenue has its esprit de place: stately residential York, commercial First and Second with their banks and storefronts, imperial Madison with its regal countenance, Park with its aura of old money, barely reformed Lexington with its intimations of past seediness. In the city at night, we availed ourselves of the intricate pyrotechnic window displays of Barneys and Bloomingdale’s, of the bohemian leisure of the Strand’s displays in front of Central Park, of the linoleum cavern underneath the 59th Street Bridge, and of course Grand Central Terminal, the United States’ least disappointing public space.

Despite gentrification, rising inequity, and now the pandemic, New York still amazingly functions according to what the geographer Edward Soja describes in Thirdspace as a place where “everything comes together… subjectivity and objectivity, the abstract and the concrete, the real and the imagined, the knowable and the unimaginable.” Yet Manhattan is now perhaps more a nature preserve for the flâneur than a natural habitat, as various economic and social forces over the past few decades have conspired to make our species extinct. The automobile would seem to be a natural predator for the type, and yet even in the deepest environs of the pedestrian-unfriendly suburbs the (now largely closed) American shopping mall fulfilled much the same function as Baudelaire’s arcades. To stroll, to see, to be seen. A new threat has emerged in the form of Amazon, which portends the end of the brick-and-mortar establishment, the coronavirus perhaps being the final death of the flâneur. If that type of walker was birthed by the industrial revolution, then it now appears late capitalism is his demise, undone by our new tyrant Jeff Bezos.

The rights of the flâneur were never equally distributed; the canonical accounts make scant mention of needing to be hyperaware of one’s surroundings, of needing to carry keys in a fist, or of having to arm oneself with mace. While it’s not an entirely unknown word, flâneuse is a much rarer term, and it’s clear that the independence and assumed safety that pedestrian exploration implies are more often than not configured as masculine. Women have, of course, been just as prodigious in mapping the urban space with their feet as men have, with Lauren Elkin joking in Flâneuse: Women Walk the City in Paris, New York, Tokyo, Venice, and London that many accounts assume a “penis were a requisite walking appendage, like a cane.” She provides a necessary corrective to the male-heavy history of the flâneur, while also acknowledging that the risks are different for women. Describing the anonymity that such walking requires, Elkin writes that “We would love to be invisible the way a man is. We’re not the ones to make ourselves visible… it’s the gaze of the flaneur that makes the woman who would join his ranks too visible to slip by unnoticed.”

As a means of addressing this inequity that denies more than half the world’s population safe passage through public spaces, the activist movement Take Back the Night held its first march in Philadelphia after the 1975 murder of the microbiologist Susan Alexander Speeth as she was walking home. Take Back the Night used one of the most venerable of protest strategies—the march—as a means of expressing solidarity, security, defiance, and rage. Andrea Dworkin stated the issue succinctly in her treatise “The Night and Danger,” explaining that “Women are often told to be extra careful and take precautions when going out at night… So when women struggle for freedom, we must start at the beginning by fighting for freedom of movement… We must recognize that freedom of movement is a precondition for everything else.” Often beginning with a candlelight vigil, participants do exactly that which they’re so often prevented from doing—walking freely at night. Too often, paeans to walking penned by men wax rhapsodic about the freedom of the flâneur but forget how gendered the simple act of walking is. Dworkin’s point is that women never forget it.

Few visuals are quite as powerful as thousands of women and men moving with intentionality through a public space, hoisting placards and signs, chanting slogans, and reminding the powers that be what mass mobilization looks like. There is a debate to be had about the efficacy of protest. But at its most charged, a protest seems as if it can change the world; thousands of feet walking as one, every marcher a small cell in a mighty Leviathan. In that uncharacteristically warm February of 2003, I joined the 5,000 activists who marched through the Pittsburgh neighborhood of Oakland against the impending war in Iraq. There were the old hippies wearing t-shirts against the Vietnam War, the slightly drugged-out-looking college-aged Nader voters, Muslim women in vermillion hijabs and men in olive keffiyehs, the Catholic Workers and the Jews for Palestine, the slightly menacing balaclava-wearing anarchists, and of course your well-meaning liberals such as myself.

We marched past Carnegie Mellon’s frat row, young Republicans jeering us with cans of Milwaukee’s Best, through the brutalist concrete caverns of the University of Pittsburgh’s campus, under the watchful golem that was the towering gothic Cathedral of Learning, and up to the Fifth Avenue headquarters of CMU’s Software Engineering Institute, a soulless mirrored cube reflecting the granite gargoyles, blackened by decades of steel mill exhaust, perched on St. Paul’s Cathedral across the street. Supposedly both the SEI and the adjacent offices of the RAND Corporation had DoD contracts, developing software that would be used for drone strikes and smart bombs. With righteous (and accurately aimed) indignation, the incensed crowd chanted, and we felt like a singular being. On that same day, in 650 cities around the world, 11 million others marched in history’s largest global protest. It felt as if by walking we’d stop the invasion. Reader, we did not stop the invasion.

Despite those failures, the experience is indicative of how walking alters consciousness. Not just in a political sense, but in a personal one as well (though the two are not easily disentangled). There is a methodology for examining how walking alters our subjectivity, a discipline with the lofty and vaguely threatening name of “psychogeography.” The theorist Guy Debord saw the practice as a means of reenchanting space and place, developing a concept called the dérive, which translates from the French as “drifting,” whereby participants “drop their usual motives for movement and action, their relations, their work and leisure activities, and let themselves be drawn by the attractions of the terrain and the encounters they find there,” as he is quoted in the Situationist International Anthology. Through this heightened version of the flâneur’s practice, psychogeographers perceived familiar streets, squares, and cities from an entirely different perspective. Other psychogeographical activities included tracing out words by the route a walker takes through the city, or mapping smells and sounds. The whole thing has an anarchic sensibility about it, leavened with the whimsy of the Dadaists, and it was just as enthused with praxis as with theory.

For example, in his travelogue Psychogeography the Anglo-American novelist Will Self journeys from JFK to Manhattan on foot: sneakers crunching over refuse alongside the Van Wyck, through the metropolitan detritus of those scrubby brown patches, the null-voids between somewhere and somewhere else. Nothing can really compare to entering New York on foot, as Self did. It’s fundamentally different from arriving in a cab driving underneath the iconic steel girders of the Manhattan Bridge, or being ferried into the Parthenon that is Grand Central, or even disembarking from a Peter Pan bus in the grody cavern of Port Authority. Walking is a “means of dissolving the mechanized matrix which compresses the space-time continuum,” Self writes, with the walker acting as “an insurgent against the contemporary world, an ambulatory time traveler.” For the psychogeographers, how we move is how we think, so that if we wish to change the latter, we must first alter the former.

So it would seem. Writing in the 18th century, Jean-Jacques Rousseau remarked in The Confessions that “I can only meditate when I’m walking… my mind only works with my legs.” In his autobiography Ecce Homo, Friedrich Nietzsche enjoined, “Sit as little as possible; do not believe any idea that was not born in the open air and of free movement…. All prejudices emanate from the bowels.” Meanwhile, his contemporary Søren Kierkegaard wrote that “I have walked myself into my best thoughts.” Immanuel Kant, the most celebrated of the walking philosophers, took his daily constitutionals across Königsberg’s bridges so punctually that merchants set their watches by him. Wallace Stevens famously used to compose his poems as he stomped across the antiseptic landscape of Hartford, Conn. He walked as scansion, his wingtips pounding out iambs and trochees with the wisdom that understands verse is as much of the body as it is of the mind, so that “Perhaps/The truth depends on a walk.” Walking is a type of consciousness, a type of thinking. Walking is a variety of reading, the landscape unspooling as the most shining of verse, so that every green leaf framed against a church’s gothic tower in a dying steel town, both glowing with an inner light out of the luminescence of the golden hour, is the most perfect of poems, seen only by her who gives it the time to be considered.

Image Credit: SnappyGoat

Henry Vaughan’s Eternal Alchemy

-

Mercury has a boiling point of 674.1 degrees Fahrenheit and a freezing point of -37.89 degrees, rendering it the only metal that’s liquid at room temperature. Malleable, fluid, transitory—the element rightly lends itself to the adjective “mercurial,” a paradoxical substance that has the shine of silver and the flow of water, every bit as ambiguous as the Roman god from whom it derives its name. Alchemists in particular were drawn to mercury’s eccentric behavior, as Sam Kean explains in The Disappearing Spoon: And Other True Tales of Madness, Love, and the History of the World from the Periodic Table of the Elements, writing that they “considered mercury the most potent and poetic substance in the universe.” Within sequestered granite basements and hidden cross-timbered attics, in cloistered stone monasteries and under the buttresses of university halls, alchemists tried to encounter eternity, and very often their medium of conjuration was mercury. The 13th-century English natural philosopher Roger Bacon writes in Radix Mundi that “our work is hidden… in the body of mercury,” while in the 17th century the German physician Michael Maier declared in Symbola aurea mensae that “Art makes metals out of… mercury.” Quicksilver, which pools like some sort of molten treasure, is one of the surprising things of this world. To examine mercury and its undulations is to see time itself moving, metal acted upon by entropy and flux, the disintegration of all that is solid into pure water. Liquid metal mercury is a metaphysical conceit.

Alchemy has been practiced since antiquity, but the decades before and during the Scientific Revolution were a golden age for the discipline. In England alone there were practitioners like John Dee, Edward Kelley, and Robert Boyle (who was also a chemist proper), and then there was the astrologer and necromancer Isaac Newton, who later had some renown as a physicist and mathematician. Such were the marvels of the 17th century, this “age of mysteries!” as described by the Anglo-Welsh poet Henry Vaughan, born 400 years ago tomorrow. He could have had in mind his twin brother, Thomas, among the greatest alchemists of that era, whose “gazing soul would dwell an hour, /And in those weaker glories spy/Some shadows of eternity.” Writing under the name Eugenius Philalethes, Thomas was involved in a project that placed occultism at the center of knowledge, seeing in the manipulation of matter answers to fundamental questions about reality. Though Henry is the more remembered of the two brothers today (“remembered” being a relative term), the pair were intellectually seamless in their own time, seeing in both poetry and alchemy a common hermetic purpose. Four centuries later, however, alchemy no longer seems an avenue to eternity. Thomas’s own demise demonstrated the deficiencies of those experiments, for despite mercury’s supposed poetic qualities, among its more tangible properties is an extreme toxicity: when heated in a glass it can be accidentally inhaled, and when handled by an ungloved hand (as alchemists were apt to do) it can be absorbed through the skin. The resultant poisoning has several symptoms, not least of which are muscle spasms, vision and hearing problems, hallucinations, and ultimately death. Such was the fate of Thomas after some mercury got up his nose in that apocalyptic year of 1666, when plague and fire destroyed London.

Thomas was an Anglican vicar removed from his position for “being a common drunkard, a common swearer… a whoremaster,” and he would be satirized nearly a century later by Jonathan Swift in A Tale of a Tub as the greatest author of nonsense “ever published in any language.” Yet he was still in pursuit of something very particular and important, though it’s perhaps easy to mock alchemy nearly four centuries later. In works like Anima Magica Abscondita; or A Discourse on the Universal Spirit of Nature and Aula Lucis, or The House of Light, Thomas evidenced a noble and inquisitive spirit about nature, a desire to “Have thy heart in heaven and thy hands upon the earth,” as he writes in the first of those two volumes. Thomas searched for eternity, a sense of what the fundamental units of existence are, and he rearranged matter and energy until it killed him.

“Their eyes were generally fixed on higher things,” writes Michael Schmidt in Lives of the Poets, and indeed whether in the manipulation of chemicals or of words, both Vaughans desired the transcendent, the infinite, the eternal; what Henry called “authentic tidings of invisible things.” This essay is not mainly about Thomas—it’s about Henry, a man who rather than making matter vibrate chose to arrange words, and in abandoning chemicals for poetry came much closer to eternity. Thomas was an alchemist of things and Henry was one of words, but the goal was identical—”a country/Afar beyond the stars,” where things are shining and perfect. This necessarily compels a question—how eternal can any poetic voice ever actually be? Mine is not a query about cultural endurance; I’m not asking how long a poet like Vaughan will be studied, read, treasured. I’m asking how possible it is for Vaughan—for any poet—to ascend to the perspective of Aeternitas, to strip away past, present, and future, to see all as if one gleaming moment of light, divorced from our origins, our context, our personality, and in that mystical second to truly view existence as if from heaven. And how possible is it, for the purposes of the poet (and the reader), to convey something of this experience in the fallen and imperfect medium of language?

Because he was a man of rare and mystical faith, for whom the higher ecstasies of metaphysical reverence subsumed the lowly doldrums of moralistic preening, it can be easy to overlook that Vaughan—like all of us who are born and die—lived a specific life. The son of minor Welsh gentry whose second language was English (he often wrote “in imitation of Welsh prosody,” as the editors of the Princeton Handbook of Poetic Terms note), Vaughan was most likely a soldier among the Royalists, brother of the esteemed scholar Thomas (of course), and a physician for his small borderlands town, until dying in 1695 at the respectable age of 74, the “very last voice contained entirely within what many regard as the great century of English poetry,” as Schmidt writes. Vaughan may not be as celebrated as other visionary poets, yet he deserves to be included alongside William Blake and Emily Dickinson as one of the most immaculate. Such perfection took time.

His earliest “secular” verse is composed of largely middling attempts at aping the Cavalier Poets with their celebrations of leisure and the pastoral, yet sometime around 1650, “Vaughan seems to have experienced a spiritual upheaval more in the nature of a regeneration than of a conversion. He violently rejected secular poetry and turned to devotion,” as Miriam K. Starkman explains in 17th Century English Poetry. This turning of the soul was perhaps initiated by the trauma of the Royalist loss in the English Civil War, the death of his younger brother, William, and most of all his discovery of the brilliant metaphysical poet and Anglican theologian George Herbert. But regardless of the reasons, his verse and faith took on a shining, luminescent quality, and his lyrics intimated the quality of burning sparks from hot iron. “Holy writing must strive (by all means) for perfection and true holiness,” Vaughan writes in the preface to his greatest collection, 1655’s Silex Scintillans, “that a door may be opened to him in heaven” (the Latinate title translates to “The Fiery Flint,” in keeping with his contention that “Certain divine rays break out of the soul in adversity, like sparks of fire out of the afflicted flint”).

He was, first and foremost, a Christian, and an Anglican one at that, writing his verse during the Puritan Interregnum when his church was abolished, and those of his theological position prohibited from their community and liturgy. In his prose treatise of 1652, The Mount of Olives, or Solitary Devotions, he offers “this my poor Talent to the Church,” now a “distressed Religion” for whom “the solemn and public places of meeting… [are] vilified and shut up.” To these traumas must be added Vaughan’s marginalization in England as a Welshman, for as Schmidt writes, “Wales, [is] his true territory,” indeed so much so that he called himself the “Silurist” after the Silures, the fearsome Celtic tribe that had once made his birthplace their home. In championing Vaughan, or any poet, as “eternal,” we risk reducing him, subtracting that which makes him human. Silex Scintillans is consciously written in a manner whereby it can be easy to strip the lyrics of theological particularity, which makes him an easy poet for the heretics among us to champion, and yet Vaughan (ecstatic though he may be) was an orthodox Anglican.

Vaughan’s particular genius is in being able to write from a perspective that seems eternal, for theology may be of man but faith is of God, and he is a poet who understands the difference. The result is a verse that, though written by an Anglican Welshman in the 17th century, reads (when it’s soaring the highest) as if it came from some place alien, strange, and beautiful. Consider one of his most arresting poems from Silex Scintillans, titled “The World.” Along with John Donne and Dickinson, Vaughan is among the best crafters of first lines in the history of poetry, writing “I saw Eternity the other night, /Like a great ring of pure and endless light.” This is a poem that begins like the Big Bang. A simple iambic rhyming couplet, its epigraphic minimalism lends itself to the very eternity of which it speaks. To my contemporary ear, there is something oddly colloquial about Vaughan’s phrasing, speaking of seeing “Eternity the other night” as you might mention having run into a friend somewhere, even though what the narrator has experienced is “Time in hours, days, years, /Driv’n by the spheres/Like a vast shadow mov’d; in which the world/And all her train were hurl’d.”

In its understatement there’s something almost funny about the line, as the casual tone before the enjambment transitions into the sublime cosmicism after the first comma. That’s the other thing—the narrator had this transcendent experience “the other night”—the past tense is crucial. Eternity is possible, but it’s only temporary (at least while we live on earth). Schmidt correctly observes that Vaughan’s lyric “does not sustain intensity throughout, dwindling to deliberate allegory,” though that’s true of any poem which begins with a powerful and memorable line—Donne and Dickinson weren’t ever able to sustain such energy through an entire lyric either. What’s so powerful in “The World” is that this inevitable rhetorical decline is reminiscent of the mystical experience itself, whereby that enchanted glow must necessarily be diminished over time. Leah Marcus argues in Childhood and Cultural Despair: A Theme and Variations in Seventeenth-Century Literature that the “dominant mood in Vaughan’s poetry is pessimism, and a sense of deep loss which occasional moments of vision can only partly alleviate.” Such an interpretation is surely correct, for loss marked not only Vaughan’s life but his mystical experiences as well, where despite his spiritual certainty (and he is not a poet of doubt), transcendence itself must be bounded by space and time, a garden we are only sometimes permitted to visit.

Better to describe Vaughan as the poet of eternity deferred, an ecstatic who understands that in a fallen world paradise is only ever momentary. “My Soul, there is a country/Afar beyond the stars,” Vaughan writes in another poem entitled “Peace,” continuing “Where stands a winged sentry… There, above noise and danger/Sweet Peace sits, crown’d with smiles.” He writes with certainty, but not with dogmatism; perhaps assured of his own election, he doesn’t mistake his earthly present for his heavenly future, and that combination of assurance and humility has lent itself to the eternal mode of the lyrics. Note the reference to where perfection can be found, possibly an echo of Hamlet’s “undiscovered country,” as well as the cosmological associations of such paradise being located deep within the outer universe. Along with his contemporary Thomas Traherne, Vaughan is one of the great interstellar poets, though such imagery must necessarily be read as allegorical, the mystic translating experience into metaphor. “Private experience is communicated in the language of Anglican Christianity,” observes Schmidt, and such language must forever be contingent, a gesture towards the ineffable but by definition not the ineffable itself. “His achievement,” Schmidt writes, “is to bring the transcendent almost within reach of the senses.”

There are brilliant poets and middling ones, influential and obscure, radical and conservative, but the coterie of those able to look upon eternity with transparent eye and encapsulate their experience in prosody, no matter how relative and subjective, are few. During the Renaissance, Herbert with his “something understood” was one of these poets; Donne with his “little world made cunningly” was another. In the 19th century there were Gerard Manley Hopkins and his vision of God’s grandeur “shining from shook foil,” alongside Christina Rossetti, who had “no words, no tears.” In the 20th there was Robert Frost (hackneyed though he is misremembered) intoning that “Nature’s first green is gold” and Marianne Moore for whom “Mysteries expound mysteries;” Jean Toomer’s “lengthened tournament for flashing gold,” and Denise Levertov who could “Praise/god or the gods, the unknown;” Louise Glück whose hands are “As empty now as at the first note,” Martín Espada who prays that “every humiliated mouth… fill with the angels of bread,” and Kaveh Akbar who “always hoped that when I died/I would know why.” Then of course there are the aforementioned Dickinson and Blake, frequently John Milton, often Walt Whitman, and most of the time Traherne. It is hard to discern a through line among the eternal poets, eternal because they seem to have gone to that permanent place and returned with some knowledge. When Schmidt writes that “Vaughan’s chosen territory” was the “area beyond the senses, accessible only to intuition,” I suspect that the same description could be made of those on my truncated syllabus.

Henry Vaughan’s poetry demonstrates the difference between eternity and immortality. The latter is the juvenile desire of the alchemists, this intention to transmute base metals to gold, to acquire the philosopher’s stone, to construct homunculi and perhaps live forever. Immortality is understood as being like this life right now, only with more of it. Eternity is something else entirely. Another difference is that immortality isn’t real. Writing in The Mount of Olives, Vaughan describes our lives as a “Wilderness… A darksome, intricate wood full of ambushes and dangers; a forest with spiritual hunter, principalities and powers.” By contrast, eternity is that domain of perfection—it is the gentle buzz of a bee in the stillness of a summer dusk, the scent of magnolia wafting on a breeze, the reflection of moonlight off an unrippling pond. “They are all gone into the world of light! /And I alone sit ling’ring here,” Vaughan writes in another of his perfect opening lines, one of the most poignantly sad sentences in Renaissance poetry. The world-weariness is there, the estrangement, the alienation—but the world of light is still real. “I cannot reach it, and my striving eye/Dazzles at it, as at eternity,” he mournfully writes, and yet “we shall gladly sit/Till all be ready.” What faith teaches is that we’re all exiles from that Zion that is eternity, to which we shall one day return. What Vaughan understands is that if we seek eternity, it is now.

Marvelous Mutable Marvell

-

Twelve years to the day after King Charles I ascended the scaffold at Whitehall to become the first European sovereign decapitated by his subjects, his son had the remains of Oliver Cromwell, the Lord Protector during England’s republican decade and the man who as much as any other was responsible for the regicide, exhumed from his tomb in Westminster Abbey so that the favor could be returned. The Interregnum signified dreary theocracy, when the theaters were closed, Christmas abolished, and even weekend sports banned. By contrast, Charles promised sensuality, a decadent rejection of austere radicalism. But if there was one area of overlap, it was in the limits of rapprochement, for though pardons were issued to the opposition rank and file, Charles still ordered the execution of 31 signers of his father’s judgement, remembered by the subject of this essay as that moment when the king had “bowed his comely head/Down as upon a bed.” Even the dead would not be spared punishment, for in January of 1661 Cromwell was pulled from his grave, flesh still clinging to bone, languid hair still hanging from grey scalp, and the Royalists would hang him from a gibbet at Tyburn.


Eventually Cromwell’s head would be placed on a pike in front of Westminster Hall, where it would molder for the next two decades. Such is the mutability of things. Through this orgy of revenge, a revolutionary who’d previously served as Latin Secretary (tasked with translating diplomatic missives) slunk out of London. Already a celebrated poet, he’d been appointed by Cromwell because of his polemical pamphlets. After the blanket pardon he emerged from hiding, but still Charles had him arrested, perhaps smarting from the author’s 1649 description of Royalists as a “credulous and hapless herd, begott’n to servility, and inchanted with these popular institutes of Tyranny.” By any accounting John Milton’s living body should have been treated much the same as Cromwell’s dead one, for he was arrested, imprisoned, and no doubt fully expected to be flayed and hanged at Tyburn. But he would avoid punishment (and go on to write Paradise Lost). Milton’s rescue can be attributed to another poet a decade his junior.

Once a keen supporter of Cromwell, this younger poet was now completely trusted by the new government, and his word was enough to keep Milton from the scaffold. Before the regicide he’d been a royalist, during the Interregnum a republican, and upon the Restoration a royalist again, all of which earned him the sobriquet of “The Chameleon.” That perhaps impugns him too much, for if Andrew Marvell understood anything it was fickle mutability, how in a mercurial reality all that can be steadfast is merely the present. He was a man loyal not to regimes but to friends. Born on this day 400 years ago, he is among the greatest poets, and the relative silence that accompanies his quadricentenary speaks to his major theme: our eventual oblivion. “Thy beauty shall no more be found,” Marvell’s narrator warns in his most famous poem, and it might as well be prophecy for his own work.

“His grave needs neither rose nor rue nor laurel; there is no imaginary justice to be done; we may think about him… for our own benefit, not his,” writes T.S. Eliot in his seminal essay “Andrew Marvell,” published on this day a century ago in the Times Literary Supplement. Just as John Donne had been known primarily for his sermons, so too was Marvell remembered chiefly as a politician (he was elected as an MP prior to the Restoration, a position he held until his death), until Eliot encouraged critics to take a second look at his verse. Thanks to Eliot’s recommendation, Marvell came to be understood as one of the greatest poets of the century, second perhaps only to his friend Milton. Jonathan Bate writes in the introduction to the Penguin Classics edition of Andrew Marvell: The Complete Poems that his “small body of highly wrought work constitutes English poetry’s most concentrated imaginative investigation of the conflicting demands of the active and the contemplative life, the self and society, the force of desire and the pressure of morality, the detached mind and the body embedded in its environment.” Perhaps Marvell deserves a little bit of rose or rue or laurel. Yet his quadricentennial passes without much of a mention, though presumably some other writers will (hopefully) consider his legacy today. A quick Google search shows that several academic societies are providing (virtual) commemoration, including a joint colloquium hosted by St. Andrews, and appropriately enough a Marvell website maintained by the University of Hull in his northern English hometown (which features an insightful essay by Stewart Mottram about Marvell and COVID-19). Yet by comparison to other recent literary anniversaries—for Milton, Mary Shelley, Walt Whitman, Charles Dickens, and of course William Shakespeare—there has largely been silence about Marvell, even though Eliot once said regarding his poetry that a “whole civilization resides in these lines.”

Marvell published only a handful of lyrics in his own life, and while he authored examples of virtually every popular genre of the time, from ode to panegyric, country house poem to satire (though not epic), he stretched their bounds, made them unclassifiable and slightly strange. Having written across three major periods of 17th-century English literary history—the Caroline reign, the Commonwealth, and then the Restoration—he fits awkwardly within both of the dominant traditions of that age, metaphysical and Cavalier poetry. “Was he classic or romantic, metaphysical or Augustan, Cavalier or Puritan… republican or royalist?” asks Elisabeth Story Donno in the introduction to Andrew Marvell: The Critical Heritage, and indeed such ambiguity threads through his verse and especially his life. He was the Hull-born son of a stern Calvinist minister who, after dalliances on the continent, possibly converted to Catholicism (tarred on his return as an “Italo-Machiavellian”), who served kings and republicans, and who seamlessly switched political and religious allegiances.


The great themes of Marvell’s biography and his poetry are mutability and transitoriness; his verse is marked by the deep sense that as it once was, it no longer is; and as it is, it no longer shall be. “Underlying these themes is the knowledge that in love or action time can’t be arrested or permanence achieved,” writes Michael Schmidt in Lives of the Poets. “A sanctioned social order can be ended with an ax, love is finite, we grow old.” With an almost Taoist sense of the impermanence of life, Marvell casts off opinions and positions with ease, and even his own place within the literary firmament is bound to change and alter. “But at my back I always hear/Time’s winged chariot hurrying near,” as Marvell astutely notes. A poet who has “no settled opinions, except the fundamental ones,” writes Schmidt, though his “political readjustments in times of turmoil have not told against him. There is something honest seeming about everything he does, and running through his actions a constant thread of humane concern.” Steadfast though we all wish we could be, Marvell is the poet of Heraclitus’s stream, always reminding us that this too shall pass. “Even his name is slippery,” writes Bate. “In other surviving documents it is as variously spelt as… Marvell, Mervill, Mervile, Marvel, Mervail. We are not sure how to pronounce it.” A convenient metaphor.

However Marvell pronounced his name, it’s his immaculate words and the enchanted ways that he arranged them that are cause for us to return to him today, a poet whose “charms are as real as they are hard to define,” as Schmidt writes. Marvell has no great drama in his oeuvre, no Hamlet or King Lear; he penned no epics, no Paradise Lost. Donne, one of his great influences, was more innovative, and George Herbert provided a more unified body of work. And yet some six or so poems—”The Garden,” “Upon Appleton House,” “An Horatian Ode Upon Cromwell’s Return from Ireland,” the sequence known as the Mower poems, “Bermudas,” and of course “To His Coy Mistress”—are among the most perfect in the language, all written around 1650, when the poet acted as tutor to the daughter of Lord General Thomas Fairfax, commander of the New Model Army before Cromwell. Schmidt argues that “Most of his poems are in one way or another ‘flawed’… [the] tetrameter couplets he favored prove wearying… Some of the conceits are absurd. Many of the poems… fail to establish a consistent perspective… Other poems are static: an idea is stated and reiterated in various terms but not developed… There are thematic inconsistencies.” Yet despite Schmidt’s criticism of Marvell, he can’t help but conclude that “because of some spell he casts, he is a poet whose faults we not only forgive but relish.”

That spell owes much to Marvell’s overweening honesty when it comes to those faults, themselves a function of the world in flux, which is to say a reality where perfection must always be deferred. In “The Garden,” one of the great pastoral poems, he describes “The mind, that ocean where each kind/Does straight its own resemblance find;/Yet it creates, transcending these,/For other worlds, and other seas;/Annihilating all that’s made.” His “Upon Appleton House,” which takes part in the common if retroactively odd convention of the country-house poem, wherein the terrain of a wealthy estate is described, contrasts the artifice of the garden with its claim to naturalism; in “An Horatian Ode Upon Cromwell’s Return from Ireland,” he gives skeptical praise to the Lord Protector (back from a genocidal campaign across the sea), where the “forward youth that would appear/Must now forsake his Muses dear,/Nor in the shadows sing/His numbers languishing.” Mutability defined not just Marvell’s politics but his poetry; steadfastly Christian (of some denomination or another) he gives due deference to the eternal, but the affairs of men are fickle, and time’s arrow moves in only one direction.

History’s unforgiving erraticism is sometimes more apparent than at others, and Marvell lived during the most chaotic of English centuries. When the Commonwealth that Marvell served collapsed and Charles II came back from his exile in 1660, it was to triumphant fanfare in a London that was exhausted after years of stern Puritan dictatorship. Crowds thronged the streets wearing sprigs of oak leaves to welcome the young King Charles and the spring of Restoration. In the coronation portrait by John Michael Wright, the 30-year-old king is made to appear the exact opposite of priggish Cromwell; he is bedecked in ribbons and lace, tight white hose and cascading violet velvet, his curly chestnut wig long in the French style (which he would have picked up in the Sun King’s court), with a thin brunette mustache above a wry smile, a royal scepter in one hand and an orb in the other. If Cromwell was a Puritan hymn, then Charles was a baroque sonata by Henry Purcell. Marvell learned the melodies of both. His contemporary biographer Nigel Smith notes in Andrew Marvell: The Chameleon that the poet “made a virtue and indeed a highly creative resource of being other men’s (and women’s) mirrors.” What is espied in a mirror, it must be said, depends on what you put in front of it. Mirrors are ever-changing things. Such is mutability.

There must be a few caveats when I argue that Marvell’s fame has diminished. Firstly, to any of my fellow scholars of the Renaissance (now more aridly known as “early modernists”) who are reading: of course I know that you still study and teach Marvell, of course I know that articles, dissertations, presentations, and monographs are still produced on him, of course I know that all of us are familiar with his most celebrated poems. It would be a disingenuous argument to claim that I’m “rediscovering” a poet who remains firmly canonical, at least among my small tribe. Secondly, to any conservatives reading: I’m not suggesting that Marvell has been eclipsed because he’s been “cancelled,” or anything similarly silly, though honestly there would probably be ample reason to do so even if that were a thing (and as perfect a poem as it is, “An Horatian Ode Upon Cromwell’s Return from Ireland” is problematic to say the least), but bluntly his stock isn’t high enough to warrant being a target in that way. Thirdly, and most broadly, I’m not claiming that his presence is absent from our imagination, far from it. If anything, Marvell flits about as a half-remembered reference, a few turns of phrase that lodge in the brain from distant memories of undergraduate British literature survey courses, or a dappling of lines that end up misattributed to some other author (I’ve seen both “Time’s winged chariot” and “green thought in a green shade” identified with Shakespeare). Modern authors as varied as Eliot, Robert Penn Warren, Ursula K. Le Guin, William S. Burroughs, Terry Pratchett, Philip Roth, Virginia Woolf, Archibald MacLeish, Primo Levi, and Arthur C. Clarke either quote, reference, allude to, or directly write about Marvell. Even Stephen King quotes from “To His Coy Mistress” in Pet Sematary. Regardless, on this muted birthday, ask yourself how many people you know are aware of Marvell, among a handful of the greatest poets writing during the greatest period of English poetry.

“Thus, though we cannot make our Sun/Stand still, yet we will make him run.” So concludes Marvell’s greatest accomplishment, “To His Coy Mistress.” Like an aesthetic syllogism, the poem piles up gorgeous imagery in rapid succession to make an argument about how the present must be seized, for immortality is an impossibility. Of course, the conceit of the poem isn’t novel; during the Renaissance it was downright cliché. Carpe diem poetry—you may remember the term from the Peter Weir film Dead Poets Society—goes back to Latin classics by poets like Catullus and Lucretius. A representative example would be Marvell’s contemporary, the Cavalier poet Robert Herrick, with his injunction to “Gather ye rosebuds while ye may.” These poems commonly made an erotic argument, the imagined reader a woman whom the narrator is attempting to seduce into bed with an argument about the finite nature of life. Donne’s obvious precursor “To His Mistress Going to Bed (Elegy 19)” is perhaps the most brilliant example of the form. Bate emphasizes that such a “motif is typical of the Cavalier poets… Marvell, however, takes the familiar Cavalier genre and transposes it to a realm beyond ideology.” When reading Donne, the “poems are written in the very voice of a man in bed with a real woman,” as Bate writes, most likely his beloved wife, Anne, about whom he had no compunctions in blatantly stating his sexual yearning. Marvell, by contrast, “never fleshes out his imaginary mistress,” and the result is that “He is not really interested in either the girl or the Cavalier pose; for him, it is experience itself that we must seize with energy, not one particular lifestyle.” The mistress whom Marvell is trying to woo is his own soul, and the purpose is to consciously live in the present, for that’s all that there is.

“Had we but World enough, and Time,” Marvell begins, “This coyness Lady were no crime.” Ostensibly an argument against chastity, his reasoning holds not just for sex (though the declaration that “My vegetable Love should grow” a few lines later certainly implies that it’s not not about coitus). Eroticism marks the first half of the lyric, explicitly tied to this theme of time’s fleetingness while imagining the contrary. “A hundred years should go to praise/Thine Eyes, and on thy Forehead Gaze. /Two hundred to adore each breast:/But thirty thousand to the rest. /An Age at least to every part, /And the last Age should show your heart.” Marvell is working within the blazon tradition, whereby the physical attributes of a beloved are individually celebrated, yet Bate is correct that the hypothetical woman to whom the lyric is addressed is a cipher more than a person—for nothing actually is described, merely enumerated. For that matter, the time frames listed—100 years, 30,000, an ambiguous “Age”—all seem random, but as a comment on eternity, there’s a wisdom in understanding that there’s no difference between a second and an epoch. He similarly reduces and expands space, calling upon venerable exoticized images to give a sense of the enormity of the world (and by consequence how small individual realities actually are): “To walk, and pass our long Loves Day. /Thou by the Indian Ganges side… I would/Love you ten year before the Flood:/And you should if you please refuse/Till the Conversion of the Jews.” Marvell’s language (orientalist though it may be) is playful; it calls to mind the aesthetic sensuality of Romantics like Samuel Taylor Coleridge, as when he imagines the mistress picking rubies off the ground. But the playfulness is interrupted by the darker tone of the second half of the poem.

“But at my back I always hear/Time’s winged chariot hurrying near:/And yonder all before us lie/Desarts of vast Eternity.” The theology of “To His Coy Mistress” is ambiguous; are these deserts of vast eternity the same as the immortality of Christian heaven, or do they imply extinction? Certainly, the conflation of death with a desert seems to deny any continuation. It evokes the trepidation of the French Catholic philosopher and Marvell’s contemporary Blaise Pascal, who in his Pensées trembled before the “short duration of my life, swallowed up in an eternity before and after, the little space I fill engulfed in the infinite immensity of spaces whereof I know nothing.” Evocatively gothic language follows: Marvell conjures the beloved’s moldering corpse in “thy marble Vault,” where “Worms shall try/That long preserv’d Virginity: And your quaint Honour turn to dust,” a particularly grotesque image of the phallic vermin amidst a decomposing body, especially as “quaint” was a contemporary pun for genitalia. “The Grave’s a fine and private place, /But none I think do there embrace.” In the final third of the poem, Marvell embraces an almost mystical rhetoric. “Let us roll all our Strength, and all/Our sweetness, up into one Ball:/And tear our Pleasures with rough strife, /Thorough the iron gates of Life.” The couple’s melding together is obviously a sexual image, but he’s also describing a singularity, a monad, a type of null point that exists beyond space and time. If there is one clear message in “To His Coy Mistress,” it’s that this too shall pass, that time will obliterate everyone and everything. But paradoxically, eternity is the opposite of immortality; for if eternity is what we desire, then it’s found in the moment. Traditional carpe diem poetry demands that we live life fully because one day we’ll die; Marvell doesn’t disagree, but his command is that to truly live life fully is to understand that it’s the present that exists eternally, that we always only live in this present right now, and so we must ask ourselves what the significance of this brief second is. What’s mere fame to that?

Marvell’s star has faded not because he wasn’t a great poet, not because he deserves to be forgotten, not because he’s been replaced by other writers or because of any conscious diminishment of his stature. His fame has ebbed because that’s what happens to people as time moves forward. The arguments about what deserves to be read, taught, and remembered often overlook the fact that forgetting is a function of what it means to be human. What makes “To His Coy Mistress” sublime is that there is a time-bomb hidden within it, the poet’s continuing obsolescence firmly confirming the mutability of our stature, of our very lives. Marvell’s own dwindling fame is a beautiful aesthetic pronouncement, a living demonstration of time’s winged chariot, the buzzing of the wings of oblivion forever heard as a distant hum. After his death in 1678, he was ensconced within seemingly ageless marble and set within the catacombs of St. Giles-in-the-Fields, a gothic stone church at the center of London, though true to its name it was at the edge of the city when Marvell was alive. Ever as fortune’s wheel turns, the neighborhood is now bathed in perennial neon light—theaters with their marquee signs and the ever-present glow of electronic billboards. Red double-decker buses and black hackney carriages dart past the church where Marvell slumbers, unconscious through empire, through the Industrial Revolution, through the explosions of the Blitz.

“But a Tombstone can neither contain his character, nor is Marble necessary to transmit it to posterity,” reads his epitaph, “it is engraved in the minds of this generation, and will be always legible in his inimitable writings, nevertheless.” Probably not. For one day there will be fewer seminars about Marvell, and then no more; less anthologizing, and then the printings will stop; less interest, except in the minds of antiquarians, and then they too shall forget. One day even Shakespeare shall largely be forgotten as well. The writing on Marvell’s tomb will become illegible, victim to erosion and entropy, and even the banks of the Thames will burst to swallow the city. When he is forgotten, paradoxically maybe especially when he is, Marvell teaches us something about what is transient and what endures. Time marches forward in only one direction, and though it may erase memory it can never annihilate that which has happened. Regardless of heaven, Marvell lived, and as for all of us, that’s good enough. His verse still dwells in eternity, whether we utter his name or not, for what is rendered unto us who are mortal is the ever-living life within the blessed paradise of a second, which is forever present.

Image Credit: Wikipedia.

On Literature and Consciousness

- | 2

Not far from where the brackish water of the delta irrigates the silty banks of the Nile, it was once imagined that a man named Sinuhe lived. Some forty centuries ago this gentleman was envisioned to have walked in the cool of the evening under a motley pink sky, a belly full of stewed pigeon, dates, and sandy Egyptian beer. In twelve pages of dense, narrative-rich prose, the author recounts Sinuhe’s service to Prince Senwosret while in Libya, the assassination of the Pharaoh Amenemhat, and the diplomat’s subsequent exile to Canaan, wherein he becomes the son-in-law of a powerful chief, subdues several rebellious tribes, and ultimately returns to Egypt where he can once again walk by his beloved Nile before his death. “I, Sinuhe, the son of Senmut and of his wife Kipa, write this,” begins the narrative of the titular official, “I do not write it to the glory of the gods in the land of Kem, for I am weary of gods, nor to the glory of the Pharaohs, for I am weary of their deeds. I write neither from fear nor from any hope of the future, but for myself alone.” This is the greatest opening line in imaginative literature, because it’s the first one ever written. How can the invention of fiction itself be topped?

Whoever pressed stylus to papyrus some eighteen centuries before Christ (The Tale of Sinuhe takes place two hundred years before it was written) has her central character pray that God may “hearken to the prayer of one far away!… may the King have mercy on me… may I be conducted to the city of eternity.” Fitting, since the author of The Tale of Sinuhe birthed her character from pure thought, and in the process invented fiction, invented consciousness, invented thought, even invented being human. Because Sinuhe is eternal — the first completely fictional character to be conveyed in a glorious first-person prose narration. The Tale of Sinuhe is the earliest of an extant type, but there are other examples from ancient Egypt, and indeed Mesopotamia, and then of course ancient Greece and Rome as well. As a means of conveying testimony, or history, or even epic, there is a utility to first-person narration, but The Tale of Sinuhe is something so strange, so uncanny, so odd, that we tend to ignore it by dint of how abundantly common it is today. Whoever wrote this proto-novel was able to conceive of a totally constructed consciousness and to compel her readers to inhabit that invented mind.

Hard to know how common this type of writing was, since so little survives, but among the scraps that we have there is narration that can seem disturbingly contemporary. Written some eight hundred years after Sinuhe’s tale, The Report of Wenamun is a fictional story about a traveling Egyptian merchant who, in sojourns throughout Lebanon and Cyprus, must confront the embarrassment of being a subject of a once-great but now declining empire. With startling literary realism, Wenamun grapples with his own impotence and obsolescence, including descriptions of finding a character “seated in his upper chamber with his back against a window, and the waves of the great sea of Syria… [breaking] behind him.” What a strange thing to do, not to report on the affairs of kings, or even to write your own autobiography, but rather to manifest from pure ether somebody who never actually lived, and with a charged magic make them come to life. In that image of the ocean breaking upon the strand — the sound of the crashing water, the cawing of gulls, the smell of salt in the air — the author has bottled pure experience and handed it to us three millennia later.

The critic Terry Eagleton claims in The Event of Literature that “Fiction is a question of how texts behave, and of how we treat them, not primarily of genre.” What makes The Tale of Sinuhe behave differently is that it places the reader into the skull of the imagined character, that it works as a submersible pushing somebody deep into the murky darkness of another consciousness, replicating the experience of being another mind. That’s what makes the first-person different: it catalogues the moments which constitute the awareness of another mind — the crumbly texture of a madeleine dunked in tea, the warmth of a shared bed in a rickety old inn on a rainy Nantucket evening, the sad reflective poignancy of pausing to watch the ducks in Central Park — and makes them your own, for a time. The first-person narrative is a machine for transforming one soul into another. Such narration preserves a crystalline moment in words like an insect in amber. The ancient Egyptians believed that baboon-faced Thoth invented writing (sometimes he was an ibis). Perhaps it was this bestial visage who wrote these first fictions? Writing in his introduction to the collection Land of Enchanters: Egyptian Short Stories from the Earliest Times to the Present Day, Bernard Lewis says that the Tale of Sinuhe “employs every sentence construction and literary device known in… [her] period together with a rich vocabulary to give variety and color to… [her] narrative.” Fiction is a variety of virtual reality first invented four thousand years ago.

By combining first-person narration with fictionality, the author built the most potent mechanism for empathy which humans have ever encountered: the ability not just to conjure consciousness from out of nothing, but to inhabit another person’s life. The critic James Wood writes in How Fiction Works that first-person narration is “generally a nice hoax: the narrator pretends to speak to us, while in fact the author is writing to us, and we go along with the deception happily enough.” Wood isn’t wrong – first-person narration, in fact all narration, is fundamentally a hoax, or maybe more appropriately an illusion. What I’d venture is that this chimera, the fantasy of fiction, the mirage of narration, doesn’t just imitate consciousness — it is consciousness. Furthermore, different types of narration exemplify different varieties of consciousness, all built upon that hard currency of experience, so that the first-person provides the earliest intimations of what it means to be a mind in time and space. That nameless Egyptian writer gave us the most potent of incantations — that of the eternal I. “By the end of the 19th century [BCE],” writes Steven Moore in The Novel: An Alternative History, “all the elements of the novel were in place: sustained narrative, dialogue, characterization, formal strategies, rhetorical devices.” Moore offers a revisionist genealogy of the novel, pushing its origins back thousands of years before the seventeenth century, but regardless of how we define the form itself, it’s incontrovertible that at the very least The Tale of Sinuhe offers something original. Whether or not we consider the story to be a “novel,” with all of the social and cultural connotations of that word (Moore says it is, I say eh), at the very least Sinuhe is the earliest extant fragment of a fictional first-person narrative, and thus a landmark in the history of consciousness.

What the Big Bang is to cosmology, The Tale of Sinuhe is to literature; what the Cambrian Explosion is to natural history, so narrative is to culture. An entire history of what it means to be a human being could focus entirely on the different persons of narration, and what they say about all the ways in which we can understand reality. Every schoolchild knows that narration breaks down into three persons: the omniscient Father of the third-person, the eerie Spirit of the second-person placing the reader themselves into the narrative, and the Son of the first-person, whereby the reader is incarnated into the character. It’s more complicated than this, of course; there are first-person unreliable narrators and third-person limited omniscient narrators, plural first-person narratives and free indirect discourse, heterodiegetic narrators and focalized homodiegetic characters. But regardless of the specifics of which narrative person a story is written in, the way a narrative is told conveys certain elements of perception. The voice which we choose says something about reality; it becomes its own sort of reality. As Michael McKeon explains in Theory of the Novel: A Historical Approach, the first-person form is associated with “interiority as subjectivity, of character as personality and selfhood, and of plot as the progressive development of the integral individual.”

The history of the novel is a history of consciousness. During the eighteenth century, in the aftershocks of the emergence of the modern novel, first-person faux-memoirs — fictional accounts written with the conceit that they were merely found documents, like Daniel Defoe’s Robinson Crusoe or Jonathan Swift’s Gulliver’s Travels — reflected both the Protestant attraction towards the self-introspection which the novel allowed for and an anxiety over its potentially idolatrous fictionality (the better to pretend that these works actually happened). By the nineteenth century that concern manifested in the enthusiasm for epistolary novels, a slight variation on the “real document” trope for grappling with fictionality, such as Mary Shelley’s Frankenstein and Bram Stoker’s Dracula. Concurrently, the nineteenth century saw the rise of third-person omniscient narration, with all of its intimations of God-like eternity, in novels like Charles Dickens’ Bleak House and Leo Tolstoy’s War and Peace. By contrast, the twentieth century marked the emergence of stream of consciousness in high modernist novels such as James Joyce’s Finnegans Wake and William Faulkner’s The Sound and the Fury; and following the precedent of Gustave Flaubert and Jane Austen, authors like Virginia Woolf in Mrs. Dalloway and D.H. Lawrence in The Rainbow made immaculate use of free indirect discourse, where the intimacy of the first person is mingled into the bird’s-eye perspective of the third. All of these are radical; all of them miracles in their own way. But only the first-person is able to transplant personal identity itself.

“With the engine stalled, we would notice the deep silence reigning in the park around us, in the summer villa before us, in the world everywhere,” writes Orhan Pamuk in his novel The Museum of Innocence. “We would listen enchanted to the whirring of an insect beginning vernal flight before the onset of spring, and we would know what a wondrous thing it was to be alive in a park on a spring day in Istanbul.” Though I have never perambulated alongside the Bosporus on a spring day, Pamuk is still able, by an uncanny theurgy, to make the experience of his character Kemal my own. Certainly lush descriptions can be manifested in any narrative perspective, and a first-person narrator can also provide bare-bones exposition, yet there is something particularly uncanny about the fact that by staring at ink stains on dead trees we can hallucinate that we’re entirely other people in Istanbul. The existentialist Martin Heidegger argued that the central problem of philosophy was that of “Being,” which is to say the question of what it means to be this self-aware creature interacting with an abstract, impersonal universe. By preserving experience as if it were a bit of tissue stained on a microscope slide, first-person narration acts as a means of bracketing time outward, the better to comprehend this mystery of Being. “We ourselves are the entities to be analyzed,” Heidegger writes in Being and Time, and what medium is more uniquely suited for that than first-person narration?

The first person isn’t merely some sort of textual flypaper that captures sensory ephemera which flit before the eyes and past the ears; even more uncannily, it makes the thoughts of another person — an invented person — your own. By rhetorical alchemy it transmutes your being. Think of P.G. Wodehouse’s Bertie Wooster, who for all of his flippancy and aristocratic foppishness arrives on the page as a keen and defined personality in his own right (and write). “I don’t know if you have the same experience, but a thing I have found in life is that from time to time, as you jog along, there occur moments which you are able to recognize immediately with the naked eye as high spots,” Wodehouse writes in The Code of the Woosters. “Something tells you that they are going to remain etched, if etched is the word I want, forever on the memory and will come back to you at intervals down the years, as you are dropping off to sleep, banishing that drowsy feeling and causing you to leap on the pillow like a gaffed salmon.” If genteel and relaxed Anglophilia isn’t your thing, you can instead become the nameless narrator of Ralph Ellison’s Invisible Man, buffeted and assaulted by American racism and denied self-definition in his almost ontological anonymity, who recalls the “sudden arpeggios of laughter lilting across the tender, springtime grass — gay-welling, far-floating, fluent, spontaneous, a bell-like feminine fluting, then suppressed; as though snuffed swiftly and irrevocably beneath the quiet solemnity of the vespered air now vibrant with somber chapel bells.”

Those who harden literature into reading lists, syllabi, and ultimately the canon have personal, social, and cultural reasons for being attracted to the works that they choose to enshrine, yet something as simple as a compelling narrative voice is what ultimately draws readers. An author who is able to conjure from pure nothingness a personality that seems realer than real is like Prospero summoning specters out of the ether. Denis Johnson was able to incarnate spirits from spirits in Jesus’ Son, his classic collection of linked stories about junkie disaffection. Few books convey the fractured consciousness of the alcoholic (and I know) as well as Johnson’s does; his unnamed narrator, known only as “Fuckhead,” filters through the distillery of his mind every evasion, half-truth, duplicity (to self and others), and finally the radical honesty that the drug addict must contend with if they’re to achieve any semblance of sanity. Writing of an Iowa City bar and its bartender, Johnson says that “The Vine had no jukebox, but a real stereo continually playing tunes of alcoholic self-pity and sentimental divorce. ‘Nurse,’ I sobbed. She poured doubles like an angel, right up to the lip of a cocktail glass, no measuring. ‘You have a lovely pitching arm.’” For those normies of a regular constitution, the madness of addictions like Fuckhead’s can seem a simple issue of willpower, yet the distinctive first person of Jesus’ Son conveys, at least a bit, what it means to be locked inside a cage of your own forging. Fuckhead recalls seeing the aforementioned bartender after he’d gotten clean: “I saw her much later, not too many years ago, and when I smiled she seemed to believe I was making advances. But it was only that I remembered. I’ll never forget you. Your husband will beat you with an extension cord and the bus will pull away leaving you standing there in tears, but you were my mother.”

Alcoholism isn’t the only manifestation of consciousness at war with itself, of self-awareness turned inside out so that a proper appraisal of reality becomes impossible. Consider the butler Stevens in Kazuo Ishiguro’s The Remains of the Day, who carries himself with implacable dignity and forbearance, committed to the reputation of his master Lord Darlington even as he overlooks the aristocrat’s Nazi sympathies. Stevens is a character of great sympathy, even though his loyalty is so extreme that it becomes a deficiency of character. No mind is ever built entirely of abstractions. Rather, our personalities are always constituted by a million prosaic experiences; there is much more of the human in making a cup of coffee or scrubbing a toilet than in anything that’s simply abstract. “I have remained here on this bench to await the event that has just taken place – namely, the switching on of the pier light,” remembers Stevens, “for a great many people, the evening is the most enjoyable part of the day. Perhaps, then, there is something to his advice that I should cease looking back so much, that I should… make the best of what remains of my day.” To move from the switching on of a light to a reflection on mortality is a distillation of individual thought as it experiences the world. Ishiguro captures far more of life in The Remains of the Day than does a manifesto, a treatise, a syllogism, or a theological tract.

By its very existence, fictional literature is an argument about divinity, about humanity, about creativity. That an author is able to compel a reader into the perspective of a radically different human being is the greatest claim that there is to the multiplicity of existence, to the sheer, radiating glory of being. Take Marilynne Robinson’s Rev. John Ames, an elderly Congregationalist minister in small-town Iowa in 1956, who unfolds himself in a series of letters he’s writing to his young son, letters which constitute the entirety of the novel Gilead. Rev. Ames is devout, reverential, and most of all good, and his experiences — not just the bare facts of his life but his reactions to them — are exceedingly distant from my own biography. As with so many readers of Gilead, I feel that there is a supreme honor in being able to occupy Ames’ lectern for a time, to read his epistles as if I were his son. In what for me remains one of the most oddly moving passages in recent American literature, Robinson writes that “Once, we baptized a litter of cats.” Ames recalls the unusual piety of his childhood, and how it compelled a group of similarly religious children, fearful of the perdition which awaited pagan kittens, to dress the animals in doll clothing, utter Trinitarian invocations, and mark watery crosses on their furry foreheads. “I still remember how those warm little brows felt under the palm of my hand. Everyone has petted a cat, but to touch one like that, with the pure intention of blessing it, is a very different thing.” The way that Ames recounts the impromptu feline conversion is funny, in the way that the things children do often are, especially the image of the meowing cats dressed like babies, their perfidious mother stealing them back by the napes of their necks, all while the young future minister tried to bring as many of the kittens to Christ as he could. “For years we would wonder what, from a cosmic viewpoint, we had done to them. It still seems to be a real question. There is a reality in blessing… It doesn’t enhance sacredness, but it acknowledges it, and there is a power in that. I have felt it pass through me so to speak.”

And so have I, so to speak, felt that same emotion, despite not being a Congregationalist minister, a resident of rural Iowa, or a man born in the last decades of the Victorian era writing letters to his son. What the descriptions of first-person narration accomplish — both the litany of detail in the material world and the self-reflection on what it means to be an aware being — is the final disproof of solipsism. First-person narration, when done deftly and subtly, proves that yours isn’t the only consciousness, because such prose generates a unique mind and asks you to enter into it; the invitation alone is proof that you aren’t all that exists. Literature itself is a giant conscious being — all of those texts from Sinuhe onward mingling together and interacting across the millennia like synapses firing in a brain. Writing may not be reducible to thought, but it is a type of thought. Fiction is thus an engine for transformation, a mechanism for turning you into another person. Or animal. Or thing. Or an entity omniscient. Writing of the feline baptism, Ames remembers that “The sensation is one of really knowing a creature, I mean really feeling its mysterious life and your own mysterious life at the same time.” This, it seems to me, is an accurate definition of literature as well, so that fiction itself is a type of baptism. Writing and reading, like baptism — and all things which are sacred — are simply the act of really knowing a creature.