The Sound of Silence: Have We Forgotten How to Be Quiet?

The entire Western Front went silent at exactly 11 a.m. on November 11, 1918. An armistice between the Allies and Germany had been reached earlier that morning, with the agreement that hostilities would cease at that precise hour. Last year, audiologists at London’s Imperial War Museum used seismic data that had been gathered in the trenches as part of the war effort to recreate that moment. Now it’s possible to listen to what American troops along the River Moselle heard: repeated blasts of artillery fire for two minutes, a final explosion, and then, at 11, a few gentle bird chirps in the autumn morning as everything falls quiet. An eerie, stunning, beautiful sound. The most sacred thing you will ever hear.

Everything and nothing is answered in silence as deep as that. Following such barbarity and horror, the very earth was enveloped in a deep, blanketing quiet. Novelist Kurt Vonnegut, who had experienced the nightmare of war as a POW in Dresden during the Allied bombing campaign of World War II, said of the Armistice: “I have talked to old men who were on battlefields during that minute. They have told me in one way or another that the sudden silence was the voice of God.” It is a moving and strange wisdom that understands that when the Lord speaks it may not be through thunder-clap and lightning, but rather in the final blast of artillery and a few bird chirps across a scarred field. If dissonance is the question, then silence may be the only answer.

Our present world is not quite as loud as the Western Front (yet), but still we live mired in a never-ending cacophony. A century after God spoke unto the trenches of the Great War, the volume is only getting louder. Not artillery shells, but the din of chatter, the thrum of status updates, the vibration of push notifications. With the omnipresence of the smartphone, we’re continually in multiple conversations that often don’t deserve our attention. The 24-hour news cycle forces us to formulate hot takes and positions on every conceivable event, from issues of political importance to fleeting pop culture phenomena, and bellicose bloviating announces itself 140 characters at a time from the top on down. If Vonnegut is right that God spoke a century ago, then our current age is too loud to make that voice out today.

So loud is our current age, the echo of social media static resounding in our ears, that the French historian Alain Corbin writes that when it comes to silence, “We have almost forgotten what it is.” If the industrial revolution heralded a landscape where silence was eliminated, the peace of town and country punctuated by the clank of machinery, then the digital revolution has shifted that very noise into our heads as well. Perhaps that’s why there has been a recent spike of interest in silence, evident in titles like Erling Kagge’s Silence: In the Age of Noise or Jane Brox’s Silence: A Social History of One of the Least Understood Elements of Our Lives. Like Corbin in his A History of Silence, we’re all looking for a little peace and quiet these days.

Philosophy is traditionally conveyed through debate and argument. A reliance on disputation and dialogue, syllogism and seminar, would seem to proffer a noisy discipline. Yet philosophy can also be defined by a peaceful undercurrent, a metaphysics of silence. Corbin writes that silence is the “precondition for contemplation, for introspection, for meditation, for prayer, for reverie and for creation.” Silence allows us the intentionality required of thought, the presentness needed for rumination. If one model of philosophy takes place in the noisy Agora of ancient Athens, long Socratic dialogues among the sellers of lemons and olives, then another way of practicing philosophy emphasizes the quiet, the construction of a space in which thought can grow unhampered, where answers may be found among the still. Such quiet is its own philosophical method, a type of thought where answers are not generated through dialectic, but from within the soul of the thinker herself.

There is something instrumental in this, where silence is a means to an end. But such an approach still has noise as its ultimate goal, in the form of declared conclusions. When parsing the significance of silence, there are more radical conclusions to be drawn. Stillness can be the space that allows us to find future answers, but there is a wisdom that sees silence itself as an answer. In a Western tradition that very much sees logic as a goal unto itself, where truth is a matter of positive statements about objective reality, silence is its own anarchic technique, its own strange approach to that which we cannot speak. Silence can be both process and result.

From the sixth century before the Common Era, when Lao-Tzu claimed that “The spoken Tao is not the eternal Tao,” to Ludwig Wittgenstein’s 1921 conclusion that “Whereof one cannot speak, thereof one must be silent,” silence has been a philosophical counter-tradition. Wittgenstein argued that if all of the questions of science, logic, and mathematics were somehow to be answered, we’d still have to be silent about that which is most profound. Issues of metaphysics would be passed over in a deep silence, not because they’re insignificant, but rather because they’re the most important. Perhaps it’s pertinent that the Austrian philosopher organized these thoughts while fighting in the loud trenches of World War I. Though separated by millennia, both thinkers believed that when the veil of reality is peeled back, what mutely announces itself is a profound and deep silence.

Such an approach shares much with what theologians call the apophatic, from the Greek “to deny,” an understanding that when it comes to the approximation of ultimate things, it’s sometimes more accurate to say what something isn’t. Rather than circumscribe God with the limitations of mere words, we should pass eternity over in silence. Such a perspective holds that there can be more truth in uttering nothing than in a literal description. As the 9th-century Irish monk John Scotus Eriugena wrote, “We do not know what God is.” Anticipating the monk’s contention, Jesus’s response to Pilate’s question of his origins was such that Christ “gave him no answer.” Scripture itself conveys that God’s silence is often more sacred than his declarations.

More than mere humility, apophatic theology is a profound approach that conveys that God isn’t just silent, but that in some ways God is silence. What would it mean, in our age of endless distraction and deafening noise, to inculcate silence not just for a bit of peace, but as an answer itself? What would it mean to engender within our lives an apophatic sensibility? To still the ear and mind long enough so that we could, as Vonnegut said, “remember when God spoke clearly to mankind?” Not in a booming voice, but rather in the sublime silence that permeates the emptiness between atoms and the space between stars, the very quiet where creation is born.
Image credit: Felipe Elioenay.

Missives from Another World: Literature of Parallel Universes

“He believed in an infinite series of times, in a growing, dizzying net of divergent, convergent and parallel times.”—Jorge Luis Borges, The Garden of Forking Paths (1942)

“And you may tell yourself, ‘This is not my beautiful house’/And you may tell yourself, ‘This is not my beautiful wife.’”—Talking Heads, “Once in a Lifetime” (1980)

1. By the release of their 17th album, Everyday Chemistry, in 1984, The Beatles had been wandering for years in a musical wilderness. Their last cohesive venture had been 1972’s Ultraviolet Catastrophe, but the ’70s were mostly unkind to the Beatles—an output composed of two albums covering musicians like Ben E. King and Elvis Presley, rightly derided by critics as filler. Meanwhile, The Rolling Stones released their brilliant final album before Keith Richards’s death: the disco-inflected 1978 Some Girls, which marked them as the last greats of the British Invasion. By contrast, The Beatles’ Master Class and Master Class II were recorded separately and spliced together by engineers at Apple Studios, with a two-star Rolling Stone review from 1977 arguing that “Lennon and McCartney don’t even appear in the same room with each other. Their new music is a cynical ploy by a band for whom it would have perhaps been better to have divorced sometime around Abbey Road or Let It Be.”

Maybe it was the attempt on John Lennon’s life in 1980, or the newfound optimism following the election of Walter Mondale, but by the time the Fab Five properly reunited to record Everyday Chemistry there was a rediscovered vitality. All of that engineering work from the last two albums actually served them well as they reentered the studio; true to its title with its connotations of combination and separation, catalyst and reaction, Everyday Chemistry would borrow from the digital manipulations of Krautrock bands like Kraftwerk, and the synthesizer-heavy experimentation of Talking Heads. The Beatles may have missed punk, but they weren’t going to miss New Wave.

With a nod to the Beatlemania of two decades before, Lennon and Paul McCartney sampled their own past songs, now overlaid with flourishes of electronic music, the album sounding like a guitar-heavy version of David Byrne and Brian Eno’s avant-garde classic My Life in the Bush of Ghosts. It was a formula that would define this reconstituted version of the band, now committed to digital production, and whose influence can be seen everywhere from Jay-Z’s Lennon-produced The Grey Album to the tracks George Harrison played with James Mercer in Broken Bells.

By asking Eno to produce their new album, The Beatles signaled that they were once again interested in producing pop that didn’t just pander. Always pioneers in sound effects, they had kept the modulation on Revolver, Sgt. Pepper’s Lonely Hearts Club Band, and Ultraviolet Catastrophe a decidedly lo-fi affair, but by the era of the Macintosh, the Beatles had discovered the computer. Speaking to Greil Marcus in 1998, Ringo Starr said, “You know, we were always more than anything a couple of kids, but John was always into gizmos, and something about that box got his attention, still does.” Billy Preston, officially the band’s pianist since Ultraviolet Catastrophe, was also a zealous convert to digital technology. In Marcus’s Won’t Get Fooled Again: Constructing Classic Rock, Preston told the critic that “They were a bar band, right? Long before I met them, but I was a boogie-woogie guy too, so it was always copacetic. You wouldn’t think we’d necessarily dig all that space stuff, but I think the band got new life with that album.” From the nostalgic haziness of the opening track “Four Guys” to the idiosyncratic closing of “Mr. Gator’s Swamp Jamboree,” Everyday Chemistry was a strange, beautiful, and triumphant reemergence of The Beatles.

2. Such a history may seem unusual to you, because undoubtedly you are a citizen of the same dimension that I am. Unless you’re a brave chrononaut who has somehow twisted the strictures of ontological reality, who has ruptured the space-time continuum and easily slides between parallel universes, your Beatles back-catalog must look exactly the same as mine. And yet Everyday Chemistry exists as a ghostly artifact in our reality, a digital spirit uploaded to the Internet in 2009 by some creative weirdo who cobbled together an imagined Beatles album from the fragments of their solo careers. A bit of Wings here, some of the Plastic Ono Band there, samplings from All Things Must Pass and Sentimental Journey, edited together into a masterful version of what could have been.

Most of my narrative above is my own riffing, but claims that the album is from a parallel universe are part of the mythmaking that makes listening to the record so eerie. “Now this is where the story becomes slightly more unbelievable,” the pseudonymous “discoverer” James Richards writes. Everyday Chemistry is a seamlessly edited mashup done in the manner of Girl Talk or Danger Mouse, but its ingenious creator made a parallel universe origin for Everyday Chemistry the central conceit. Richards claims that he swiped a tape of the album after he fell into a vortex in the California desert and was played Everyday Chemistry by an inter-dimensional Beatles fan.

At Medium, John Kerrison jokes that “inter-dimensional travel probably isn’t the exact truth” behind Everyday Chemistry, even if the album is “actually pretty decent.” Kerrison finds that whoever created the album is not going to reveal their identity anytime soon. Unless of course it actually is from a parallel universe. While I mostly think that’s not the truth, I’ll admit that anytime I listen to Everyday Chemistry I get a little charged frisson, a spooky spark up my spine. It’s true that Everyday Chemistry is kind of good, and it’s also true that part of me wants to believe. Listening to the album is like finding a red rock from Mars framed by white snow in your yard—a disquieting interjection from an alien world into the mundanity of our lives.

Part of what strikes me as so evocative about this meme that mixes science fiction, urban legend, and rock ‘n’ roll hagiography is that we’re not just reading about a parallel universe; the evidence of its existence is listenable right now. Tales of parallel universes—with their evocation of “What if our world were different from how it is right now?”—are the natural concern of all fiction. All literature imagines alternate worlds. But the parallel universe story makes such a concern explicit, makes it obvious. Such narratives rely upon the cognitive ability to not accept the current state of things, to conjecture and wonder at the possibility that our lives could be different from how we experience them in the present.

Such stories are surprisingly antique, as in Livy’s History of Rome, written a century before the Common Era, in which he conjectured about “What would have been the results for Rome if she had been engaged in a war with Alexander?” Even earlier than Livy, the Greek father of history Herodotus hypothesized about what the implications would have been had there been a Persian victory at Marathon. Such questions are built into how women and men experience their lives. Everyone asks themselves how things would be different had different choices been made—what if you’d moved to Milwaukee instead of Philly, majored in art history rather than finance, asked Rob out for a date instead of Phil?

Alternate history is that narrative writ large. Such stories have been told for a long time. In the 11th century there was Peter Damian’s De Divina Omnipotentia, which imagined a reality where Romulus and Remus had never been suckled by a she-wolf and the Republic was never founded. In 1490, Joanot Martorell’s romance Tirant lo Blanch, perhaps the greatest work ever written in the Iberian Romance language of Valencian, envisioned a conquering errant knight who recaptures Constantinople from the Ottomans. Medieval Europeans were traumatized as the cross was toppled from the dome of the Hagia Sophia, but in Martorell’s imagination a Brittany-born knight is gracious enough so that “A few days after he was made emperor he had the Moorish sultan and the Grand Turk released from prison.” What followed was a “peace and a truce for one hundred one years,” his former enemies “so content that they said they would come to his aid against the entire world.” Written only 37 years after Mehmed II’s sacking of Orthodoxy’s capital, Tirant lo Blanch presents a Christian poet playing out a desired reality different from the one in which he actually found himself.

In the 19th century, the American writer Nathaniel Hawthorne did something similar, albeit for different ideological aims. His overlooked “P.’s Correspondence,” from his 1846 Mosses from an Old Manse, is arguably the first alternate history story written in English: an epistolary narrative in which the titular character, designated by only his first initial, writes about all the still-living Romantic luminaries he encounters in a parallel version of Victorian London. Lord Byron has become a corpulent, gouty, conservative killjoy; Percy Shelley has rejected radical atheism for a staunch commitment to the Church of England; Napoleon Bonaparte skulks the streets of London, embarrassed and vanquished, kept under guard by two police officers; and John Keats has lived into a wise seniority where he alone seems to hold to the old Romantic faith that so animated and inspired Hawthorne. P. is a character for whom the “past and present are jumbled together in his mind in a manner often productive of curious results,” a description of alternate history in general. Hawthorne’s is a message about the risks of counter-revolution, but also an encomium for the utopian light exemplified by Keats, for whom there remains so “deep and tender a spirit of humanity.”

Alternate history’s tone is often melancholic, if not dystopian: an exercise in the thought that this world might not be great, but think of how much worse it could be. Think of authors like Philip K. Dick in The Man in the High Castle or Robert Harris in Fatherland, both exploring the common trope of imagining a different outcome to the Second World War. Such novels present Adolf Hitler running roughshod over the entire globe, crossing the English Channel and ultimately the Atlantic. Such narratives highlight the ways in which the evils of fascism haven’t been as vanquished as was hoped, but also serve as cautionary parables about what was narrowly averted. In his own indomitable amphetamine-and-psychosis-kind-of-way, Dick expresses something fundamental about the interrogative that defines alternate history, not the “What?” but the “What if?” He asks, “Can anyone alter fate?…our lives, our world, hanging on it.”

Such novels often trade in the horror of an Axis victory or the catastrophe of Pickett’s Charge breaking the Union line at the Confederacy’s high-water mark in that quiet, hilly field in Pennsylvania. Some of the most popular alternate history depicts a dark and dystopian reality in which polished Nazi jack-boots stomp across muddy English puddles and Confederate generals hang their ugly flag from the dome of the Capitol building; where an American Kristallnacht rages across the Midwest, or emancipation never happens. Gavriel Rosenfeld, in his study The World Hitler Never Made: Alternate History and the Memory of Nazism, argues that such stories serve a solemn purpose, that the genre has a “unique ability to provide insights into the dynamics of remembrance.” Rosenfeld argues that alternate history, far from offering impious or prurient fascination with evil, memorializes those regimes’ victims, generating imaginative empathy across the boundaries of history and between the forks of branching universes.

Philip Roth in The Plot Against America and Michael Chabon in The Yiddish Policemen’s Union imagine and explore richly textured versions of the 20th century. With eerie prescience, Roth’s 2004 novel reimagines the genre by focusing on the personal experience of the author himself, interpolating his own childhood biography into a larger narrative about the rise of a nativist, racist, sexist, antisemitic American fascism facilitated through the machinations of a foreign authoritarian government. Chabon’s novel is in a parallel universe a few stops over, but examines the traumas of our past century with a similar eye towards the power of the counterfactual, building an incredibly detailed alternate reality in which Sitka, Alaska, is a massive metropolis composed of Jewish refugees from Europe. Such is the confused potentiality that defines our lives, both collective and otherwise; an apt description of our shared predicament could be appropriated from Chabon’s character Meyer Landsman: “He didn’t want to be what he wasn’t, he didn’t know how to be what he was.”

For Rosenfeld, the form “resists easy classification. It transcends traditional cultural categories, being simultaneously a sub-field of history, a sub-genre of science fiction, and a mode of expression that can easily assume literary, cinematic, dramatic or analytical forms.” More than just that, I’d suggest that these narratives say something fundamental about how we tell stories, where contradiction and the counterfactual vie in our understanding, the fog from parallel universes just visible at the corners of our sight, fingerprints from lives never lived smudged across all of those precious things which we hold onto.

While long the purview of geeky enthusiasts, with their multiverses and retconning, alternate history has lately been embraced by academic historians, for whom such conjecture was traditionally antithetical to the sober plodding of their discipline. In history no experiment can ever be replicated, for it is we who live in said experiment—which is forever ongoing. Temporality and causality remain a tricky metaphysical affair, and it’s hard to say how history would have turned out if particular events had happened differently. Nonetheless, true to its ancient origins in the conjectures of Herodotus and Livy, some scholars engage in “counterfactual history,” a variety of Gedankenexperiment that plays the tape backwards.

Historian Niall Ferguson has advocated for counterfactuals, arguing that they demonstrate that history doesn’t necessarily follow any predetermined course. Writing in his edited collection Virtual History: Alternatives and Counterfactuals, Ferguson claims that the “past—like real life chess, or indeed any other game—is different; it does not have a predetermined end. There is no author, divine or otherwise; only characters, and (unlike in a game) a great deal too many of them.”

Serious consideration of counterfactual history as a means of historiographical analysis arguably goes back to John Squire’s 1931 anthology If It Had Happened Otherwise. That volume included contributions by Hilaire Belloc, who, true to his monarchist sympathies, imagines a very much non-decapitated Louis XVI returning to the Bourbon throne; his friend G.K. Chesterton, enumerating the details of a marriage between Don John of Austria and Mary Queen of Scots; and none other than future prime minister Winston Churchill, writing a doubly recursive alternate history entitled “If Lee Had Not Won the Battle of Gettysburg,” narrated from the perspective of a historian in a parallel universe in which the Confederacy was victorious, who imagines a version of history roughly corresponding to our own.

Churchill concludes the account with his desired reunification of the English-speaking peoples, a massive British, Yankee, and Southern empire stopping the Teutonic menace during the Great War. As with so much Lost Cause fantasy, especially in the realm of alternate history (including Newt Gingrich’s atrocious Gettysburg: A Novel of the Civil War—yes, that Newt Gingrich), Churchill’s was a pernicious revisionism, obstinate fantasizing that posits the Civil War as being about something other than slavery. Churchill’s imaginary Robert E. Lee simply abolishes slavery upon the conclusion of the war, even while the historical general fought in defense of the continuation and expansion of that wicked institution. Yet ever the Victorian Tory, Churchill can’t help but extol a generalized chivalry, with something of his ideal character being implicit in his description of Lee’s march into Washington, D.C., and Abraham Lincoln’s rapid abandonment of the capital. The president had “preserved the poise and dignity of a nation…He was never greater than in the hour of fatal defeat.” In counterfactual history, Churchill was cosplaying dramatic steadfastness in the face of invasion before he’d actually have to do it.

Counterfactuals raise the question of where exactly these parallel universes are supposed to be, these uncannily familiar storylines that seem to inhabit the space at the edge of our vision for no longer than an eye-blink. Like a dream where unfamiliar rooms are discovered in one’s own house, the alternate history has a spooky quality to it, and the mere existence of such conjecture forces us to confront profound metaphysical questions about determinism and free will, agency and the arc of history. Did you really have a choice on whether or not you would move to Philly or Milwaukee? Was art history ever a possibility? Maybe Phil was always going to be your date.

The frustration of the counterfactual must always be that since history is unrepeatable, not only is it impossible to know how things would be altered, but we can’t even tell if they could be. How can one know what the impact of any one event may be, what the implications are of something happening slightly differently at Marathon, or at Lepanto, or at Culloden, or Yorktown? All those butterflies fluttering their wings, and so on. Maybe Voltaire’s Dr. Pangloss in Candide is right, maybe this really is the best of all possible worlds, though five minutes on Twitter should make one despair at such optimistic bromides. Which is in part why alternate history is so evocative—it’s the alternate, stupid. James Richards found that other world easily; apparently there is a wormhole in the California desert that takes you to some parallel universe where scores of Beatles albums are available. But for all of those who don’t have access to the eternal jukebox, where exactly are these parallel realities supposed to be?

Quantum mechanics, the discipline that explains objects at the level of subatomic particles, has long produced surreal conclusions. Werner Heisenberg’s Uncertainty Principle proves that it’s impossible to have complete knowledge of both the location and the momentum of particles; Louis de Broglie’s wave-particle duality explains subatomic motion with the simultaneous mechanics of both particle and wave; and Erwin Schrödinger’s fabled cat, who is simultaneously dead and alive, was a means of demonstrating the paradoxical nature of quantum superposition, whereby an atom can be both decayed and not at the same time. The so-called Copenhagen Interpretation of quantum mechanics is comfortable with such paradoxes, trading in probabilities and the faith that observation is often that which makes something so. At the center of the Copenhagen Interpretation is how we are to interpret what physicists call the “collapse of the wave-function,” the moment at which an observation is made and something is measured as either a wave or a particle, decayed or not. For advocates of the orthodox Copenhagen Interpretation, the wave-function exists in blissful indeterminacy until measured, being both one thing and the other until we collapse it.

For Hugh Everett, a Pentagon-employed physicist writing in 1957, such uncertainty was unacceptable. That a particle could be both decayed and not at the same time was nonsensical, a violation of that fundamental logical axiom of non-contradiction. If Everett thought that the Copenhagen Interpretation was bollocks, he had no misgivings about parallel universes, for the physicist would argue that rather than something being both one thing and its opposite at the same time, it’s actually correct to surmise that the universe has split into two branching forks. In Schrödinger’s fabled thought-experiment, a very much not sub-atomic cat is imprisoned in some sadist’s box, where the release of a poison gas is connected to whether an individual radioactive atomic nucleus has decayed or not. According to the Copenhagen Interpretation, that cat is somehow dead and alive, since the nucleus is under the purview of quantum law and can exist in indeterminacy as both decayed and not until it is observed and the wave-function collapses. Everett had a more parsimonious conclusion—in one universe the cat was purring and licking his paws, and in an unlucky dimension right next door all four furry legs were rigid and straight up in the air. No weirder than the Copenhagen Interpretation, and maybe less so. Writing of Everett’s solution, the physicist David Deutsch in his book The Fabric of Reality claims that “Our best theories are not only truer than common sense, they make more sense than common sense.”

Maybe mathematically that’s the case, but I still want to know where those other universes are. Whether in wardrobe or wormhole, it feels like Narnia should be a locale more accessible than just the equations of quantum theorists. For the myriad people who congregate in the more eccentric corners of the labyrinth that is the Internet, the answer to where those gardens of forking paths can be found is elementary—we’re all from them originally. Those who believe in something called the “Mandela Effect” believe they’re originally from another dimension, and that you probably are as well. Named for the people on Internet message boards who claim to have memories of South African president Nelson Mandela’s funeral in the early ’80s (he died in 2013), whole online communities are dedicated to enumerating subtle differences between our current timeline and wherever they’re originally from. Things like recalling a comedy about a genie starring Sinbad called Shazaam! or the ursine family from The Berenstain Bears spelling their surname “Berenstein” (I think that I’m actually from that dimension).

Everett’s calculations concern minuscule differences; the many-worlds interpretation deals in issues of momentum and location of subatomic particles. That doesn’t mean that there isn’t a universe where the Berenstain bears have a different last name—in a multiverse of infinite possibility all possibilities are by definition actual things—but that universe’s off-ramp is a few more exits down the highway. This doesn’t stop believers in the Mandela Effect from comparing notes on their perambulations among the corners and byways of our infinite multiverse, recalling memories from places and times as close as your own life and as distant as another universe. Looking out my window I can’t see the Prudential Center anymore, and for a second I wonder if it ever really existed, before realizing that it’s only fog.

Have some sympathy for those of us who remember Kit-Kat bars as being spelled with a dash, or Casablanca having the line “Play it again, Sam.” Something is lost in this universe of ours, here where some demiurge has decided to delete that line. Belief in the Mandela Effect illuminates our own alterity, our own discomfort in this universe or any other—a sense of alienness, of offness. The Mandela Effect is when our shoes pinch and our socks are slightly mismatched, when we could swear that we didn’t leave our keys in the freezer. And of course the Mandela Effect is the result of simply misremembering. But a deeper truth is that existence can sometimes feel so off-putting that we might as well be from a parallel universe. Those other dimensions convey the promise of another world, of another reality: just because things are done this way where we live now doesn’t mean that they’re done this way everywhere. Or that they must always be done this way here, either.

What’s moving about Everyday Chemistry is that those expertly mixed songs are missives from a different reality, recordings from a separate, better universe. The album is a tangible reminder that things are different in other places, like the fictional novel at the center of K. Chess’s brilliant new book Famous Men Who Never Lived, which imagines thousands of refugees from a parallel universe finding a home in our own. In that novel, the main character clutches onto a science fiction classic called The Pyronauts, a work of literature non-existent in our reality. The Pyronauts, like Everyday Chemistry, betrays a fascinating truth about parallel universes. We may look for physical, tangible, touchable proof of the existence of such places, but literature is all the proof we need. Art is verification that another world isn’t just possible, but already exists. All literature is from a parallel universe and all fiction is alternate history.

Whether or not the Beatles recorded Everyday Chemistry, the album itself exists; if The Pyronauts was written not in our universe, then one need only transcribe it so as to read it. In the introduction to my collection The Anthology of Babel, I refer to “imagined literature,” an approach towards “probing the metaphysics of this strange thing that we call fiction, this use of invented language which is comprehensible and yet where reality does not literally support the representation.” Every fiction is an epistle from a different reality; even Hugh Everett would tell you that somewhere a real Jay Gatsby pined for Daisy Buchanan, that a few universes over Elizabeth Bennet and Mr. Darcy were actually married, and somewhere Mrs. Dalloway is always buying the flowers herself. The Great Gatsby, Pride and Prejudice, and Mrs. Dalloway are all, in their own way, alternate histories as well.

Alternate history does what the best of literature more generally does—provide a wormhole to a different reality. That fiction engenders a deep empathy for other people is true, and important, but fiction is not simply a vehicle to enter different minds; it is a vehicle to enter different worlds as well. Fiction allows us to be chrononauts, to feel empathy for parallel universes, for different realities. Such a thing as fiction is simply another artifact from another dimension; literature is but a fragment from a universe that is not our own. We are haunted by our other lives, ghosts of misfortune averted, spirits of opportunities rejected, so that fiction is not simply the experience of another, but a deep human connection with those differing versions of ourselves on the paths of our forked parallel lives.

Image credit: Unsplash/Kelly Sikkema.

Island Time: On the Poetics of the Isle

“The isle is full of noises, /Sounds, and sweet airs, that give delight, and hurt not.”—Caliban in William Shakespeare’s The Tempest (1610)

“Is our island a prison or a hermitage?”—Miranda in Aimé Césaire’s Une Tempête (1969)

In 1810 a struggling whaler by the name of Jonathan Lambert, “late of Salem…and citizen thereof,” set out for torrid waters. By December of 1811, Lambert and his crew of three had alighted upon an unpopulated volcanic island in the south Atlantic that Portuguese navigators had christened Ilha de Tristão da Cunha three centuries before. Lambert, in a spirit of boot-strapping individualism, declared himself to be king. A solitary monarchy, this Massachusettsian and his three subjects on that rocky shoal of penguins and guano. Still, Lambert exhibited utopian panache, not just in spite of such remoteness, but because of it. Contrary to the British maritime charts that listed the archipelago as “Tristan da Cunha,” the American renamed it the far more paradisiacal “Islands of Refreshment.”

This whaler’s small kingdom promised refreshment not just from Lambert’s old life, in which, he explained, “embarrassments…have hitherto constantly attended me,” but from the affairs of all people. Lambert’s Islands of Refreshment were, and are, the most distant habitation on the planet, lying 2,166 miles from the Falkland Islands, 1,511 miles from Cape Town, South Africa, and 1,343 miles from St. Helena, where lonely Napoleon Bonaparte would soon expire. And for the whaler’s sake, Lambert’s new home was 10,649 miles from Salem, Massachusetts.

Parliamentary records quote Lambert as claiming that he had “the desire and determination of preparing myself and family a home where I can enjoy life.” Here on the Islands of Refreshment, where he subsisted on the greasy blubber of elephant seals, Lambert prayed that he would be “removed beyond the reach of chicanery and ordinary misfortune.” As it was, all utopias are destined to fail sooner or later, and ordinary misfortune was precisely what would end Lambert’s life, the experienced sailor drowning five months after arriving, such final regicide belonging to the sea.

Four years after Lambert’s drowning, the British navy claimed the Islands of Refreshment for king, England, and St. George, hoping to prevent the use of the archipelago by either the Americans (whose ships trolled those waters during the War of 1812) or any French who wished to launch a rescue mission for their imprisoned emperor those 1,343 miles north. And as Tristan da Cunha was folded into that realm, so it remains, now administered as a British Overseas Territory, joining the slightly more than a dozen colonies from Anguilla to Turks and Caicos that constitute all that remains of the empire upon which it was said the sun would never set. As of 2019, Lambert’s ill-fated utopia remains the most solitary land mass on Earth, some 250 hardy souls in the capital of Edinburgh-of-the-Seven-Seas living as far from every other human as is possible on the surface of the planet, a potent reminder of the meaning of islands. An island, you see, has a certain meaning. An island makes particular demands.

In trying to derive a metaphysics of the island—a poetics of the isle—few square miles are as representative as Tristan da Cunha. A general theory could be derived from Lambert’s example, for failure though he may have been, he was a sort of Prospero and Robinson Crusoe tied into one. Maybe more than either, Lambert’s project resembles that of Henry Neville’s strange 1668 utopian fantasy The Isle of Pines. Neville’s arcadian pamphlet was a fabulist account by a fictional Dutch sailor who comes upon an isle that had been settled by an English castaway named George Pine and four women from the corners of the world, whose progeny populate that isle three generations later.

Lambert drowned before he had the opportunity to be a George Pine for the Islands of Refreshment, but the one survivor of his original quartet, an Italian named Tommaso Corri, has descendants among the populace of Tristan da Cunha today. Neville’s account is significantly more ribald than what we know of Tristan da Cunha’s peopling. The Isle of Pines is a psychosexual minefield ripe for Freudian analysis, where the phallic anagram of Pine’s name makes clear the pornographic elements involved in a narrative of a castaway kept company not by Friday, but by four women. Pine’s account includes such exposition as “Idleness and Fulness of every thing begot in me a desire of enjoing the women…I had perswaded the two Maids to let me lie with them, which I did at first in private, but after, custome taking away shame (there being none but us) we did it more openly, as our Lusts gave us liberty.”

Imaginary islands often present colonial fantasy, an isolated Eden ready for exploitation by an almost always male character, where the morality of home can be shuffled off, while those whose home has been violated are not even given the dignity of names. Such is in keeping with the pastoral origins of the island-narrative, the myth that such locations are places outside of time and space, simultaneously remote and yet connected by the ocean to every other point on the globe. This is the metaphysics of islands: they may be sun-dappled, palm-tree-lined, blue-water isles many fathoms from “civilization,” but by virtue of the ocean current they are connected to every other location that sits on a coast. By dint of that paradoxical property, islands are almost always the geographic feature with which utopias are associated, for paradise is not paradise if it’s easily accessible.

The Isle of Pines is an example of the utopian genre that flourished in early modern England, and includes Francis Bacon’s 1627 science fiction novel New Atlantis and James Harrington’s 1656 The Commonwealth of Oceana. Writers popularized the idea of the New World island; authors like Richard Hakluyt in his 1600 Principal Navigations and Samuel Purchas in his 1614 Purchas, his Pilgrimage collated the fantastic accounts of Walter Raleigh, Francis Drake, Thomas Harriot, and Martin Frobisher, even while those compilers rarely ventured off another island called Britain. Renaissance authors were fixated not on river or lake, isthmus or peninsula, but on islands. Not even the ocean itself held quite as much allure. “The Island” was that era’s geographic feature par excellence.

Islands have of course been the stuff of fantasy since Homer sang tales of brave Ulysses imprisoned on Calypso’s erotic western isle, or Plato gave his account of the sunken continent of Atlantis. Geographer Yi-Fu Tuan in Topophilia: A Study of Environmental Perception, Attitudes, and Values explains that the “island seems to have a tenacious hold,” arguing that unlike continental seashores or tropical forests, islands played a small role in human evolution, and that their “importance lies in the imaginative realm” instead. But it was the accounts of the so-called “Age of Discovery” that endowed the geography of the island with a new significance, a simultaneous rediscovery and invention of the very idea of the island. We’re still indulging in that daydream.

Scholar Roland Greene explains in Unrequited Conquests: Love and Empire in the Colonial Americas that the island took on a new significance during the Renaissance, writing that the geographical feature often signified a “figurative way of delimiting a new reality in the process of being disclosed, something between a fiction and an entire world.” Knowledge of islands obviously existed before 1492, and earlier stories about them even had some of the same fantastic associations, as indicated in the aforementioned examples. But unlike the Polynesians, who were adept navigators, Europeans had to anxiously hug the coasts for centuries. Improvements in Renaissance technology finally allowed sailors to venture into the open ocean, and in many ways the awareness that on the other side of the sea was a different continent meant the discovery of something that people had travelled on for millennia—the Atlantic Ocean. And the discovery of that ocean invested the remote and yet interconnected island with a glowing significance.

It’s not a coincidence that Thomas More’s Utopia was printed in 1516, only 14 years after the navigator Amerigo Vespucci argued that the islands off the coast of Asia which Christopher Columbus had “discovered” were continents in their own right—though in many ways the Americas are islands off the coast of Asia, even if the scale and distance are larger than might have been assumed. More’s account of an imagined perfect society, narrated by a Portuguese sailor, has occasioned five centuries of disagreement on how the tract should be read, but whether in earnest or in jest, Utopia’s American location and its status as an island “two hundred miles broad” that holds “great convenience for mutual commerce” is not incidental. If the sandy, sunny island took on new import, it’s because the isolated hermitage of the isle made the perfections of utopia seem possible.

So powerful was More’s tract that later colonists would pilfer his utopian imagery, conflating real Caribbean isles with the future saint’s fiction. Such was the magic of what Tuan describes as the “fantasy of island Edens” that Ponce de Leon assumed a place as lovely as Florida must be an island, and for generations cartographers depicted California as such, as they “followed the tradition of identifying enchantment with insularity.” Utopias, paradises, or Edens have no place in a landlocked location. Tuan explains that the island “symbolizes a state of prelapsarian innocence and bliss, quarantined by the sea from the ills of the continent,” and that in the early modern era they signaled “make-believe and a place of withdrawal from high-pressured living on the continent.” Watch an advertisement from the Bahamian travel board and see if much has changed in that presentation since the 16th century, or spend a few days on a carefully manicured Caribbean beach in the midst of a cold and grey winter, and see if you don’t agree.

Such were the feelings of the crew of the Sea Venture, bound for Jamestown in 1609, who found themselves blown by a hurricane onto the corals off an isolated isle whose eerie nocturnal bird-calls had long spooked navigators, known to posterity as Bermuda. Historians Marcus Rediker and Peter Linebaugh in The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic write that Bermuda was a “strange shore, a place long considered by sailors to be an enchanted ‘Isle of Devils’ infested with demons and monsters… a ghoulish graveyard.” During that sojourn, many of the Sea Venture’s crew came to a different conclusion, as Bermuda “turned out to be an Edenic land of perpetual spring and abundant food.”

A gentleman named William Strachey, dilettante sonneteer and investor in both the Blackfriars Theater and the Virginia Company, would write that Bermuda’s environment was so ideal that it “caused many of them utterly to forget or desire ever to return…they lived in such plenty, peace, and ease.” Silvester Jourdain, in his 1610 A Discovery of the Bermudas, Otherwise Called the Isle of Devils, claimed that contrary to its reputation, this island was “the richest, healthfullest and pleasantest they ever saw.” There was trouble in this paradise, though, as Strachey reported, for so pleasant was Bermuda that many of the shipwrecked sailors endeavored never to continue on to Virginia, for on the continent “nothing but wretchedness and labor must be expected, with many wants and churlish entreaty, there being neither that fish, flesh, nor fowl which here… at ease and pleasure might be enjoyed.”

The Virginia Company could of course not allow such intransigence, so while those 150 survivors made do on Bermuda’s strand for nine months, the threat of violence ensured that all the sailors would continue on to America after two new ships were constructed. For a pregnant year, the survivors of the Sea Venture sunned themselves on Atlantic white-sand beaches, nourished by the plentiful wild pigs, coconuts, and clean fresh-water streams, while in Jamestown the colonists resorted to cannibalism, having chosen to hand their agriculture over entirely to the cultivation of an addictive, deadly, and profitable narcotic called tobacco. When the Sea Venture’s crew arrived in Jamestown, including Pocahontas’s future husband, John Rolfe, they found a settlement reduced from more than 500 souls to fewer than 60, while the Bermudian vessel lost only two sailors, whom history remembers as Carter and Waters, the pair so enraptured with this Eden that they absconded into its internal wilderness, never to be seen again.

William Shakespeare’s sprite Ariel in The Tempest refers to the isle as the “still-vex’d Bermoothes,” and for a generation scholars have identified Strachey’s letter as source material for that play, the two men having potentially been drinking buddies at the famed Mermaid Tavern. Long has it been a point of theoretical contention as to whether The Tempest could be thought of as Shakespeare’s “American play.” Excluding Strachey’s letter, the plot of this last play of Shakespeare’s is arguably his only original one, and though geographic calculation places Prospero’s isle in the Mediterranean, his concerns are more colonial in an American sense, with Ariel and the ogre Caliban being unjustly exploited. The latter makes this clear when he intones that this “island’s mine, by Sycorax my mother/Which thou tak’st from me.” In a disturbing reflection of actual accounts, the working-class castaways Trinculo and Stephano (is that you, Carter and Waters?) conspire to exhibit Caliban in a human zoo after plying him with liquor.

If America has always been a sort of Faustian bargain, a fantasy of Eden purchased for the inconceivable price of colonialism, slavery, and genocide, then The Tempest offers an alternative history where imperialism is abandoned at the very moment that it rendered Paradise lost. As Faustian a figure as ever, the necromancer Prospero ends those revels with “As you from crimes would pardon’d be/Let your indulgence set me free.” And so, the Europeans return home, leaving the isle to Ariel and Caliban, its rightful inhabitants. Something unspeakably tragic, this dream of a parallel universe penned by Shakespeare, an investor in that very same Virginia Company. Shortly after The Tempest was first staged, Bermuda would be transformed from liberty to a place “of bondage, war, scarcity, and famine,” as Linebaugh and Rediker write. But even if such a terrestrial heaven was always the stuff of myth, in the play itself “something rich and strange” can endure, this place where “bones are coral made,” where there are “pearls that were his eyes,” and where “Sea-nymphs hourly ring his knell.”

This is the island of Andrew Marvell’s 1653 poem “Bermudas,” an archipelago “In th’ocean’s bosom unespied,” until religious schismatics fleeing for liberty come “Unto an isle so long unknown, /And yet far kinder than our own.” In their “small boat,” the pilgrims land “on a grassy stage/Safe from the storm’s and prelates’ rage.” God has provided in these fortunate isles fowls and oranges, pomegranates, figs, melons, and apples. On Bermuda cedars are more plentiful than in Lebanon, and upon the beaches washes the perfumed delicacy of ambergris, the whale bile that Charles II supposedly consumed alongside his eggs. These Englishmen sing a “holy and a cheerful note, /And all the way, to guide their chime, with falling oars they kept the time,” for Bermuda itself is a “temple, where to sound his name.” In Marvell’s imagination the islands are a place where the Fall itself has been reversed, where the expulsion from Eden never happened, here on Bermuda where God “gave us this eternal spring/Which here enamels everything.”

While Strachey wrote of their nine-month vacation of indolence, sensuality, and pleasure, islands have also been feared and marveled at as sites of hardened endurance. If utopian literature sees the island as a hermitage, then its sister genre of the Robinsonade sees the isle as a prison, albeit one that can be overcome. That latter genre draws its name, of course, from Daniel Defoe’s 1719 classic Robinson Crusoe. Greene notes that “shipwreck become a locus for the frustrations of conquest and trade,” and nowhere is that clearer than in Defoe’s book. The titular character remarks that “We never see the true state of our condition till it is illustrated to us by its contraries, nor know how to value what we enjoy, but by the want of it.” In the fiery cauldron that is the desert island, we’re to understand that the hardship and isolation of Crusoe’s predicament will reveal to him (and us) what he actually is, and what he should be. Defoe concludes that the proper state of man should be defined by industry, imperialism, Puritanism, and an anal-retentive sense of organizing resources, time, and our very beliefs.

Robinson Crusoe was written when the relationship between travelogue and fiction was still porous; readers took the account of the sailor shipwrecked on an isle at the mouth of Venezuela’s Orinoco River as factual, but the character was always the invention of Defoe’s mind. Defoe’s novel, if not the first of that form, was certainly an early bestseller; so much so that Robinson Crusoe endures as an archetypal folktale, the narrative of the castaway and his servant Friday known by multitudes who’ve never read the book (and who perhaps still don’t know that it was always a fiction). Living on in adaptation over the centuries, its influence is seen everywhere from the Robert Zemeckis film Cast Away to the television series Lost and Gilligan’s Island. With what we’re to read as comfortable acclimation, Crusoe says that he “learned to look more upon the bright side of my condition, and less upon the dark side.” Whether Crusoe is aware of that dark side or not, he very much promulgates a version of it, the castaway turning himself into a one-man colonial project, ethnically cleansing the island of its natives, basically enslaving “my man Friday” after converting him to Christianity, and exploiting the isle’s natural resources.

With more than a bit of Hibernian skepticism towards the whole endeavor, James Joyce claimed that Crusoe was the “true prototype of the British colonist” (and the Irishman knew something about British colonialism). Joyce sees the “whole Anglo-Saxon spirit in Crusoe: the manly independence, the unconscious cruelty, the persistence, the slow yet efficient intelligence, the sexual apathy, the calculating taciturnity.” There is something appropriate in the Robinsonade using the atomistic isolation of the island as a metaphor for the rugged individual, the lonely consumer, the lonelier capitalist. Crusoe spent nearly three decades on his island, from 1659 to 1686, but unbeknownst to him, partway through his seclusion the Second Anglo-Dutch War settled a few small issues of geography not very far from Crusoe’s natural anchorage. When that war ended, the English would trade their colony in Suriname, watered by the tributaries of the Orinoco, in exchange for an island many thousands of miles to the north called Manhattan. That colder bit of rock would be much more associated with capitalism and rugged individualism than even Crusoe’s.

Innumerable are the permutations of utopia’s hermitage and the prison of the Robinsonade. Consider the kingdoms of Jonathan Swift’s 1726 Gulliver’s Travels; the pseudonymous Unca Eliza Winkfield in 1767’s The Female American, with its tiki idols talking through trickery; Johann David Wyss’s wholesome 1812 Swiss Family Robinson; Jules Verne’s fantastic 1874 The Mysterious Island; H.G. Wells’s chilling 1896 The Island of Dr. Moreau; William Golding’s thin 1954 classic Lord of the Flies (which has terrified generations of middle school students); Scott O’Dell’s 1960 New Age Island of the Blue Dolphins; Yann Martel’s obscenely popular 2001 Life of Pi; and even Andy Weir’s 2012 technocratic science fiction novel The Martian. All are Robinsonades or utopias of a sort, even if sometimes the island must be a planet. Once you start noticing islands, they appear everywhere. An archipelago of imagined land masses stretching across the library stacks.

Which is to remember that islands may appear in the atlas of our canon, but actual islands existed long before we put words to describe them. There is the platonic island of the mind, but also the physical island of reality, and confusion between the two has ramifications for those breathing people who call the latter home. Both utopias and Robinsonades reflected and created colonial attitudes, and while it’s easy to get lost in the reverie of the imagined island, actual islanders have suffered upon our awakening. Island remoteness means that one person’s paradisiacal resort can be another person’s temperate prison. So much of the writing about islands is projection from those on the continent, but no true canon of the island can ignore those who actually live there. We can study “utopian literature,” but there is no literature from a real Utopia; rather, we must also read poetry, prose, and drama from Jamaica and Haiti, Cuba and Hawaii, Puerto Rico and the Dominican Republic.

We must orient ourselves not just to the imagined reveries of a Robinson Crusoe, but to the actual words of Frantz Fanon, C.L.R. James, V.S. Naipaul, Jamaica Kincaid, Derek Walcott, Marlon James, Junot Díaz, Claude McKay, and Edwidge Danticat. Imagined islands can appear in the atlases of our minds, but real islands have populations of real people. An author who refused to let his audience forget that fact was the Martinican dramatist Aimé Césaire, who, in his 1969 play Une Tempête, rewrote the cheery conclusion of Shakespeare’s play so as to fully confront the legacy of colonialism. If Shakespeare gave us a revision before the facts of imperialism, then Césaire forces us to understand that Prospero has never been fully willing to go home, and that for Caliban and Ariel happy endings are as fantastic as utopia. Yet Césaire’s characters soldier on, a revolutionary Caliban reminding us of what justice looks like, for “I’m going to have the last word.”

What, then, is the use of an island? We may read of Utopia, but we must not forget the violent histories of blood and sugar, plantations and coups, slavery and revolution. Yet we must not abandon “The Island” as an idea in our sweltering days of the Anthropocene, with Micronesia and the Maldives, the Seychelles and the Solomon Islands threatened by the warm lap of the rising ocean. In Devotions upon Emergent Occasions, the poet John Donne wrote that “No man is an island,” but viewed another way, everything is an island, not least of which this planet that we float upon. I’ve claimed that islands are defined by their simultaneous remoteness and interconnectedness, but even more than that, an island is a microcosm of the world, for the world is the biggest island on which life exists, a small land mass encircled by the infinite oceanic blackness of space.

As the astronomer Carl Sagan famously said in a 1994 address at Cornell University, “Our planet is a lonely speck… it underscores our responsibility to deal more kindly and compassionately with one another and to preserve and cherish that pale blue dot.” On Christmas Eve of 1968, the astronaut William Anders took the famous Earthrise photograph, our planet suspended whole above the dead surface of the dusty grey moon. In that photo we see all islands, all continents, all oceans subsumed into this isolated, solitary thing. Islands are castles of the imagination, and as thought got us into this ecological mess, so must it be thought that redeems us. Hermitage or prison? Can such an island ever be a utopia, or are we marooned upon its quickly flooding beaches?

Image credit: Unsplash/Benjamin Behre.

Ten Ways to Look at the Color Black

1. One of the most poignant of all passages in English literature occurs in The Life and Opinions of Tristram Shandy, Gentleman, serially published between the years of 1759 and 1767, when its author Laurence Sterne wrote: “████████████████████████████████████ ██████████████████████████████████████████████████████████████████████████████████████” Such is the melancholic shade of the 73rd page of Tristram Shandy, the entirety of the paper taken up with black ink, when the very book itself mourns the death of an innocent but witty parson with the Shakespearean name Yorick. Said black page appears after Yorick went to his doors and “closed them, – and never opened them more,” for it was that “he died… as was generally thought, quite broken hearted.”

Tristram Shandy is more than just an account of its titular character, for as Steven Moore explains in The Novel: An Alternative History 1600-1800, the English writer engaged subjects including “pedantry, pedagogy, language, sex, writing, obsessions… obstetrics, warfare and fortifications, time and memory, birth and death, religion, philosophy, the law, politics, solipsism, habits, chance… sash-windows, chambermaids, maypoles, buttonholes,” ultimately concluding that it would be “simpler to list what it isn’t about.” Sterne’s novel is the sort that spends a substantial portion of its endlessly digressive plot with the narrator describing his own conception and birth. As Tristram says of his story, “Digressions, incontestably, are the sunshine; – & they are the life, the soul of reading; – take them out of this book for instance, – you might as well take the book along with them.”

Eighteenth-century critics didn’t always go in for this sort of thing. Dr. Johnson, with poor prescience, said “Nothing odd will do long. Tristram Shandy did not last,” while Voltaire gave it a rather more generous appraisal, calling it “a very unaccountable book; an original.” Common readers were a bit more adventuresome; Moore records that the “sheer novelty of the first two volumes made Tristram Shandy a hit when they were reprinted in London in the early 1760s.” Sterne arguably produced the first “post-modern” novel, long before Thomas Pynchon’s Gravity’s Rainbow or David Foster Wallace’s Infinite Jest. Central to Tristram Shandy are its typographical eccentricities, which Michael Schmidt in The Novel: A Biography describes: “mock-marbling of the paper, the pointing hands, the expressive asterisks, squiggles, dingbats…the varying lengths of dashes.” None of those are as famous as poor Yorick’s pitch-black page, however.

It’s easy to see Sterne’s black page, its rectangle of darkness, as an oddity, an affectation, an eccentricity, a gimmick. That would be woefully inconsiderate to the English language’s greatest passage about the blankness of grief. Sober critics have a tendency to mistake playfulness for a lack of seriousness, but a reading of Tristram Shandy shows that for all of its strangeness, its scatological prose and its metafictional tricks, Sterne’s goal was always to chart the “mechanism and menstruations in the brain,” as he explained, to describe “what passes in a man’s mind.”

Which is why Tristram Shandy’s infamous black page represents grief more truthfully than the millions of pages that use ink in a more conventional way. Sterne’s prose, or rather the gaping dark absence where prose normally would be, is the closest that he can get to genuinely conveying what loss’s void feels like. What’s clear is that no “reading” or “interpretation” of Yorick’s extinction can actually be proffered; no analysis of any human’s death can be translated into something rationally approachable. Sterne reminds us that grief is not amenable to literary criticism. Anyone who has ever lost someone they loved, seen that person die, can understand that mere words are incapable of being commensurate with the enormity of that absence. Concerning such emotions beyond emotions, when it comes to “meaning,” the most full and accurate portrayal can only ever be a black hole.

2. Black is the most parsimonious of all colors. Color is a question of what it is we’re seeing when contrasted with that which we can’t, and black is the null zero of the latter. Those Manichean symbolic associations that we have with black and white are culturally relative—they are contingent on the arbitrary meanings that a people projects onto colors. Yet true to the ballet of binary oppositions, they are intractably related, for one could never read black ink on black paper, or its converse. If with feigned synesthesia we could imagine what each color would sound like, I’d suspect that they’d either be all piercing intensity and high pitches, or perhaps a low, barely-heard thrum—but I’m unsure which would be which.

Their extremity is what haunts: allowing either only absorption or only reflection, the two colors reject the russet cool of October and the blue chill of December, or the May warmth of yellow and the July heat of red. Black and white are both voids, both absences, both spouses in an absolutism. They are singularities. Hardly anything is ever truly black, even the night sky awash in the electromagnetic radiation of all those distant suns. Black and white are abstractions, they are imagined mathematical potentials, for even the darkest of shades must by necessity reflect something back. Save for one thing—the black hole.

As early as 1796 the Frenchman Pierre-Simon Laplace conjectured the existence of objects with a gravitational field so strong that not even light could escape. Laplace, when asked of God, famously told Napoleon that he “had no need for that hypothesis,” but he knew of the black hole’s rapacious hunger. It wouldn’t be until 1916 that another scientist, the German Karl Schwarzschild, would use Albert Einstein’s general theory of relativity to surmise the existence of the modern black hole. Physicist Brian Greene explains in The Elegant Universe that Schwarzschild’s calculations implied objects whose “resulting space-time warp is so radical that anything, including light, that gets too close… will be unable to escape its gravitational grip.”
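A back-of-the-envelope sketch, in modern notation that neither Laplace nor his era would have used, suggests why the two ideas rhyme: a Newtonian body of mass $M$ and radius $R$ has escape velocity $v_{\text{esc}} = \sqrt{2GM/R}$, and demanding that even light, moving at speed $c$, be unable to escape gives

$$R \leq \frac{2GM}{c^{2}},$$

a radius that happens to coincide with the Schwarzschild radius of general relativity. For an object of the sun’s mass, that critical radius is roughly three kilometers. This is an illustrative coincidence, not Schwarzschild’s actual relativistic derivation.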

Black holes were first invented as a bit of mathematical book-keeping, a theoretical concept to keep God’s ledger in order. However, as Charles Seife writes in Alpha and Omega: The Search for the Beginning and End of the Universe, though a “black hole is practically invisible, astronomers can infer its presence from the artifacts it has on spacetime itself.” Formed from the tremendous power of a supernova, a black hole is a lacuna in space and time, the inky corpse of what was once a star, and an impenetrable passage from which no traveler may return.

A black hole is the simplest object in the universe. Even a hydrogen atom is composed of a proton and an electron, but a black hole is simply a singularity and an event horizon. The former is the infinitely dense core of a dead star, the ineffable heart of the darkest thing in existence, and the latter marks the point of no return for any wayward pilgrim. It’s at the singularity itself where the very presuppositions of physics break down, where our mathematics tells us that reality has no strictures. Though a black hole may be explained by physics, it’s also paradoxically a negation of physics. It’s obvious why the black hole would become such a potent metaphor, for physics has surmised the existence of locations over which logic has no dominion. A cosmological terra incognita, if you will, where there be monsters.

God may not play dice with the universe, but as it turns out She is ironic. Stephen Hawking figured that the potent stew of virtual particles predicted by quantum mechanics, general relativity’s great rival in explaining things, meant that at the event horizon of a black hole there would be a slight escape of radiation, as implied by Werner Heisenberg’s infamous uncertainty principle. And so, from Hawking, we learn that though black may be black, nothing is ever totally just that, not even a black hole. Save maybe for death.
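For the curious, the faintness of that glow can be made concrete with the standard formula for the temperature of a non-rotating black hole (a textbook result, not anything unique to this essay):

$$T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}},$$

where $\hbar$ is the reduced Planck constant and $k_{B}$ is Boltzmann’s constant. Temperature runs inversely with mass, so a black hole of the sun’s mass glows at roughly $6 \times 10^{-8}$ kelvin, colder than the cosmic microwave background around it, which is why even this exception to perfect blackness remains, for now, invisible to us.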

3. “Black hole” is the rare physics term that is evocative enough to attract public attention, especially compared to the previous phrase for the concept, “gravitationally collapsed object.” Physicist Robert H. Dicke coined the term in the early ’60s, appropriating it from the infamous dungeon in colonial India that held British prisoners and was known as the “Black Hole of Calcutta.” In Dicke’s mind, that hot, fetid, stinking, torturous hell-hole from which few men could emerge was an apt metaphor for the cosmological singularity that acts as a physical manifestation of Dante’s warning in Inferno to “Abandon hope all ye who enter here.”

Dante was a poet, and the term “black hole” is a metaphor, but it’s important to remember that pain and loss go beyond language; they are not abstractions, but very real. That particular Calcutta hole was in actuality an 18-foot by 14-foot cell in the ruins of Ft. William that held 69 Indian and British soldiers upon the fall of that garrison in 1756, when it was taken by the Nawab of Bengal. According to a survivor of the imprisonment, John Zephaniah Holwell, the soldiers “raved, fought, prayed, blasphemed, and many then fell exhausted on the floor, where suffocation put an end to their torments.” On the first night 46 of the men died.

What that enclosure in Calcutta signified was its own singularity, where meaning itself had no meaning. In such a context the absence of color becomes indicative of erasure and negation, such darkness signaling nothing. As Lear echoes Parmenides, “Nothing can come of nothing: speak again.” There have been many black holes, on all continents, in all epochs. During the 18th century the slave ships of the Middle Passage were their own hell, where little light was allowed to escape.

In Marcus Rediker’s The Slave Ship: A Human History, the scholar speaks of the “horror-filled lower deck,” a hell of “hot, crowded, miserable circumstances.” A rare contemporary account of the Middle Passage is found in the enslaved Nigerian Olaudah Equiano’s 1789 The Interesting Narrative of Olaudah Equiano, Or Gustavus Vassa, The African. Penned the year that French revolutionaries stormed the Bastille, Equiano’s account is one of the rare voices of the slave ship to have been recorded and survived, the account of one who had been to a hell he did not deserve and who yet returned to tell the tale of that darkness. Equiano described being “put down under the decks” where he “received such a salutation in my nostrils as I had never experienced in my life: so that, with the loathsomeness of the stench, and crying together, I became so sick and low that I was not able to eat…I now wished for the last friend, death.”

There’s a risk in using any language, any metaphor, to describe the singularities of suffering endured by humans in such places, a tendency to turn the lives of actual people into fodder for theorizing and abstraction. Philosopher Elaine Scarry in The Body in Pain: The Making and Unmaking of the World argues that much is at “stake in the attempt to invent linguistic structures that will reach and accommodate this area of experience normally so inaccessible to language… a project laden with practical and ethical consequence.” Any attempt to constrain such experience in language, especially if it’s not the author’s experience, runs a risk of limiting those stories. “Black hole” is an effective metaphor to an extent, in that implicit within it is the idea of logic and language breaking down, and yet it’s all the more important to realize that it is ultimately still a metaphor as well, what the Soviet dissident Aleksandr Solzhenitsyn in The Gulag Archipelago described as “the dark infinity.”

David King in The Commissar Vanishes: The Falsification of Photographs and Art in Stalin’s Russia provides a chilling warning about what happens when humans are reduced to such metaphor, when they are erased. King writes that the “physical eradication of Stalin’s political opponents at the hands of the secret police was swiftly followed by their obliteration from all forms of pictorial existence.” What’s most disturbing are the primitively doctored photographs, where being able to see the alteration is the very point. These are illusions that don’t exist to trick, but to warn; their purpose is not to make you forget, but rather the opposite, to remind you of those whom you are never to speak of again. Examine the damnatio memoriae of Akmal Ikramov, first secretary of the Communist Party of Uzbekistan, who was condemned by Stalin and shot. In the archives his portrait was slathered in black paint. The task of memory is to never forget that underneath that mask there was a real face, that Ikramov’s eyes looked out as yours do now.

4. Even if the favored color of the Bolsheviks was red, black has also had its defenders in partisan fashion across the political spectrum, from the Anarchist flag of the left to the black-shirts of Benito Mussolini’s fascist right and the Hugo Boss-designed uniforms of the Nazi SS. Drawing on those halcyon days of the Paris Commune in 1871, the anarchist Louise Michel first flew the black flag at a protest. Her implications were clear—if a white flag meant surrender, then a black flag meant its opposite. For all who wear the color black, certain connotations, sometimes divergent, can be called upon: authority, judiciousness, piety, purity, and power. Also, black makes you look thinner.

Recently departed fashion designer, creative director for the House of Chanel, and noted Teutonic vampire Karl Lagerfeld once told a Harper’s Bazaar reporter that “Black, like white, is the best color,” and I see no reason to dispute that. Famous for his slicked-back powdered white pony-tail, his completely black suits, starched white detachable collars, black sunglasses, and leather riding gloves, Lagerfeld was part of a long tradition at that fabled French design firm. Coco Chanel, as quoted in The Allure of Chanel by Paul Morand and Euan Cameron, explains that “All those gaudy, resuscitated colors shocked me; those reds, those greens, those electric blues.” Chanel explains that she instead “imposed black; it’s still going strong today.”

Black may be the favored monochromatic palette for a certain school of haute couture, all black tie affairs and little black cocktail dresses, but the look is too good to be left to the elite. Black is the color of bohemians, spartan simplicity as a rebellion against square society. Beats were associated with it, they of stereotypical turtlenecks and thick-framed glasses. It’s always been a color for the avant-garde, signifying a certain austere rejection of the superficial cheerfulness of everyday life. Allen Ginsberg, in his epic poem Howl with its memorable black cover from City Lights Books, may have dragged himself through the streets at dawn burning for that “ancient heavenly connection to the starry dynamo,” but his friend William S. Burroughs would survey the fashion choices of his black-clad brethren and declare that the Beats were the “movement which launched a million Gaps.”

Appropriated or not, black has always been the color of the outlaw, a venerable genealogy that includes everything from Marlon Brando’s leather jacket in The Wild One to Keanu Reeves’s duster in The Matrix. Fashionable villains too, from Dracula to Darth Vader. That black is the color of rock music, on its wide highway to hell, is a given. There is no imagining goth music without black’s macabre associations, no paying attention to a Marilyn Manson wearing khaki, or the Cure embracing teal. No, black is the color of my true love’s band, for there’s no Alice Cooper, Ozzy Osbourne, or the members of Bauhaus in anything but a monochromatic darkness. When Elvis Presley launched his ’68 comeback he opted for a skin-tight black leather jumpsuit.

Nobody surpasses Johnny Cash though. The country musician is inextricably bound to the color, wearing it as a non-negotiable uniform that expressed radical politics. He sings “I wear the black for the poor and the beaten down, /Livin’ in the hopeless, hungry side of town.” Confessing that he’d “love to wear a rainbow every day,” he swears allegiance to his millennial commitments, promising that he’ll “carry off a little darkness on my back, /’Till things are brighter, I’m the Man in Black.” Elaborating later in Cash: The Autobiography, cowritten with Patrick Carr, he says “I don’t see much reason to change my position today…There’s still plenty of darkness to carry off.”

Cash’s sartorial choices were informed by a Baptist upbringing; his clothes mourned a fallen world, the wardrobe of a preacher. Something similar motivates the clothing of a very different prophetic figure, the pragmatist philosopher Cornel West, who famously only wears a black three-piece suit with matching scarf. In an interview with The New York Times, West calls the suit his “cemetery clothes,” with a preacher’s knowledge that one should never ask for whom the bell tolls, but also with the understanding that in America, the horrifying reality is that a black man may always need to be prepared for his own funeral when up against an unjust state. As he explained, “I am coffin-ready.” West uses his black suit, “my armor” as he calls it, as a fortification.

Black is a liturgical, sacred, divine color. It’s not a mistake that Cash and West draw from the somber hue of the minister’s attire. Black has often been associated with orders and clerics: the Benedictines with their black robes and the Roman-collared Jesuits; Puritans and austere Quakers, all unified in little but clothing. Sects as divergent as Hasidic Jews and the Amish are known for their black hats. In realms of faith, black may as well be its own temple.

5. Deep in the Finsterwalde, the “Dark Forest” of central Switzerland not far from Zurich, there is a hermitage whose origins go back to the ninth century. Maintained by Benedictine monks, the monastery was founded by St. Meinrad. The saint lived his life committed to solitude, to dwelling in the space between words that can stretch to an infinity, a black space that still radiates its own light. In his vocation as a hermit, where he would found the monastery known (and still known) as Einsiedeln Abbey, he had a single companion gifted to him by the Abbess Hildegard of Zurich—a carved, wooden statue of the Virgin Mary holding the infant Christ, who was himself clutching a small bird as if it were his play companion.

For more than a millennium, that figure, known as the “Lady of Einsiedeln,” has been visited by millions of pilgrims, as the humble anchorage has grown into a complex of ornate, gilded baroque buildings. These seekers are drawn to her gentle countenance, an eerie verisimilitude projecting some kind of interiority within her walnut head. She has survived both the degradations of entropy and Reformation, and is still a conduit for those who travel to witness that material evidence of the silent world beyond. Our Lady of Einsiedeln is only a few feet tall; her clothing is variable, sometimes the celestial, cosmic blue of the Virgin, other times resplendent gold, but the crown of heaven is always upon her brow. One aspect of her remains unchanging, however, and that’s that both she and Christ are painted black.

In 1799, during a restoration of the monastery, it was argued, in the words of one of the workers, that the Virgin’s “color is not attributable to a painter.” Deciding that a dose of revisionism was needed alongside restoration, the restorer Johann Adam Fuetscher concluded that Mary’s black skin was the result of the “smoke of the lights of the hanging lamps which for so many centuries always burned in the Holy Chapel of Einsiedeln.”

Fuetscher decided to repaint the statue, but when visitors saw the new Virgin they were outraged and demanded she be returned to her original color, which has remained her hue for more than 200 years. Our Lady of Einsiedeln was not alone; depictions of Mary with dark skin can be found across the width and breadth of the continent, from the famed Black Madonna of Częstochowa in Poland to Our Lady of Dublin in the Whitefriar Street Carmelite Church, from the Sicilian town of Tindari to the frigid environs of Lund Cathedral in Sweden. Depending on how one identifies the statues, there are arguably 500 medieval examples of the Virgin Mary depicted with dark skin.

Recently art historians have admitted that the hundreds of Black Madonnas are probably intentionally so, but there is still debate as to why she is so often that color. One possibility is that the statues are an attempt at realism, that European artists had no compunctions about rendering the Virgin and Christ with an accurate skin-tone for Jews living in the Levant. Perhaps basing such renderings upon the accounts of pilgrims and crusaders who’d returned from the Holy Land, these craftsmen depicted the Mother of God with a face that wasn’t necessarily a mirror of their own.

Scholar Lucia Chiavola Birnbaum has her own interpretation of these carvings in her study Black Madonnas: Feminism, Religion, and Politics in Italy. For Birnbaum, the statues may represent a multicultural awareness among those who made them, but they also have a deep archetypal significance. She writes that “Black is the color of the earth and of the ancient color of regeneration, a matter of perception, imagination, and beliefs often not conscious, a phenomenon suggested in people’s continuing to call a madonna black even after the image had been whitened by the church.”

China Galland in Longing for Darkness: Tara and the Black Madonna, her account of global pilgrimage from California to Nepal, asks if there was in the “blackness of the Virgin a thread of connection to Tara, Kali, or Durga, or was it mere coincidence?” These are goddesses that, as Galland writes, have a blackness that is “almost luminous,” beings of a “beneficent and redeeming dark.” Whatever the motivations of those who made the statues, it’s clear that they intended to depict them exactly as they appear now, candle smoke and incense besides. At il Santuario della Madonna del Tindari in Sicily there is a celebrated Virgin Mary with dark skin. And just to dispel any hypothesis that her color is an accident, restorers in 1990 found inscribed upon her base a quotation from Song of Songs 1:5, when the Queen of Sheba declares to Solomon: “I am black but beautiful.”

6. Very different deities of darkness would come to adorn the walls of the suburban Madrid house that the Spanish painter Francisco Goya moved to 200 years ago, in the dusk of the Napoleonic conflicts (when Laplace had dismissed God). Already an old man, and deaf for decades, Goya would affix murals in thick, black oil to the plaster walls of his villa, a collection intended for an audience of one. As his biographer Robert Hughes would note in Goya, the so-called black paintings “revealed an aspect of Goya even more extreme, bizarre, and imposing” than the violent depictions of the Peninsular War for which he was famous. The black paintings were made for Goya’s eyes only. He was a man who’d witnessed the barbarity of war and inquisition, and now in his convalescence he chose to make representations of witches’ sabbaths and a goat-headed Baphomet overseeing a Black Mass, of Judith in the seconds after she decapitated Holofernes, and of twisted, toothless, grinning old men. And, though it now hangs in the Museo del Prado, painted originally on the back wall of the first story of the Quinta del Sordo, next to one window and perpendicular to another, was his terrifying depiction of a fearsome Saturn devouring his own young.

In the hands of Goya, the myth of the Titan who cannibalized his progeny is rendered in stark, literal, horrifying reality. For Goya there is no forgetting what that story implies: his Chronos appears as a shaggy, wild-eyed, orangish monstrosity; matted, bestial white hair falls uncombed from his head and past his scrawny shoulders. Saturn is angular, jutting bones and knobby kneecaps, as if hunger has forced him to this unthinkable act. His eyes are wide, and though wild, they’re somehow scared, dwelling in the darkness of fear.

I wonder if that’s part of Goya’s intent, using this pagan theme to express something of Catholic guilt and death-obsession, that intuitive awareness of original sin. It makes sense to me that Saturn is the scared one; scared of what he’s capable of, scared of what he’s done. Clutching in both hands the dismembered body of a son, whose features and size are recognizably human, Chronos grips his child like a hoagie, his son’s right arm already devoured and his head in Saturn’s stomach, with the Titan biting directly into the final remaining hand. Appropriately enough for what is, after all, an act of deicide, the sacrificed god hangs in a cruciform position. A fringe of blood spills out from inside. His corpse has a pink flush to it, like a medium rare hamburger. That’s the horror of Chronos—of time—emerging from this undifferentiated darkness. When considering our final hour, time has a way of rendering the abstraction of a body into the literalism of meat. Saturn Devouring His Son hung in Goya’s dining room.

His later paintings may be the most striking evocation of blackness, but the shade haunted Goya his entire life. His print The Sleep of Reason Produces Monsters, made two decades before those murals in the Quinta del Sordo, is a cross-hatched study of the somber tones, of black and grey. Goya draws himself, head down on a desk containing the artist’s implements, and above him fly the specters of his nocturnal imagination, bats and owls flapping their wings in the ceaseless drone that is the soundtrack of our subconscious irrationalities, of the blackness that defines that minor form of extinction we call sleep.

7. The blackness of sleep both promises and threatens erasure. In that strange state of non-being there is an intimation of what it could mean to be dead. It’s telling that darkness is the most applicable metaphor when describing both death and sleep, for the bed and the coffin alike. Sigmund Freud famously said of dreams in The Interpretation of Dreams that they were the “royal road to the unconscious.” Even the laws of time and space seem voided within that nocturnal kingdom, where friends long dead come to speak with us, where hidden rooms are discovered in the dark confines of homes we’ve known our entire lives. Dreams are a singularity of sorts, but there is that more restful slumber that’s nothing but a calm blackness.

This reciprocal comparison between sleep and death is such a cliché precisely because it’s so obvious, from the configuration of our actual physical repose to our imagining of what the experiences might share with one another: Edmund Spenser in The Faerie Queene writing “For next to Death is Sleepe to be compared;” his contemporary the poet Thomas Sackville referring to sleep as the “Cousin of Death;” the immaculate Thomas Browne writing that sleep is the “Brother of Death;” and more than a century later Percy Shelley waxing “How wonderful is Death, Death and his brother Sleep!”

Without focusing too much on how the two have moved closer to one another on the family tree, what seems to unify tenor and vehicle in the metaphorical comparisons between sleep and death is this quality of blackness, the non-existence of color standing in for non-existence itself. Both imply a certain radical freedom, for in dreams everyone has an independence, at least for a few hours. Consider that in our own society, where the totalizing system is a consumerism that controls our every waking moment, the only place where you won’t see anything designed by humans (other than yourself) is in dreams, at least until Amazon finds a way to beam advertisements directly into our skulls.

Then there is Shakespeare, who speaks of sleep as the “ape of death,” who in Hamlet’s monologue writes of the “sleep of death,” and in the Scottish play calls sleep “death’s counterfeit.” If centuries have a general disposition, then my beloved 17th century was a golden age of morbidity, when the ars moriendi of the “good death” was celebrated by essayists like Browne and Robert Burton in the magisterial Anatomy of Melancholy. In my own reading and writing there are few essayists whom I love more, or try to emulate more, than the good Dr. Browne. That under-read writer and physician, he who coined the terms “literary” and “medical,” among much else besides, wrote one of the most moving and wondrous tracts about faith and skepticism in his 1642 Religio Medici. Browne writes “Sleep is a death, /O make me try, /By sleeping, what it is to die:/And as gently lay my head/On my grave, as now my bed.” Maybe it resonates with me because when I was (mostly) younger, I’d sometimes lie on my back and pretend that I was in my coffin. I still can only sleep in pitch blackness.

8. Far easier to imagine that upon death you go someplace not unlike here, in either direction, or into the life of some future person yet unborn. Far harder to imagine non-existence, that state of being nothing, so that the most accessible way it can be envisioned is as a field of black, as the view when you close your eyes. But that’s simply blackness as a metaphor, another inexact and thus incorrect portrayal of something fundamentally unknowable. In trying to conceive of non-existence, blackness is all that’s accessible, and yet it’s a blackness where the very power of metaphor ceases to make sense, where language itself breaks down as if it were the laws of physics at the dark heart of the singularity.

In the Talmud, at Brachot 57b, the sages tell us that “Sleep is 1/60th of death,” and this equation has always struck me as just about right. It raises certain questions though: is the sleep that is 1/60th of death those evenings when we have a pyrotechnic, psychedelic panoply of colors before us in the form of surrealistic dreams, or is it the sleep we have that is blacker than midnight, devoid of any being, of any semblance of our waking identities? This would seem to me to be the very point on which all questions of skepticism and faith must hang. It seems obvious to me that sleep, that strangest of activities, for which neurologists still have no clear answer as to why it’s necessary (though we do know that it is), is a missive from the future grave, a seven-hour slice of death. So strange that we mock the “irrationalities” of ages past, when so instrumental to our own lives is something as otherworldly as sleep, when we die for a third of our day and return from realms of non-being to bore our friends with accounts of our dreams.

When we use the darkness of repose as a metaphor for death, we brush against the extremity of naked reality and the limitations of our own language. In imagining non-existence as a field of undifferentiated black, we may trick ourselves into experiencing what it would be to no longer be here, but that’s a fallacy. Black is still a thing. This inability to conceive of that reality is less than encouraging, which may be why deep down all of us, whether we admit it or not, are pretty sure that we’ll never die, or at least not completely. And yet the blackness of non-existence disturbs; how couldn’t it? Epicurus wrote as an argument against fear of our own mortality that “Death… is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not.”

Maybe that’s a palliative to some people, but it’s never been one to me. There is more sophistry than wisdom in the formulation, for it elides the psychology of being terrified at the thought of our own non-existence. Stoics and Epicureans have sometimes asked why we’re afraid of the non-existence of death, since we’ve already experienced non-existence before we were born. When I think back to the years before 1984, I don’t have a sense of an undifferentiated blackness; rather I have a sense of… well… nothing. That’s not exactly consoling to me. Maybe this is the height of egocentricity, but hasn’t everyone looked at photographs of their family from before they were born and felt a bit of the uncanny about it? Asking for a friend.

In 1714, the German philosopher Gottfried Wilhelm Leibniz asked in the Monadology “Why is there something rather than nothing,” and that remains the rub. For Martin Heidegger in the 20th century, that issue remained the “fundamental question of metaphysics.” I proffer no solution to it here, only the observation that when confronted with the enormity of non-existence, prudence forces us to admit the equivalently disturbing question of existence. Physicist Max Delbrück in Mind from Matter: An Essay on Evolutionary Epistemology quotes his colleague Niels Bohr, the father of quantum theory, as having once said that the “hallmark of any deep truth [is] that its negation is also a deep truth.” That is certainly the case with existence and non-existence, equally profound and equally disquieting. If we’re to apply colors to either, I can’t help but see oppositional white and black, with an ambiguity as to which is which.

9. If there can be a standard picture of God, I suspect that for most people it is a variation on the bearded, old man in the sky trope, sort of a more avuncular version of Michelangelo’s rendering from the Sistine Chapel ceiling. Such is an embodied deity, of dimensions in length, breadth, and height, and such is the Lord as defined through that modern heresy of literalism. The ancients were often more sophisticated than both our fundamentalists and our atheists (as connected as black and white). Older ways of speaking about something as intractable as God would often pass over it in silence, with an awareness that to limit God to mere existence was to limit too much.

In that silence there was the ever-heavy blossom of blackness, the all-encompassing field of darkness that contains every mystery to which there aren’t even any questions. Solzhenitsyn observed that “even blackness [can]… partake of the heavens.” Not even blackness, but especially blackness, for dark is the night. Theologians call this way of speaking about God “apophasis.” For those who embrace apophatic language, there is an acknowledgement that a clear definition of the divine is impossible, so that it is better to dwell in sacred uncertainties. This experience of God can often be a blackness in itself, what St. John of the Cross spoke of in his 1577 Spanish poem “The Dark Night of the Soul.” Content with how an absence can often be more holy than an image, the saint emphasized that such a dark night is “lovelier than the dawn.” There is a profound equality in undifferentiated blackness, in that darkness where features, even of God, are obscured. Maybe the question of whether or not God is real is as nonsensical as those issues of non-existence and death; maybe the question itself doesn’t make any sense, understanding rather that God isn’t just black. God is blackness.

10. On an ivory wall within the National Gallery, in Andrew Mellon’s palace constructed within this gleaming white city, there is a painting made late in life by Mark Rothko entitled Black on Grey. Measuring some 80 inches by 69.1 inches, the canvas is much taller than the average man, and true to its informal title it is given over to only two colors—a dark black on top fading into a dusty lunar grey below. Few among Rothko’s contemporaries in his abstract expressionist circle, the movement that shifted the capital of the art world from Paris to New York, had quite his sublimity of color. Jackson Pollock certainly had the kinetic frenzy of the drip, Willem de Kooning the connection to something still figurative in his pastel swirl. But Rothko had a panoply of color, from his nuclear oranges and reds to those arctic blues and pacific greens, what he described to Selden Rodman in Conversations with Artists as a desire to express “basic human emotions—tragedy, ecstasy, doom.”

Black on Grey looks a bit like what I imagine it would be to survey the infinity of space from the emptiness of the moon’s surface. These paintings towards the end of the artist’s life, made before he committed suicide by barbiturate and razor blade in his East 69th Street studio, took on an increasingly melancholic hue. Perhaps Rothko experienced what his friend the poet Frank O’Hara had written about as the “darkness I inhabit in the midst of sterile millions.” Rothko confirmed that his black paintings were, as with Goya’s, fundamentally about death.

In a coffee-table book, Rothko’s work can look like something from a paint sample catalog. The page does no justice to standing before the images themselves, to what Rothko described as the phenomenon of how “people break down and cry when confronted with my pictures.” For Rothko, such reactions were a type of communion; these spectators were “having the same religious experience I had when I painted them.” When you stand before Black on Grey, when it’s taken out from the sterile confines of the art history book or the reductions of digital reproduction, you’re confronted with a blackness that dominates your vision, as seeing with your eyes closed, as experiencing death, as standing in the empty Holy of Holies and seeing God.

With a giant field of black, the most elemental abstraction that could be imagined, this Jewish mystic most fully practiced the stricture to not make any graven image. He paradoxically arrived at the most accurate portrayal of God ever committed to paint. For all of their oppositions, both Infinity and Nothing become identical, being the same shade of deep and beautiful black, so that any differences between them are rendered moot.

Image credit: Unsplash/David Jorre.

What Is Italian America? It’s Complicated

“For years the old Italians have been dying/all over America.” -Lawrence Ferlinghetti

On the second floor of Harvard’s Fogg Museum, in an airy, well-lit, white-walled gallery, near a slender window overlooking a red-bricked Cambridge street, there is a display case holding three portraits on chipped wood not much bigger than postcards. Of varying degrees of aptitude, the paintings are of a genre called “Fayum Portraits,” after the region of Egypt where they’re commonly found. When the Roman ruling class established itself in this Pharaonic land during the first few centuries of the Common Era, they had themselves mummified in the Egyptian fashion while affixing Hellenistic paintings onto the faces of their preserved bodies. Across the extent of the Roman empire, from damp Britain to humid Greece, little of the more malleable painted arts survived, but in sun-baked Egypt these portraits could peer out 20 centuries later as surely as the desert dried out their mummified corpses. When people envision ancient Mediterranean art, they may think of the grand sculptures blanched a pristine white, Trajan’s Arch and the monumental head of Constantine, the colorful paint which once clung to their surfaces long since eroded away. And while the monumental marbles of classical art are what most people remember of the period, the Fayum portraits at Harvard provide an entirely more personal gaze across the millennia.

If white is the color we associate with those sculptures, then the portraits here in Cambridge are of a different hue. They are nut-brown, tanned from the noon-day sun, yellow-green, and olive. Mummy Portrait of a Woman with an Earring, painted in the second century, depicts in egg tempera on wood a dark-skinned middle-aged woman with commanding brown eyes, her black hair showing a bit of curl even as it is pulled back tightly on her scalp; a woman looking out with an assuredness that belies her anonymity over time. Mummy Portrait of a Bearded Man shows the tired look of an old man, grey beard neatly clipped and groomed, his wavy grey hair still with a hint of auburn and combed back into place. Fragments of a Mummy Portrait of a Man represents a far younger man, cleft chinned with a few days’ black stubble over his olive skin. What’s unnerving is the eerie verisimilitude of this nameless trio.

That they look so contemporary, so normal, is part of what’s unsettling. But they also unsettle because they’re there to assist in overturning our conceptions about what Roman people, those citizens of that vast, multicultural, multilingual, multireligious empire, looked like. Our culture is comfortable with the lily-white sculptures we associate with our Roman forebears, which were then imitated in our own imperial capitals; easier to pretend that the ancient Romans had nothing to do with the people who live there now, and yet when looking at the Fayum portraits I’m always struck by how Italian everybody looks.

The old man could be tending tomatoes in a weedy plot somewhere in Trenton; the middle-aged woman wearily volunteering for a church where she’s not too keen on the new Irish priest; and the young man with the stubble looks like he just got off a shift somewhere in Bensonhurst and is texting his friends to see who wants to go into the city. I’ve never seen anyone who actually looks like the statue of Caesar Augustus of Prima Porta carved from white stone, but you’ll see plenty of people who look like the Fayum portraits in Boston’s North End, Federal Hill, or Bloomfield (the one either in Jersey or in Pittsburgh). When I look at the Fayum portraits, I see people I know; I see my own family.

Despite my surname’s vowel deficiency, I’m very much Italian-American. Mathematically, twice as much as Robert De Niro, so I feel well equipped to offer commentary in answering the question with which I’ve titled this piece. Furthermore, as a second-generation American, I’m not that far removed from Ellis Island. My mother’s father immigrated from Abruzzo, that mountainous, bear-dwelling region that was the birthplace of Ovid, and his entire family and many of his fellow villagers were brought over to Westmoreland County, Pennsylvania, to work as stone masons, a trade they’d been plying since the first stone was laid on the Appian Way. My grandmother’s family was from Naples, where Virgil is buried, a teeming volcanic metropolis of orange and lemon trees, a heaven populated by devils as native-son Giordano Bruno wrote in the 16th century. For me, being Italian was unconscious; it simply was a fact no more remarkable than my dark hair or brown eyes.

Being Italian meant at least seven fishes on Christmas Eve and the colored lights rather than the white ones on the tree; it meant (and still means) cooking most things with a heavy dollop of olive oil and garlic; it means at least once a week eating veal parmesan, prosciutto and melon, calamari, spaghetti with tuna, or buffalo mozzarella with tomatoes. Being Italian meant laminated furniture in the homes of extended family, and Mary-on-the-half-shell; it meant a Catholicism more cultural than theological, with the tortured faces of saints vying alongside a certain type of pagan magic. Being Italian meant assumed good looks and a certain ethnic ambiguity; it meant uncles who made their own wine and grew tomatoes in the backyard.

Being Italian-American meant having an identity where the massive pop culture edifice that supplies representations of you implies that the part before the hyphen somehow makes the second half both more and less true. My position was much like Maria Laurino’s in Were You Always an Italian?: Ancestors and Other Icons of Italian America, where she writes that “All the pieces of my life considered to be ‘Italian’…I kept distinct from the American side, forgetting about the hyphen, about that in-between place where a new culture takes form.” What I do viscerally remember is the strange sense I had watching those corny old sword-and-sandal epics that my middle school Latin teacher used to fill up time with, a sense that those strangely Aryan Romans presented on celluloid were supposed to somehow be related to me. Actors whose chiseled all-American whiteness evoked the marbles that line museum halls.

Sculptures of Caesar Augustus were once a lot more olive than white as well. That classical Greek and Roman statuary was vividly painted, only to fade over time, has been known since the 19th century, even as contemporary audiences sometimes react violently to that reality. Using modern technology, archeologist Vinzenz Brinkmann has been able to restore some of the most famous Greek and Roman statues to glorious color, as he details in his Gods in Color: Polychromy in the Ancient World, but as the classicist Sarah Bond writes in Forbes, “Intentional or not, museums present viewers with a false color binary of the ancient world.” We think of the Romans as lily white, but the Fayum portraits demonstrate that they very much weren’t. That the individuals in these pictures should appear so Italian shouldn’t be surprising—Romans are Italians after all. Or at least in the case of the Fayum portraits they’re people from a mélange of backgrounds, including not just Romans, but Greeks, Egyptians, Berbers, Arabs, Jews, Ethiopians, and so on. Rome was, like our own, a hybridized civilization, and it’s marked on the faces that peer out towards us on that wall.

Last fall, at a handful of Boston-area colleges just miles from the museum, classical imagery was appropriated for very different ends. Students awoke to find their academic halls papered with posters left during the night by members of one of those fascistic groups that emerged after the 2016 presidential election, a bigotry revealed like the fungus growing underneath a rotting tree stump that’s been kicked over. This particular group combined images of bleached classical sculpture and neo-fascist slogans to make their white supremacist arguments. The Apollo Belvedere is festooned with the declaration “Our Future Belongs to Us,” 17th-century French Neo-Classical sculptor Nicolas Coustou’s bust of Julius Caesar has “Serve your People” written beneath it, and in the most consciously Trumpy of posters, a close-up on the face of Michelangelo’s David enjoins “Let’s Become Great Again.” There’s something particularly ironic in commandeering the David in the cause of white supremacy. Perhaps they didn’t know that that exemplar of the Italian Renaissance was a depiction of a fierce Jewish king as rendered by a gay, olive-skinned artist?

Such must be the central dilemma of the confused white supremacist, for the desire to use ancient Rome in their cause has been irresistible ever since Benito Mussolini concocted his crackpot system of malice known as fascismo corporativo, but the reality is that the descendants of those very same Romans often don’t appear quite as “white” as those supremacists would be comfortable with. This is especially important when considering that the Romans “did not speak in terms of race, a discourse invented many centuries later,” as scholar Nell Irvin Painter writes in The History of White People. Moral repugnance is a given when it comes to racist ideologies, but one should also never forget the special imbecility that comes along with arguing that you’re innately superior because you kinda, sorta, maybe physically resemble dead people who did important things. What makes the case of the posters more damning is that those who made them don’t even actually look like the people whose culture they’ve appropriated.

No doubt the father of celebrated journalist Gay Talese would be outraged by this filching. In a 1993 piece for The New York Times Book Review, Talese remembers his “furious and defensive father” exploding after he’d learned that the Protestant-controlled school board had rejected his petition to include Ovid and Dante in the curriculum, the elder man shouting at his son that “‘Italy was giving art to the world when those English were living in caves and painting their faces blue!’” It’s a particular twist that the descendants of those same WASPs now paper college campuses with posters of Italian sculptures from which they somehow claim patrimony. But that’s always been the predicament of the Western chauvinist, primed to take ownership of another culture as evidence of his own genius, while simultaneously having to explain his justifications for the disdain in which he holds the actual children of that culture.

Since the late 19th-century arrival of millions of immigrants from the Mezzogiorno, American racists have long contrived baroque justifications for why white Anglo-Saxon Protestants are the inheritors of Italian culture, while Italians themselves are not. Some of this logic was borrowed from Italy itself, where even today, Robert Lumley and Jonathan Morris record in The New History of the Italian South, some northerners will claim that “Europe ends at Naples. Calabria, Sicily, and all the rest belong to Africa.” Northern Italians, comparatively wealthier, better educated, and most importantly fairer, had been in the United States a generation before their southern cousins, and many Anglo-Americans borrowed that racialized animus against southerners which reigned (and still does) in the old country.

As Richard Gambino writes in Blood of my Blood: The Dilemma of the Italian-Americans, it was in the “twisted logic of bigotry” that these immigrants were “flagrantly ‘un-American.’ And Italians replaced all the earlier immigrant groups as targets of resentment about the competition of cheap labor.” This was the reasoning that claimed that all Italian accomplishments could be attributed to a mythic “Nordic” or “Teutonic” influence, so that any Mediterranean achievements were written away, orienting Rome towards a Europe it was only tangentially related to and away from an Africa that long had an actual influence. Notorious crank Madison Grant in his unabashedly racist 1916 The Passing of the Great Race claimed that Italians were now “storming the Nordic ramparts of the United States and mongrelizing the good old American stock,” with Gambino explaining that “In his crackpot explanation, Italians are the inferior descendants of the slaves who survived when ancient Rome died.”

Paleontologist Stephen Jay Gould writes that Grant’s book was “the most influential tract of American scientific racism,” and Adolf Hitler wrote a letter to the author claiming “The book is my Bible.” Grant, a lawyer and eugenicist, was influential in both the Palmer Raids, a series of unconstitutional police actions directed by the Wilson administration against immigrants suspected of harboring anarchist and communist sympathies, and the xenophobic nastiness of the 1924 Johnson-Reed Act, which made eastern and southern European immigration slow to a trickle. Incidentally, it was the Johnson-Reed Act that, had it been passed 10 years earlier, would have barred my mother’s father from entering the United States; a law that former Attorney General Jefferson Beauregard Sessions lauded in a 2017 interview with Stephen Bannon, arguing that the banning of immigrants like those in my family “was good for America.”

In Chiaroscuro: Essays of Identity, Helen Barolini writes that “Italian Americans are too easily used as objects of ridicule and scorn,” and while that’s accurate, it’s a rhetoric that has deep and complicated genealogies. Italy has always occupied a strange position in the wider European consciousness. It is simultaneously the birthplace of “Western Civilization,” and an exoticized, impoverished, foreign backwater at the periphery of the continent; the people who first modeled a noxious imperialism, and the subjugated victims of later colonialism. A pithy visual reminder of Italy’s status in early modern Europe can be seen in the German painter Hans Holbein the Younger’s 1533 masterpiece The Ambassadors, which depicts two of the titular profession surrounded by their tools. On a shelf behind them sits a globe. Europe is differentiated from the rest of the world by being an autumnal brown-green, with the exception of two notable regions colored the same hue as Africa—Ireland and Sicily. In Are Italians White?: How Race is Made in America, coedited with Salvatore Salerno, historian Jennifer Guglielmo explains that the “racial oppression of Italians had its root in the racialization of Africans,” something never more evident than in the anti-Italian slur “guinea” with its intimations of Africanness, this implication of racial ambiguity having profound effects on how Italians were understood and how they understood themselves.

In the rhetoric and thought of the era, Italy was somehow paradoxically the genesis of Europe, while also somehow not European. As such, Italians were to be simultaneously emulated and admired, while also reviled and mocked. During that English Renaissance, which was of course a sequel to the original one, books like Baldassare Castiglione’s The Courtier and Niccolò Machiavelli’s The Prince, with their respective paeans to sensuality and duplicity, molded a particular view of Italianness that has long held sway in the English imagination. Consider all of the Shakespeare plays set in an imagined Italy: The Taming of the Shrew, Two Gentlemen of Verona, Much Ado About Nothing, Romeo and Juliet, Julius Caesar, Titus Andronicus, Othello, Coriolanus, The Winter’s Tale, and The Merchant of Venice, not to mention the occasional appearance of Romans in other plays. Shakespeare’s plays, and other icons of the English Renaissance, set a template that never really faded: a simultaneous attraction to and disgust at a people configured as overly emotional, overly sexual, overly flashy, overly corrupt, overly sensual, and with a propensity less cerebral than hormonal. And the criminality.

Long before Mario Puzo or The Sopranos, Renaissance English writers impugned Italians with a particular antisocial perfidy. Such is displayed in Thomas Nashe’s 1594 The Unfortunate Traveller: or, the Life of Jack Wilton, which could credibly be called England’s first novel. In that picaresque, the eponymous character perambulates through the Europe of the early 16th century, encountering luminaries like Thomas More, Erasmus, Henry Howard, Martin Luther, and Cornelius Agrippa, and witnessing events like the horrific siege at Münster in Westphalia. Most of Nashe’s narrative, however, takes place in “the Sodom of Italy,” and an English fever dream of that country’s excess settles like a yellow fog. One Englishman laments that the only lessons that can be learned here are “the art of atheism, the art of epicurizing, the art of whoring, the art of poisoning, the art of sodomitry.”

Nashe’s penultimate scene, which reads like Quentin Tarantino, has an Italian nobleman being executed for the violent revenge he took upon his sister’s rapist. On the scaffold, the nobleman declares that “No true Italian but will honor me for it. Revenge is the glory of arms and the highest performance of valor,” and indeed his revenge was of an immaculate quality. He’d first forced his sister’s assailant to abjure God and condemn salvation, and then, satisfied that such blasphemy would surely send his victim to hell, he shot him in the head. A perfect revenge upon not just the body, but the soul. Nashe presents such passion as a ritual of decadent Mediterranean vendetta, simultaneously grotesque and inescapably evocative.

From Nashe until today there has often been a presumption of vindictive relativist morality on the part of Italians, and it has slurred communities with an assumption of criminality. In the 20th century sociologists claimed that the dominant Italian ethic was “amoral familism,” whereby blood relations had precedence over all other social institutions. Nashe’s nobleman is the great-grandfather to Michael Corleone in the collective imagination. Do not read this as squeamish sensitivity; I’d never argue that The Godfather, written and directed by Italians, is anything less than an unmitigated masterpiece. Both Puzo’s novel and Francis Ford Coppola’s adaptation are potent investigations of guilt, sin, and evil. I decided not to join the Sons of Italy after I saw how much of their concern was with stereotypes on The Sopranos, which I still regard as among the greatest television dramas of all time. I concur with Bill Tonelli, who in his introduction to The Italian American Reader snarked that “nobody loves those characters better than Italian Americans do,” and yet I recall with a cringe the evaluation of The Godfather given to me by a non-Italian, that the film was about nothing more than “spaghetti and murder.”

Representations of Italianness in popular culture aren’t just Michael Corleone and Tony Soprano; there’s also the weirdly prevalent sitcom stereotype of the lovable, but dumb, hypersexual goombah. I enter into consideration Arthur “The Fonz” Fonzarelli from Happy Days, Tony Micelli from Who’s the Boss?, Vinny Barbarino of Welcome Back, Kotter, and of course Friends’ Joey Tribbiani. Once I argued with my students about whether there was something offensive about The Jersey Shore, finally convincing them of the racialized animus in the series when I asked why there had never been an equivalent show about badly behaved WASPs called Martha’s Vineyard.

Painter explains that “Italian Americans hovered longer on the fringes of American whiteness,” and so any understanding must take into account that until recently Italians were still inescapably exotic to many Americans. Tonelli writes that “in an era that supposedly values cultural diversity and authenticity, the portrait of Italian Americans is monotonous and observed from a safe distance.” The continued prevalence of these stereotypes is a residual holdover from the reality that Italians are among the last of “ethnics” to “become white.” Tonelli lists the “mobsters, the urban brute, the little old lady shoving a plate of rigatoni under your nose,” declaiming that “it gets to be like a minstrel show after a while.”

Consider Judge Webster Thayer, who after the 1921 sham trial of anarchists Bartolomeo Vanzetti and Nicola Sacco would write that although they “may not have committed the crime” attributed to them, they are “nevertheless morally culpable” because they were both enemies of “our existing institutions… the defendant’s ideals are cognate with crime.” Privately, Thayer bragged to a Dartmouth professor, “Did you see what I did to those anarchistic bastards the other day?” As late as 1969, another professor, this one at Yale, felt free to tell a reporter, in response to a query about a potential Italian-American New York mayoral candidate, that “If Italians aren’t actually an inferior race, they do the best imitation of one I’ve seen.”

But sometime in the decades after World War II, Italians followed the Irish and Jews into the country club of whiteness with its carefully circumscribed membership. Guglielmo explains that initially “Virtually all Italian immigrants [that] arrived in the United States [did so] without a consciousness about its color line.” Victims of their birth nation’s rigid social stratification based on complexion and geography, the new immigrants were largely ignorant of America’s racial history, and thus immune to the anti-black racism that was then prevalent. These immigrants had no compunction about working and living alongside African Americans, and often understood themselves to occupy a similar place in society.

But as Guglielmo explains, by the second and third generation there was an understanding that to be “white meant having the ability to avoid many forms of violence and humiliation, and assured preferential access to citizenship, property, satisfying work, livable wages, decent housing, political power, social status, and a good education, among other privileges.” Political solidarity with black and Hispanic Americans (we forget that Italians are Latinx too) was abandoned in favor of assimilation to the mainstream. Jennifer Gillan writes in the introduction to Growing Up Ethnic in America: Contemporary Fiction About Learning to Be American that “Americans have often fought bitter battles over what it means to be American and who exactly gets to qualify under the umbrella term,” and towards the end of the 20th century Italians had fought their way into that designation, but they also left many people behind. In the process, a beautiful radical tradition was forgotten, so that we traded Giuseppe Garibaldi for Frank Rizzo, Philly’s racist mayor of the ’70s; so that now instead of Sacco and Vanzetti we’re saddled with Antonin Scalia and Rudy Giuliani. As Guglielmo mourns, “Italians were not always white and the loss of this memory is one of the great tragedies of racism in America.”

If there is to be any antidote, then it must be in words, for literature allows for imaginative possibilities and the complexities of empathy. What is called for is a restitution, or rather a recognition, of an Italian-American literary canon acting as bulwark against both misrepresentations and amnesia. Talese infamously asked if there were no “Italian-American Arthur Millers and Saul Bellows, James Baldwins and Toni Morrisons, Mary McCarthys and Mary Gordons, writing about their ethnic experiences?” There’s an irony to this question, as Italians in the old country would never think to ask where their writers are, the land of Virgil and Dante secure in its literary reputation, with more recent years seeing the celebrated post-modernisms of Italo Calvino, Primo Levi, Dario Fo, and Umberto Eco. In this century, the citizens of Rome, Florence, and Milan face different questions than their cousins in Newark, Hartford, or Providence. Nor do we bemoan a dearth of examples in other fields: that Italians can hit a baseball or throw a punch can be seen in Joe DiMaggio’s home runs and Rocky Marciano’s slugging; that we can strike a note is heard in Frank Sinatra and Dean Martin; that we can shoot a picture is proven by Coppola, Brian De Palma, and Martin Scorsese.

Yet in the literary arts no equivalent names come up, at least no equivalent names that are thought of as distinctly Italian. Regina Barreca, in the introduction to Don’t Tell Mama!: The Penguin Book of Italian American Writing, says that there is an endurance of the slur which sees Italians as “deliberately dense, badly educated, and culturally unsophisticated.” By this view the wider culture is fine with the physicality of boxers and baseball players, the emotion and sensuality of musicians, even the Catholic visual idiom of film as opposed to the Protestant textuality of the written word, so that the “intellectual” pursuits of literature are precluded. She explains that what remains is an “idea of Italian Americans as a people who would never choose to read a book, let alone write one,” though as Barreca stridently declares, this is a “set of hazardous concepts [which] cannot simply be outlived; it must be dismantled.”

I make no claims to originating the idea that we must establish an Italian-American literary canon; such has been the mainstay of Italian-American Studies since that field’s origin in the ’70s. This has been the life’s work of scholars like Gambino, Louise DeSalvo, and Fred Gardaphé, not to mention all of the anthology editors I’ve referenced. Tonelli writes that “Our time of genuine suffering at the hands of this bruising country passed more or less unchronicled, by ourselves or anyone else,” yet there are hidden examples of Italian-American voices writing about an experience that goes beyond mafia or guido stereotypes. For many of these critics, the Italian-American literary canon was something that already existed; it was merely a question of being able to recognize what exists beyond the stark black and red cover of The Godfather. Such a task involved the elevation of lost masterpieces like Pietro di Donato’s 1939 proletarian novel Christ in Concrete, but also a reemphasis on the vowels at the ends of names for authors who are clearly Italian, but are seldom thought of as such.

That Philip Roth is a Jewish author goes without saying, but rarely do we think of the great experimentalist Don DeLillo as an Italian-American author. A restitution of the Italian-American literary canon would ask what precisely is uniquely Italian about a DeLillo. For that matter, what are the Italian-American aesthetics of poets like Lawrence Ferlinghetti, Jay Parini, Diane Di Prima, and Gregory Corso? What can we better say about the Italianness of Gilbert Sorrentino and Richard Russo? Where do we locate the Mezzogiorno in the criticism and scholarship of A. Bartlett Giamatti, Frank Lentricchia, and Camille Paglia? Barreca writes that “Italian Americans live (and have always lived) a life not inherited, but invented,” and everything is to be regained by making that reinvention for ourselves. Furthermore, I’d suggest that the hybridized nature of what it has always meant to be Italian provides a model to avoid the noxious nationalisms that increasingly define our era.

Guglielmo writes that “Throughout the twentieth century, Italian Americans crafted a vocal, visionary, and creative oppositional culture to protest whiteness and build alliances with people of color,” and I’d argue that this empathetic imagination was born out of the pluralistic civilization of which the Italians were descendants. Contrary to pernicious myths of “racial purity,” the Romans were as diverse as Americans are today, drawing not just from Italic peoples like the Umbrians, Sabines, Apulians, and Etruscans, but also from Egyptians, Ethiopians, Berbers, Carthaginians, Phoenicians, Greeks, Anatolians, Gauls, Huns, Dacians, Franks, Teutons, Vandals, Visigoths, Anglo-Saxons, Normans, Iberians, Jews, Arabs, and Celts, among others. A reality quite contrary to the blasphemy of those posters with their stolen Roman images.

Rome was both capital and periphery, a culture that was a circle with no circumference whose center can be everywhere. Christine Palamidessi Moore in her contribution to the Lee Gutkind and Joanna Clapps Herman anthology Our Roots are Deep with Passion: Creative Nonfiction Collects New Essays by Italian American Writers notes that “Italy is a fiction: a country of provinces, dialects, and regions, and historically because of its location, an incorporator of invaders, empires, and bloodlines.” Sitting amidst the Mare Nostrum of its wine-dark sea, Italy has always been at a nexus of Europe, Africa, and Asia, situated between north and south, east and west. Moore explains that the “genuineness of the ethnicity they choose becomes more obscure and questionable because of its mixed origins; however, because it is voluntary, the act of choosing sustains the identity.”

The question then is not “What was Italian America?” but rather “What can Italian America be?” In 1922 W.E.B. DuBois, the first African-American to earn a doctorate from Harvard, spoke to a group of impoverished Italian immigrants at Chicago’s Hull House. Speaking against the Johnson-Reed Act, DuBois appealed to a spirit of confraternity, arguing that there must be a multiethnic coalition against a “renewal of the Anglo-Saxon cult: the worship of the Nordic totem, the disenfranchisement of Negro, Jew, Irishman, Italian, Hungarian, Asiatic and South Sea Islander.” When DuBois spoke against the “Anglo-Saxon cult” he condemned not actual English people, but rather the fetish that believes only those of British stock can be “true Americans.” When he denounced the “Nordic totem,” he wasn’t castigating actual northern Europeans, but only the system that claims they are worthier than everyone else. What DuBois condemned was not people, but rather a system that today we’ve taken to calling “white privilege,” and he’s just as correct a century later. The need for DuBois’s coalition has not waned.

Italian-Americans can offer the example of a culture that was always hybridized, always at the nexus of different peoples. Italians have never been all one thing or the other, and that seems to me the best way to be. It’s this liminal nature that’s so valuable, for it provides an answer to the idolatries of ancestry that are once again emerging in the West (with Italy no exception). DuBois offered a different vision, a coalition of many hues marshaled against the hegemony of any one. When I meet the gaze of the Fayum portraits, I see in their brown eyes an unsettling hopefulness from some 20 centuries ago, looking past my shoulder and just beyond the horizon where perhaps that world may yet exist.

Alliteration’s Apt and Artful Aim

“It used to be a piece of good advice to all young writers to avoid alliteration; and the advice was sound, in as much as it prevented daubing. None the less for that, was it abominable nonsense, and the mere raving of those blindest of the blind who will not see. The beauty of the contents of a phrase, or of a sentence, depends implicitly upon alliteration.”
—Robert Louis Stevenson, “On Some Technical Elements of Style in Literature” (1905)

When the first English poetry was given by the gift and grace of God it was imparted to an illiterate shepherd named Cædmon, and the register in which it was received was alliterative. In the seventh century, the English, as they had yet to be called, may have had Christianity, but they did not yet have poetry. Pope Gregory I, having seen a group of them sold as slaves in the markets of Rome, had said “They are not Angles, but angels,” and yet these seraphim did not sing (yet). There among his sheep at the Abbey of Whitby in the rolling Northumbrian countryside, Cædmon served a clergy whose prayers were in a vernacular not their own, among a people of no letters. A lay brother, Cædmon feasted and drank with his fellow monks one evening when they all took to reciting verse from memory (as one does), playing their harps as King David had in the manner of the bards of the Britons, the scops of the Saxons, the Makers of song–for long before poetry was written it was plucked and sung.

In his Ecclesiastical History of the English People, St. Bede described how the monks were “sometimes at entertainments” and that it was “agreed for the sake of mirth that all present should sing in their turn.” But in a scene whose face-burning embarrassment still resonates a millennium-and-a-half later, Bede explained that when Cædmon “saw the instrument come towards him, he rose up from the table and returned home.” Pity the simple monk whom Alasdair Gray in The Book of Prefaces described as a “local herdsman [who] wanted to be a poet though he had not composed anything.” An original composition would have to wait for that night. Cædmon went to sleep among his mute animals, but in the morning he arose with the fiery tongue of an angel. Bede records that in those nocturnal reveries “someone” came to Cædmon asking the herdsman to sing of “the beginning of created things.” Like his older contemporary, the prophet Muhammad, some angelic visitor had brought to Cædmon the exquisite perfection of words, and with a commission most appropriate–to create English verse on the topic of creation itself. When Cædmon awoke, he was possessed with the consonantal bursts of a hot, orange iron bar being hammered against a glowing, sparking anvil; the sounds in his head were the characteristic alliteration of his native English.

That bright night in a dark age, what was delivered unto the shepherd were the first words of English poetry: “Nū scylun hergan     hefaenrīcaes Uard, / metudæs maecti     end his mōdgidanc, / uerc Uuldurfadur,    suē hē uundra gihwaes, / ēci dryctin     ōr āstelidæ / hē ǣrist scōp     aelda barnum / heben til hrōfe,     hāleg scepen,” and so on and so forth. The other monks brought Cædmon to the wise abbess St. Hilda, who declared this delivery a miracle (the hymn survives in 19 extant manuscripts). Gray described this genesis of English literature: A herdsman sang in a “Northumbrian dialect of Anglo-Saxon” to an amanuensis “who got learning from the Irish Scots,” his narrative being a “Jewish creation story transmitted to him through at least three other languages by a Graeco-Roman-Celtic-Christian church” with verse forms “learned from pagan German warrior chants.” The migrations of peoples and stories would, as with all languages, contribute to the individual aural soul of English, so that the result was an Anglo-Saxon literature that sounded like wind blowing in over the whale-road of the North Sea, reminding us that “Alliteration is part of the sound stratum of poetry. It predates rhyme and takes us back to the oldest English and Celtic poetries,” as Edward Hirsch writes in A Poet’s Glossary.

Read that bit of quoted verse from Cædmon aloud and it might not sound much like English to you, the modern translation roughly reading as “Now [we] must honor the guardian of heaven, / the might of the architect, and his purpose, / the work of the father of glory / as he, the eternal lord, established the beginning of wonders” (and so on and so forth). But if you sound out what philologists call “Anglo-Saxon,” you’ll start to hear the characteristic rhythms of English–the staccato firing of short, consonant-rich words, and most of all the alliteration. Anglo-Saxon, if heard without concentration, can sound like someone speaking English in another room just beyond your hearing; it can sound like an upside-down version of what we speak every day; it can sound like what our language would be if imitated by a non-fluent speaker. Cædmon’s verse may have been gifted from angels, but the ingredients were his tongue’s phonemes, and unlike the Romance languages’ rhyme-ready vowels, his were hard Germanic edges.

Such was the template set by Cædmon, for though “Old English meter is not fully understood,” as Derek Attridge explained in Poetic Rhythm: An Introduction, it does appear to “have been written according to complex rules.” Details of prosody are beyond my purview, but as a crackerjack explanation, what defines Anglo-Saxon meter is a heavy reliance on alliteration, whereby what connects the two halves of a line, separated by a caesura (the gap you see between words in the bit of Cædmon quoted above), is an alliterated metrical stress. In Cædmon’s first line we have the alliteration of “hergan/hefaenrīcaes,” in the second line the alliterative triumvirate of “metudæs/maecti/mōdgidanc,” a pattern that continues throughout the rest of the hymn. It is a meter that David Crystal, in The Stories of English, described as “the most structurally distinctive verse form to have emerged in the history of English.”

The vast majority of Anglo-Saxon’s “structurally distinctive verse,” as with all literatures, is lost to us. By necessity, oral literature disappears, as ephemeral as breath in the cold. Words are subject to decay; poetry to entropy. Only about 400 manuscripts of Anglo-Saxon survive, fewer than the average number of books in a professor’s office in Cambridge, or Berkeley, or Ann Arbor. Of those, slightly fewer than 200 are considered “major,” and there are but four major manuscripts specifically of poetry. The earliest of these, the Junius manuscript, is that which contains Cædmon’s hymn; the latest of these, the Nowell Codex, contains among other things the only extant Anglo-Saxon epic, that which we call Beowulf. Such is undoubtedly an insignificant percentage of what was once written, of what was once sung to the accompaniment of a harp. Much was lost in the 16th century when Henry VIII decided to dissolve the monasteries, so that most alliterative verse was either turned to ash or stripped into the bindings of other books, occasionally to be discovered by judicious bibliographers. Almost all Anglo-Saxon literature, from poetry to prose, hagiography to history, bible translation to travelogue, could be consumed by the committed reader in a few months while squirreled away in a Northumbrian farmhouse.

Anglo-Saxon has a distinctive literary register that revels in riddles, in ironic understatement, in the creative metaphorical neologisms called “kennings,” and of course in alliteration. This is the literature of the anonymous poem “The Seafarer,” a fatalistic elegy of a mere 124 lines about the life of a sailor, whose opening was rendered by the 20th-century modernist poet Ezra Pound as “May I for my own self song’s truth reckon, / Journey’s jargon, how I in harsh days / Hardship endured oft.” Pound was not only bewitched by alliteration’s alluring galumph; the meter of “The Seafarer” also propels the action of the poem forward, the cadence of natural English speech tamed by the defamiliarization of enjambment and stress, rendering it simultaneously ordinary and odd (as the best poetry must). Anglo-Saxon is a verse of which the Irish poet and translator of Beowulf Seamus Heaney remarked that even when the language is elevated it’s also “always, paradoxically, buoyantly down to earth.” Read aloud “The Wanderer,” which sits alongside “The Seafarer” in the compendium known as the Exeter Book, where the anonymous scop sings of “The thriving of the treeland, the town’s briskness, / a lightness over the leas, life gathering, / everything urges the eagerly mooded / man to venture on the voyage he thinks of, / the faring over flood, the far bourn.” Prosody is an art of physical feeling before it ever is one of semantic comprehension–poems exist in the mouth, not in the mind. Note the mouth-feel of “The Wanderer’s” aural sense, the way in which reading it aloud literally feels good. Poetry is a science of placing tongue against teeth and palate; it is not philosophy. This is what Heaney described as “the element of sensation while the mind’s lookout sways metrically and farsightedly,” and as English is an alliterative tongue it’s that alliteration which gives us sensation. A natural way of speaking, for as Heaney wrote, “Part of me… had been writing Anglo-Saxon from the start.”

The cumulative effect of this meter is a type of sonic galloping, the clip-clop of horses across the frozen winter ground of a Northumbrian countryside. Such was the sound imparted to Cædmon by his unseen angel, and these were the rules drawn from the natural ferment and rich, black soil of the English language itself, where alliteration grows from our consonant top-heavy tongue. Other Germanic languages based their prosody on alliteration for the simple reason that, in the Icelandic of the Poetic Eddas or the German of Muspilli, the sounds of the words themselves make it far easier to alliterate than to rhyme, as in Italian or French. The Princeton Handbook of Poetic Terms, edited by Roland Greene and Stephen Cushman, explains that among the “four most significant devices of phonic echo in poetry”—rhyme, assonance, consonance, and alliteration—it was the last which most fully defined Anglo-Saxon prosody, grown from the raw sonic materials of the language itself. To adopt a line from Charles Churchill’s 1763 The Prophecy of Famine, Cædmon may have “prayed / For apt alliteration’s artful aid,” but the angel only got his attention–the sounds were already in the monk’s speech.

Every language has a certain aural spirit to it, a type of auditory fingerprint which is related to those phonemes, those sounds, which constitute the melody and rhythm of any tongue. Consider the Romance languages with their languid syllables, words ending with the open-mouthed expression of the sprawled vowel; the spittle-flecked plosives of the Slavic tongues; the phlegmy gutturals of the Germanic; and, despite their consonants, the gentle lilt of the Gaelic languages. Sound can be separate from meaning, something that can be felt in the body without the need to comprehend it in the mind.

In his masterpiece Gödel, Escher, Bach: An Eternal Golden Braid, the logician Douglas Hofstadter provides examples of individual languages’ aural spirit. Hofstadter examines several different “translations” of the Victorian author Lewis Carroll’s celebrated nonsense lyric “Jabberwocky” from Through the Looking-Glass and What Alice Found There. That strange and delightful poem, as you will probably recall, encapsulates the aural spirit of English rather well. Carroll famously declared “Twas brillig, and the slithy toves / Did gyre and gimble in the wabe; / All mimsy were the borogoves, / And the mome raths outgrabe.” Other than some articles and conjunctions, the poem is almost entirely nonsense. Yet in its hodge-podge of invented nouns, verbs, and adjectives, most readers can imagine a fairly visceral scene, a picture generated not from actual semantic meaning, but rather from the strange wisdom of English’s particular aural soul. From “Jabberwocky” we get a sense of the language’s preponderance of consonants and its relatively short words, a language that sounds like chewing a tough slab of air-dried meat. As Hofstadter explains, this poses a difficulty in “translating,” because the aural sense of English has to be converted into that of another language. Despite that difficulty, there have been several successful attempts, each of which enacts the auditory anatomy of a particular tongue.

Frank L. Warrin’s French translation of Carroll’s poem begins, “Il brilgue: les tôves lubricilleux / Se gyrent en vrillant dans le guave”; Adolfo de Alba’s Spanish starts, “Era la asarvesperia y los flexilimosos toves / giroscopiaban taledrando en el vade”; and my personal favorite, the German of Robert Scott’s first stanza, reads as “Es brillig war. Die schlichte Toven / Wirrten und wimmelten in Waben; / Und aller-mümsige Burggoven / Die mohmen Räth’ ausgraben.” What all these translations accomplish is a sense of the sounds of a language, the aural soul that I speak of. You need not be fluent to identify a language’s aural soul; unfamiliarity with the actual literal meanings of a language might arguably make it easier for a listener to detect those distinct features that define a dialect. Listen to a YouTube video of the late, prodigious comedic genius Sid Caesar, a master of “double-talk,” who while working in his father’s Yonkers restaurant overheard speakers of “Italian, Russian, Polish, Hungarian, French, Spanish, Lithuanian, and even Bulgarian” and subsequently learned that he could adeptly parrot the cadence and aural sense of those and other languages.

In his memoir Caesar’s Hours: My Life in Comedy, with Love and Laughter, the performer observes that “Every language has its own song and rhythm.” If you’re fluent in fake French or suspect Spanish, you may naturally inquire as to what fake English sounds like. When I lived in Scotland I had a French roommate to whom I posed that query, and after much needling (and a few drinks) he recited a sentence of perfect counterfeit English. Guttural as German, but with far shorter words, it was a type of rapid-fire clanging-and-clinking, words shooting out with a “ping” as if lug-nuts from a malfunctioning robot. If you want a more charitable interpretation of what American English sounds like, listen to Italian pop star Adriano Celentano’s magnificently funky fake-English song “Prisencolinensinainciusol,” where with all the guttural urgency that our language requires, he sings out “Uis de seim cius nau op de seim / Ol uoit men in de colobos dai / Not s de seim laikiu de promisdin / Iu nau in trabol lovgiai ciu gen.”

If every language has this sonic sense, then poetry is the ultimate manifestation of a tongue’s unique genius; a language’s consciousness made manifest and self-aware, bottled and preserved in the artifact of verse. Language lives not in the mind, but rather in the larynx, the soft palate, the mouth, and the tongue, and its progeny are the soft serpentine sibilant, the moist plosive, the chest gutturals’ heart-burn. A language’s poetry will reflect the natural sounds that have developed for its speakers, as can be witnessed in the meandering inter-locking softness of Italian ottava rima or the percussive trochaic tetrameter of Finland’s The Kalevala. For Romance languages, rhyme is relatively easy–all of those vowels at the ends of words. That Anglo-Saxon meter, like that of its consonant-heavy West Germanic cousins such as Frisian, should be so heavily alliterative also makes sense. Why then is rhyme historically the currency of English-language poetry after the Anglo-Saxon era? After all, one couldn’t imagine a philistine’s declaration of “This can’t be poetry; it doesn’t alliterate.”

The reasons are both complex and contested, though the hypothesis is that following the Norman invasion (1066 and all that) Romance poetic styles were imposed on the Germanic-speaking peoples (as indeed they’d once imposed their language on the Celts). In A Poet’s Guide to Poetry, Mary Kinzie writes that “In the history of English poetry, rhyme takes over when alliteration leaves off.” After a century and a half of free verse, and almost five centuries of blank verse, rhyme is still so enshrined that your average reader assumes it’s definitional to what poetry is. But alliteration, despite its integral role in the birth of our prosody, and its continual presence (often accidental) in our everyday speech, is relegated to the role of adult child from a first marriage whose invitation to Thanksgiving dinner has been lost in the mail. There is an important historical exception to this, during the High Middle Ages, when the Germanic throatiness of Old English was rounded out by the softness of Norman French and some poets chose to once again write their verse in the characteristic alliterative meter of their ancestors. Or maybe that’s only what seems to have happened; it’s entirely possible that English poets never chose to abandon alliteration, and the so-called alliterative revival, with poems like the anonymous Pearl Poet’s Sir Gawain and the Green Knight and William Langland’s Piers Plowman, is just that which survives, giving the illusion that something has returned which never actually left.

In All the Fun’s in How You Say a Thing: An Explanation of Meter and Versification, Timothy Steele writes that some regard the alliterative revival as a “conscious protest against the emerging accentual-syllabic tradition and as a nationalistic effort to turn English verse back to its German origins.” Whether that’s the case is an issue for medievalists to debate. What is true is that the Middle English poetry of this period represents a vernacular renaissance of the 13th and 14th centuries, when much English poetry read as, well, English. Again, with a sense of mouth-feel, read aloud Simon Armitage’s translation of Sir Gawain and the Green Knight when the titular emerald man picks up his just-severed head and says, “you must solemnly swear / that you’ll seek me yourself; that you’ll search me out / to the ends of the earth to earn the same blow / as you’ll dole out today in this decorous hall.” Those s’s and e’s and d’s–there’s a pleasure in reading alliteration that can avoid the artifice of rhyme, with its straitjacket alterity. Armitage explains that “alliteration is the warp and weft of the poem, without which it is just so many fine threads” (that line itself replicating Anglo-Saxon meter, albeit in prose). For the translator, alliteration exists as “percussive patterning” which is there to “reinforce their meaning and to countersink them within the memory,” what Crystal calls “phonological mnemonics.”

But if the alliterative poetry of the High Middle Ages signaled not a rupture but an occluded continuity, then it is true that by the arrival of the Renaissance, rhyme would permanently supplant it as the prosodic element du jour. The adoption of continental models of verse is in large part due to the growing influence of Renaissance humanism, yet the 14th century also saw the political turmoil of the Peasants’ Rebellion, when an English-speaking rabble organized against the French-speaking aristocracy, motivated by a proto-Protestant religious movement called Lollardy and celebrated in a flowering of vernacular spiritual writing, which contributed to the Church’s eventual fear of bible translation. After the rebellion was violently put down in 1381, there remained a general distrust among the ruling classes of scripture rendered into an English tongue–could a similar distrust of alliteration, too common and simple, have shifted poetry away from it for good? Melvyn Bragg writes in The Adventures of English: The Biography of a Language that poets like Langland (himself possibly a Lollard) rendered their verse “more believable for being so plainly painted,” a poetry “meant to sound like the language of the people.” This was precisely what was to be avoided, and so perhaps alliteration migrated from the rarefied realms of poetry back to simple speech, eventually reserved mostly for instances in which, as Bragg writes, there is a need to “Grab the listener’s or reader’s attention…such as in news headlines and advertising slogans.”

Madison Avenue understands alliteration’s deep wisdom, such that “Guinness is good for you” (possibly true) and “Greyhound going great” (undoubtedly not true). Though still at home in advertising, alliteration is now, as Attridge explained, a “rare visitor to the literary realm,” with Hirsch adding that it exists as a “subterranean stream in English-language poetry.” Subterranean streams can still have a mighty roar, however, as alliteration’s preponderance in the popular realm of slogans and lyrics attests. Alliteration exists because alliteration works, and while it’s been largely supplanted by rhyme for several hundred years it remains in the wheelhouse of some of our most canonical poets. The 19th-century British poet Gerard Manley Hopkins largely derived his “sprung rhythm” from Anglo-Saxon meter, as when he lets the Sibyl’s leaves declare “let them be left, wildness and wet; / Long live the weeds and the wilderness yet,” calling forth all the forward momentum of “The Wanderer.”

For an exhibition of alliteration’s sheer magnificent potential, examine W.H. Auden’s under-read 1947 masterpiece The Age of Anxiety, which demonstrates our indigenous trope’s full power, especially in a contemporary context, and not just as a fantasy-novel affectation for when an author needs characters to sound like stock Saxons. Across six subsections spread amongst 138 pages, four war-time characters in a Manhattan bar reflect on modernity’s traumas in a series of individual inner alliterative monologues. Alliteration is perfectly attuned to this setting, which Auden later described as “an unprejudiced space where nothing particular ever happens,” because alliteration simultaneously announces itself as common (i.e., “This is what English speakers sound like”) while also clearly poetic (“But we don’t normally alliterate as regularly as that”). Take the passage in which one of the characters, now drunk in a stream-of-consciousness reverie (for that’s what good Guinness does for you…), examines his own face across from himself in the barroom mirror (as one does). Auden writes:
How glad and good when you go to bed,
Do you feel, my friend? What flavor has
That liquor you lift with your left hand;
Is it cold by contrast, cool as this
For a soiled soul; does yourself like mine
Taste of untruth? Tell me, what are you
Hiding in your heart, some angel face,
Some shadowy she who shares in my absence,
Enjoys my jokes? I’m jealous, surely,
Nicer myself (though not as honest),
The marked man of romantic thrillers
Whose brow bears the brand of winter
No priest can explain, the poet disguised,
Thinking over things in thieves’ kitchens.
That thrum supplied by alliterative meter, so much sharper and more angular than rhyme’s pleasing congruencies or assonance’s soft rounding, feels like nothing so much as the rushing blood in the temples of that drunk staring at his own disheveled reflection. Savor the slink of that “shadowy she who shares” or the pagan totemism of the “marked man” and the colloquialism of that which is “Hiding in your heart.” The Age of Anxiety perfectly synthesizes a type of vernacular barroom speech that nonetheless is clearly poetry in the circumscribed constrictions of its language. In choosing our rhetorical tropes certain metaphysical implications necessarily announce themselves. Were The Age of Anxiety rhymed or in free verse it would be a very different poem, in this case a less successful one (despite our collective ignorance of Auden’s elegy). At one point, one of the characters imagines the future, imagines us, and she predicts that the future will be “Odourless ages, an ordered world / Of planned pleasures and passport-control, / Sentry-go sedatives, soft drinks and / Managed money, a moral planet / Tamed by terror.” This is, first of all, an obviously accurate prediction of life in 2019. It’s also one all the more terrifying in the familiar wax and wane of alliteration, for it calls forth the beating heart of English itself, and while not sounding like that stock medieval herdsman it still harkens back to the primordial hymns of our tongue, of Cædmon strumming his lute while he pops antidepressants and checks Facebook for the 500th time that day.

Such experiments as The Age of Anxiety have been at best understood as literary affectations, or at worst declarations of nativist Anglophilia (as with Pound). Kinzie complains that alliteration draws “attention away from what words mean to hint at what are often the…irrational similarities in their sounds,” though I’d argue that that describes all rhetorical devices, from metaphor to metonymy, catachresis to chiasmus. She writes that alliteration appears “towards the self-conscious end of the continuum of diction,” but it’s precisely self-awareness of medium that makes language into poetry. Hirsch writes that “Alliteration can reinforce preexisting meanings…and establish effective new ones,” which Kinzie interprets as mere “fondness for gnomic utterance.” But that’s the prodigious brilliance of alliteration–its uniquely English oracular quality. As with all verse, it’s these “gnomic utterances” that are birthed from the pregnant potential of our particular words, and alliteration’s naturalism is what makes it so apt for this purpose in our language. Prosody at its most affecting incongruously marries the realism of speech with the defamiliarization that announces poetry as artifice, and that’s why alliteration is among the most haunting of our aural ghosts. Poetry is of the throat before it is of the brain, and alliteration is the common well-spring of English, sounding neither as contrived nor as straitjacketed as rhyme. As such, why not embrace alliterative meter as more than just a gimmick; why not draw attention to that aural quality of our language that is our common ownership? Our tongues already talk in alliteration, so let us once again proclaim our poems in alliteration, let us declare our dreams in it.

Binding the Ghost: On the Physicality of Literature

“Homer on parchment pages! / The Iliad and all the adventures / Of Ulysses, foe of Priam’s kingdom! / All locked within a piece of skin / Folded into several little sheets!” —Martial, Epigrammata (c. 86-103)

“A good book is the precious life-blood of a master spirit, embalmed and treasured up on purpose to a life beyond life.” —John Milton, Areopagitica (1644)

At Piazza Maurizio Bufalini 1 in Cesena, Italy, there is a stately sandstone building of buttressed reading rooms, Venetian windows, and extravagant masonry that holds slightly under a half-million volumes, including manuscripts, codices, incunabula, and print. Commissioned by Malatesta Novello in the 15th century, the Malatestiana Library opened its intricately carved walnut door to readers in 1454, at the height of the Italian Renaissance. The nobleman who funded the library had his architects borrow from ecclesiastical design: the columns of its rooms evoke temples, its seats the pews that would later line cathedrals, its high ceilings the vaults of monasteries.

Committed humanist that he was, Novello organized the volumes of his collection through an idiosyncratic system of classification that owed more to the occultism of Neo-Platonist philosophers like Marsilio Ficino, who wrote in nearby Florence, or Giovanni Pico della Mirandola, who would be born shortly after its opening, than to the arid categorization of something like our contemporary Dewey Decimal System. For those aforementioned philosophers, microcosm and macrocosm were forever nestled into and reflecting one another across the long line of the great chain of being, and so Novello’s library was organized in a manner that evoked the connections of both the human mind in contemplation as well as the universe that was to be contemplated itself. Such is the sanctuary described by Matthew Battles in Library: An Unquiet History, where a reader can lift a book and test its heft, can appraise “the fall of letterforms on the title page, scrutinizing marks left by other readers … startled into a recognition of the world’s materiality by the sheer number of bound volumes; by the sound of pages turning, covers rubbing; by the rank smell of books gathered together in vast numbers.”

An awkward-looking yet somehow still elegant carved elephant serves as the keystone above one door’s lintel, and it supplies the modern library’s logo. Perhaps the elephant is a descendant of one of Hannibal’s pachyderms that thundered over the Alps more than 15 centuries before, or maybe the grandfather of Hanno, Pope Leo X’s pet—gifted to him by the King of Portugal—who would make the Vatican his home some six decades later. Like the Renaissance German painter Albrecht Dürer’s celebrated engraving of a rhinoceros, the exotic and distant elephant speaks to the concerns of this institution—curiosity, cosmopolitanism, and commonwealth.

It’s the last quality that makes the Malatestiana Library so significant. There were libraries that celebrated curiosity before, like the one at Alexandria, whose scholars demanded that the original of every book brought to port be deposited within while a reproduction was returned to the owner. And there were collections that embodied cosmopolitanism, such as that in the Villa of the Papyri, owned by Lucius Calpurnius Piso Caesoninus, the uncle of Julius Caesar, which excavators discovered in the ash of Herculaneum, and which included sophisticated philosophical and poetic treatises by Epicurus and the Stoic Chrysippus. But what made the Malatestiana so remarkable wasn’t its collections per se (though they are remarkable), but rather that it was built not for the singular benefit of the Malatesta family, nor for a religious community, and that, unlike in monastic libraries, its books were not held in place by a heavy chain. The Bibliotheca Malatestiana would be the first of a type—a library for the public.

If the Malatestiana was to be like a map of the human mind, then it would be an open-source mind, a collective brain to which we’d all be invited as individual cells. Novello amended the utopian promise of complete knowledge as embodied by Alexandria into something wholly more democratic. Now, not only would an assemblage of humanity’s curiosity be gathered into one temple, but that palace would be as a commonwealth for the betterment of all citizens. From that hilly Romagnol town you can draw a line of descent to the Library Company of Philadelphia founded by Benjamin Franklin, the annotated works of Plato and John Locke owned by Thomas Jefferson and housed in a glass cube at the Library of Congress, the reading rooms of the British Museum where Karl Marx penned Das Kapital (that collection having since moved closer to King’s Cross Station), the Boston Public Library in Copley Square with its chiseled names of local worthies like Ralph Waldo Emerson and Henry David Thoreau ringing its colonnade, and the regal stone lions that stand guard on Fifth Avenue in front of the Main Branch of the New York Public Library.

More importantly, the Malatestiana is the progenitor of millions of local public libraries from Bombay to Budapest. In the United States, the public library arguably endures as one of the last truly democratic institutions. In libraries there are not just the books collectively owned by a community, but also the toy exchanges for children, the book clubs and discussion groups, the 12-step meetings in basements, and the respite from winter cold for the indigent. For all of their varied purposes, and even under the ascendant reign of modern technology, the library is still focused on the idea of the book. Sometimes the techno-utopians malign the concerns of us partisans of the physical book as being merely a species of fetishism, the desire to turn crinkled pages labeled an affectation; the pleasure drawn from the heft of a hardback dismissed as misplaced nostalgia. Yet there are indomitably pragmatic defenses of the book as physical object—now more than ever.

For one, a physical book is safe from the Orwellian deletions of Amazon and the electronic surveillance of the NSA. A physical book, in being unconnected to the internet, can serve as a closed-off monastery, sheltered from the distraction and dwindling attention span engendered by push notifications and smartphone apps. The book as object allows for a true degree of interiority, of genuine privacy that cannot be ensured on any electronic device. To penetrate the sovereignty of the Kingdom of the Book requires the lo-fi method of looking over a reader’s shoulder. A physical book is inviolate in the face of a power outage, and it cannot short-circuit. There is no rainbow pinwheel of death when you open a book.

But if I can cop to some of what the critics of us Luddites impugn us with, there is something crucial about the weight of a book. So much does depend on a cracked spine and a coffee-stained page. There is an “incarnational poetics” to the very physical reality of a book that can’t be replicated on a greasy touch-screen. John Milton wrote in his 1644 Areopagitica, still one of the most potent defenses of free speech ever written, that “books are not absolutely dead things, but do contain a potency of life in them to be as active as that soul whose progeny they are.” This is not simply metaphor; in some sense we must understand books as being alive, and just as it’s impossible to extricate the soul of a person from their very sinews and nerves, bones, and flesh, so too can we not divorce the text from the smooth sheen of vellum, the warp and weft of paper, the glow of the screen. Geoffrey Chaucer or William Shakespeare must be interpreted differently depending on how they’re read. The medium, to echo media theorist Marshall McLuhan, has always very much been the message.

This embodied poetics is, by its sheer sensual physicality, directly related to the commonwealth that is the library. Battles argues that “the experience of the physicality of the book is strongest in large libraries”; stand before the glass cube at the center of the British Library, the stacks upon stacks in Harvard’s Widener Library, or the domed portico of the Library of Congress and tell me any differently. In sharing books that have been read by hundreds before, we’re privy to other minds in a communal manner, from the barely erased penciled marginalia in a beaten copy of The Merchant of Venice to the dog-ears in Leaves of Grass.

What I wish to sing of then is the physicality of the book, its immanence, its embodiment, its very incarnational poetics. Writing about these “contraptions of paper, ink, cardboard, and glue,” Keith Houston in The Book: A Cover-to-Cover Exploration of the Most Powerful Object of Our Time challenges us to grab the closest volume and to “Open it and hear the rustle of paper and the crackle of glue. Smell it! Flip through the pages and feel the breeze on your face.” The exquisite physicality of matter defines the arid abstractions of this thing we call “Literature,” even as we forget the basic fact that writing may originate in the brain and may be uttered by the larynx, but it’s preserved on clay, papyrus, paper, and patterns of electrons. In 20th-century literary theory we’ve taken to calling anything written a “text,” which endlessly confuses our students, who themselves are prone to call anything printed a “novel” (regardless of whether or not it’s fictional). The text, however, is a ghost. Literature is the spookiest of arts, leaving not the Ozymandian monuments of architectural ruins but words grooved into the very electric synapses of our squishy brains.

Not just our brains though, for Gilgamesh is dried in the rich, baked soil of the Euphrates; Socrates’s denunciation of the written word from Plato’s Phaedrus was wrapped in the fibrous reeds grown alongside the Nile; Beowulf forever slaughters Grendel upon the taut, tanned skin of some English lamb; Prospero contemplates his magic books among the rendered rags of Renaissance paper pressed into the quarto of The Tempest; and Emily Dickinson’s scraps of envelope, from the wood pulp of trees grown in the Berkshires, forever entomb her divine dashes. Ask a cuneiform scholar, a papyrologist, a codicologist, a bibliographer. The spirit is strong, but so is the flesh; books can never be separated from the circumstances of those bodies that house their souls. In A History of Reading, Alberto Manguel confesses as much, writing that “I judge a book by its cover; I judge a book by its shape.”

Perhaps this seems an obvious contention, and the analysis of material conditions, from the economics of printing and distribution to the physical properties of the book as an object, has been a mainstay of some literary study for the past two generations. This is as it should be, for a history of literature could be written not in titles and authors, but from the mediums on which that literature was preserved, from the clay tablets of Mesopotamia to the copper filaments and fiber optic cables that convey the internet. Grappling with the physicality of the latest medium is particularly important, because we’ve been able to delude ourselves into thinking that there is something purely unembodied about electronic literature, falling into that Cartesian delusion that strictly separates the mind from the flesh.

Such a clean divorce was impossible in earthier times. Examine the smooth vellum of a medieval manuscript, and notice the occasional small hairs from the slaughtered animals that still cling to William Langland’s Piers Plowman or Dante’s The Divine Comedy. Houston explains that “a sheet of parchment is the end product of a bloody, protracted, and physical process that begins with the death of a calf, lamb, or kid, and proceeds thereafter through a series of grimly anatomical steps until parchment emerges at the other end,” where holding one of these volumes up to the light can sometimes reveal “the delicate tracery of veins—which, if the animal was not properly bled upon its slaughter, are darker and more obvious.” It’s important to remember the sacred reality that all of medieval literature that survives is but the stained flesh of dead animals.

Nor did the arrival of Johannes Gutenberg’s printing press make writing any less physical, even if it was less bloody. Medieval literature was born from the marriage of flesh and stain, but early modern writing was culled from the fusion of paper, ink, and metal. John Man describes in The Gutenberg Revolution: How Printing Changed the Course of History how the eponymous inventor had to “use linseed oil, soot and amber as basic ingredients” in the composition of ink, where the “oil for the varnish had to be of just the right consistency,” and the soot used in its composition “was best derived from burnt oil and resin,” having had to be “degreased by careful roasting.” Battles writes in Palimpsest: A History of the Written Word that printing is a trade that bears the “marks of the metalsmith, the punch cutter, the machinist.” The Bible may be the word of God, but Gutenberg printed it onto stripped and rendered rags with type “at 82 percent lead, with tin making up a further 9 percent, the soft, metallic element antimony 6 percent, and trace amounts of copper among the remainder,” as Houston reminds us. Scripture preached of heaven, but it was made possible through the very minerals of the earth.

Medieval scriptoriums were dominated by scribes, calligraphers, and clerics; Gutenberg was none of these, but rather a member of the goldsmiths’ guild. His innovation was one that we can ascribe as a victory to that abstract realm of literature, but fundamentally it was derived from the metallurgical knowledge of how to “combine the supple softness of lead with the durability of tin,” as Battles writes, a process that allowed him to forge the letter matrices that fit into his movable-type printing press. We may think of the hand-written manuscripts of medieval monasteries as expressing a certain uniqueness, but physicality was just as preserved in the printed book, and, as Battles writes, in “letters carved in wood or punched and chased in silver, embroidered in tapestry and needlepoint, wrought in iron and worked into paintings, a world in which words are things.”

We’d do well not to separate the embodied poetics of this thing we’ve elected to call the text from a proper interpretation of said text. Books are not written by angels in a medium of pure spirit; they’re recorded upon wood pulp, and we should remember that. The 17th-century philosopher René Descartes claimed that the spirit interacted with the body through the pineal gland, the “principal seat of the soul.” Books of course have no pineal gland, but we act as if text is a thing of pure spirit, excluding it from the gritty matter upon which it’s actually constituted. Now more than ever we see the internet as a disembodied realm, the heaven promised by theologians but delivered by Silicon Valley. Our libraries are now composed of ghosts in the machine. Houston reminds us that this is an illusion, for even as you read this article on your phone, recall that it is delivered by “copper wire and fiber optics, solder and silicon, and the farther ends of the electromagnetic spectrum.”

Far from disenchanting the spooky theurgy of literature, an embrace of the materiality of reading and writing only illuminates how powerful this strange art is. By staring at a gradation of light upon dark in abstracted symbols, upon whatever medium it is recorded, an individual is capable of hallucinating the most exquisite visions; they are even able to experience the subjectivity of another person’s mind. The medieval English librarian Richard de Bury wrote in his 14th-century Philobiblon that “In books I find the dead as if they were alive … All things are corrupted and decay in time; Saturn ceases not to devour the children that he generates; all the glory of the world would be buried in oblivion, unless God had provided mortals with the remedy of books.”

If books are marked by their materiality, then they in turn mark us; literature is “contrived to take up space in the head and in the world of things,” as Battles writes. The neuroplasticity of our minds is set by the words that we read, our fingers cut from turned pages and our eyes strained from looking at screens. We are made of words as much as words are preserved on things; we’re as those Egyptian mummies who were swaddled in papyrus inscribed with lost works of Plato and Euripides; we’re as the figure in the Italian Renaissance painter Giuseppe Arcimboldo’s 1566 The Librarian, perhaps inspired by those stacks of the Malatestiana. In that uncanny and beautiful portrait Arcimboldo presents an anatomy built from a pile of books, the skin of his figure the tanned red and green leather of a volume’s cover, the cacophony of hair a quarto whose pages are falling open. In the rough materiality of the book we see our very bodies reflected back to us, in the skin of the cover, the organs of the pages, the blood of ink. Be forewarned: to read a book as separate from the physicality that defines it is to scarcely read at all.


A Year in Reading: Ed Simon

For my first ever Year in Reading at The Millions, I will only be featuring books that I checked out from the local public library in my sleepy Massachusetts town a few miles north of the Red Line’s terminus. Constructed in 1892 and modeled after the Renaissance Palazzo della Cancelleria in Rome, this sandstone building has become a regular part of the itinerary on my way back from Stop ‘n Shop. The library has a resplendent mahogany reading room, its edges lined with framed 17th-century drawings and its back walls decorated with an incongruous painting of Napoleon’s ill-fated Russian campaign and a North African souk scene, all oranges and lemons in the sun. This room contains all of the new novels that come through the library, and after moving to Massachusetts and getting my card I made it a point to come every other week, and to take out more books than I had time to read.
I will not be considering books that I bought at the Harvard Co-Op or Grolier Poetry Bookshop, which, without the deadline of a due date, tend to pile up next to my chair where they get chewed on by my French bulldog puppy. Nor will I write about books which I’ve taught these past two semesters, or of which I published appraisals, benefiting from the generosity of publishers’ review copies. I’m also excluding non-fiction, preferring for the duration of this essay to focus entirely on the novel as the most exquisite vehicle for immersing ourselves in empathetic interiority yet devised by humans. And while there were seemingly endless books which I dipped into, reread portions of, skimmed, and started without finishing, holding to Francis Bacon’s contention in my beloved 17th century that “Some books are to be tasted… some books are to be read only in parts, others to be read, but not curiously,” I’ve rather chosen only to highlight those which the philosopher would have categorized as books that are “to be swallowed… to be chewed and digested.” Looking over the detritus of that complete year in reading, and examining that which was digested as a sort of literary coprologist, I’ve noticed certain traces of things consumed – namely novels of politics and horror, of imagination and immortality, of education and identity.


Campus novels are my comfort fiction; I take an embarrassing enjoyment in reading about people superficially like myself, proving the adage that there is nothing as consoling as our own narcissism. By my estimation the twin triumphs of the genre are my fellow Pittsburgher Michael Chabon’s Wonder Boys and John Williams’s Stoner, the latter of which remains, alongside F. Scott Fitzgerald’s The Great Gatsby, among the most perfect examples of 20th-century American prose, where not even a comma is misplaced. While nothing quite reached those heights, the campus novels which I did read reminded me of why I love the genre so much – the excruciating personal politics, the combustible interactions between widely divergent personalities, and the barest intimations that the Ivory Tower is supposed to (and sometimes does) point to things transcendent and eternal.

Regarding that last, utopian quality of what we hope higher education is supposed to do, I recently read Lan Samantha Chang’s All is Forgotten, Nothing is Lost. Chang directs the esteemed University of Iowa Writers’ Workshop, and her slender novel follows the literary careers of the poets who all trained together in the graduate seminar of Miranda Sturgis at fictional Bonneville College. Chang uses the characters of Bernard Sauvet and Roman Morris to interrogate how careerism, aesthetics, and competition all factor into something as seemingly rarefied as poetry. Roman has far more professional success, but is always haunted by the aridness of his verse; his is an abstraction polished to an immaculate sheen, but lacking in human feeling. Bernard, however, is a variety of earnest, celibate, very-serious-young-man with an affection for High Church Catholicism that Chang presents with precise verisimilitude, and who toils monastically in the production of an epic poem about the North American Jesuit martyrs. It’s a strange, quick read that risks falling into allegory, but never does.

A very different campus novel was Francine Prose’s Blue Angel, which details over the course of one semester a brief affair between creative writing professor Ted Swenson and his talented, if troubled, student Angela Argo. Intergenerational infidelity is one of the most hackneyed themes of the campus novel, and Prose’s narrative threatens to spill into the territory of David Mamet’s Oleanna. A lesser writer could have turned Blue Angel, which is loosely based on Josef von Sternberg’s 1930 film classic, into a conservative, scolding denunciation of gender politics; the twist being that it’s a woman who’s delivering invective against the movement towards greater accountability concerning sexual harassment. No doubt the novel must read very differently after #MeToo, but the text itself doesn’t evidence the sympathy for Ted of which some critics might accuse Prose. As a character, Ted is nearer to Vladimir Nabokov’s Humbert Humbert from Lolita, albeit less charming. When read as the account of an unreliable narrator, Blue Angel isn’t a satire of feminist piety, but to the contrary an exploration of Ted’s ability to rationalize and obfuscate, most crucially to himself.

Ryan McIlvain’s novel The Radicals is only superficially a campus novel; its main characters Eli and Sam are both graduate students at NYU, but the author’s actual subject is how political extremism can justify all manner of things which we’d never think ourselves capable of, even murder. Reflecting back on the first day they really connected (at that most David Foster Wallace of pastimes – a tennis game), Eli says of Sam “I couldn’t have known I was standing across the net from a murderer, and neither could he,” which I imagine would be the sort of thing you’d remember when reflecting on the halcyon days of an activist group that turned deadly. McIlvain’s prose is minimalist in a manner that I’m traditionally not attracted to, but which in The Radicals he imbues with a sense of elegant parsimony. The politics of The Radicals are weirdly hermetically sealed, lower Manhattan during the early Obama years serving more as a set piece for McIlvain to perform a thought experiment on the psychology of insular, extreme groups. Sam, initially the less committed of the two, though we’re given indications of his character during a disturbing road-rage incident in the opening pages of the book, ultimately becomes the leader of an anarchist cell that emerges out of a movement which seems similar to Occupy Wall Street. As the group stalks through the Westchester estate of an executive implicated in the ’08 financial crash, we’re presented with a riveting account of how ideology can quickly veer into the cultish.
There is an elegiac quality to McIlvain’s novel, a sort of eulogy for Occupy, though of course the actual movement never fizzled out in a spasm of violence as The Radicals depicts. A more all-encompassing portrait of American politics in our current moment is Nathan Hill’s The Nix (2016). Hill’s book is a doorstopper, and for that and other reasons it has aptly drawn comparisons to the heaviest of Thomas Pynchon’s novels. The Nix follows the story of another ill-fated creative writing instructor, the unfortunately named Samuel Andresen-Anderson, though unlike Prose’s protagonist his vice isn’t sleeping with his students, but an addiction to a World of Warcraft-type video game. Samuel is only one of dozens of characters in the book, including his ’60s radical mother who is in legal trouble for throwing rocks in Chicago’s Grant Park at a right-wing presidential candidate who evokes Roy Moore, his entitled student who functions as a millennial stereotype that somehow avoids being overly cliché, the musical prodigy of his youth whom he still pines for, her Iraq War veteran brother, and even the interior monologues of Allen Ginsberg and Hubert Humphrey. Hill’s most immaculate creation is the trickster-god of a book agent Guy Periwinkle, a mercurial, amoral, nihilistic Svengali who reads as an incarnation of the era of Twitter and Facebook.

The narrative threads are so many, so complicated, and so interrelated that it’s difficult to succinctly explain what The Nix is about, but to give a sense of its asynchronous scope, the novel ranges from Norway on the eve of World War II, the stultifying conformity of ’60s Iowa, the ’68 Democratic National Convention (and the subsequent protests), suburban Illinois in the ’80s, New York during the anti-war protests of 2003, as well as the Iraq War, and the imagined alternative universe of 2016. Its concerns include political polarization, the trauma that family can inflict across generations, the neoliberal university, and video-game addiction. Few novels capture America as it is right now with as much emotional accuracy as The Nix, but it’s all there – the rage, the vertigo, the exhaustion. Of course, haunting the pages of The Nix is a certain Fifth Avenue resident, who is never mentioned, but is very much the embodiment of our garbage era. More than that, Hill performs an excavation of the long arc of our contemporary history, and the scenes with Samuel’s mother in ’68 draw a direct connection between those events of a half-century ago and today, so that the real ghost which permeates the novel is less the mythical Norwegian sprite that gives the book its title than that other “Nix” whose presidency set the template for a corrupt, compromised, polarized, spiteful, and hateful age.

Adam Haslett’s Union Atlantic covers similar political and economic ground to The Radicals and The Nix, though as channeled through the mini-drama between upwardly mobile, self-made banker Doug Fanning and his new neighbor, the retired schoolteacher Charlotte Graves. Union Atlantic follows Charlotte’s war of attrition against both Doug and the McMansion that he’s building in their tony Boston suburb. There is something almost Victorian about Haslett’s concerns; Doug’s journey from being raised by an alcoholic single mother in Southie to becoming a millionaire banker living in a Belmont-like suburb has a bit of the Horatio Alger bootstrap story about it, save for the fact that Doug never rises to the same heights of sympathy. Haslett portrays the contradictions of Massachusetts with admirable accuracy – the liberalism and the wealth, the Catholic city and the Protestant suburbs, the working class and the Boston Brahmins. As a nice magical realist touch, Charlotte is in the process of losing her mind, hearing her dogs speak to her in the voices of Cotton Mather and Malcolm X. I couldn’t help but be charmed by a dog who sputters invective in the tongue of the colonial Puritan theologian, saying things like “You dwell in Memory like some Perversity of the Flesh. A sin against the gift of Creation it is to harp on the dead while the living still suffer.”
A chilling evocation of those themes of sin and memory is supplied by Nick Laird in Modern Gods, though not without a bit of melancholic Irish wit. Laird provides a novel in two parts; the first concerns the wedding of Allison Donnelly to a man whom she later discovers was involved with Ulster loyalist paramilitaries in an act of spectacularly horrific violence during the Troubles, the second her anthropologist sister Liz’s trip to the appropriately named New Ulster in Papua New Guinea, where she is involved in a BBC documentary about the emergence of a cargo cult competing against the American evangelical missionaries who’re trying to convert the natives. Laird’s focus is on the horrors of sectarian violence, and the faith which justifies those acts. He could be writing of either the cargo cult, the evangelical missionaries, or the Ulster Protestants when he describes the “imagery of sacrifice and offering, memorials and altars … disguised as just the opposite, a sanctuary from materialism… a marketplace of cold transactions.” Laird’s most sympathetic (and disturbing) character is the cult leader herself, a native named Belef (just “belief” with the “I” taken out…) who appears as a character out of Joseph Conrad, and whose air of cold malice is as characteristic and as evocative of old Ulster as it is of new.
Cults from The Radicals to Modern Gods are very much on authors’ minds in our season of violent political rallies and epistemological anarchy, and so they’re a concern as well in Naomi Alderman’s science fiction parable The Power, where we see the emergence of a religion in opposition to the machinations of the patriarchy. The novel is part of a tradition of feminist dystopian science fiction that finds its modern genesis in Margaret Atwood’s classic The Handmaid’s Tale (that author, not for nothing, prominently blurbed The Power). Alderman imagines an alternate world in which women are suddenly endowed with a physical strength that completely upends traditional gender roles, causing radical shifts in power from eastern Europe to Saudi Arabia, the Midwest to London. Alderman writes with narrative panache, moving rapidly between various intertwined plots and across wildly divergent voices, including that of the abused foster girl Allie who becomes the leader of the new faith and christens herself Mother Eve; Roxie, the daughter of a Cockney-Jewish gangster; an American politician named Margot Cleary and her daughter Jocelyn; a Nigerian journalist named Tunde (who is the only major male character in the novel); and the Melania-like first lady of Moldova, Tatiana Moskalev, who offs her piggish husband and establishes a sanctuary for women in her former country. The Power is a thought-provoking book, and one with some exquisite moments of emotional Schadenfreude, as when newly self-liberated women riot against repressive regimes in places like Riyadh, and yet it’s not a particularly hopeful book, as the new order begins to replicate the worst excesses of the old.

The Power is only one book in our current renaissance of feminist science fiction, written in large part as a response to the rank misogyny and anti-woman policies of our nation’s current regime. In The Guardian, Vanessa Thorpe explains that this is a “matching literary revolution,” which sees a new “breed of women’s ‘speculative’ fiction, positing altered sexual and social hierarchies.” Louise Erdrich provides one such example in her Future Home of the Living God, which reads as a sort of cracked, post-apocalyptic nativity tale. In a premise like that of P.D. James’s The Children of Men, though without the implied reactionary politics, Erdrich presents the diary of Cedar Hawk Songmaker, college student and the adopted Ojibwe daughter of crunchy, upper-middle-class Minnesota liberals. Cedar Hawk finds herself pregnant during an autumn when it seems as if evolution itself has started to reverse, as all manner of primeval beings hatch from eggs and a theocratic government gestates in reaction to the ecological collapse. Erdrich remains one of our consummate prose stylists, and Cedar Hawk is an immaculate creation (in several different ways). A precocious and intelligent student, Cedar Hawk is a Catholic convert who grapples with women’s spirituality, and Erdrich presents a book that is both Catholic and vehemently pro-choice (while also understanding that to be pro-choice isn’t to be anti-pregnancy).

Genre fiction is perhaps the best way to explore our current moment, when the “Current Affairs” and “Science Fiction” sections are increasingly indistinguishable. Erdrich and Alderman write in a tradition of literary speculative fiction which recalls recent work by Atwood, Chabon, Philip Roth, Cormac McCarthy, and Jim Crace, but old-fashioned hard science fiction with all of its intricate world-building never loses its charms. Sam Miller provides just that in his infectiously enjoyable Blackfish City, which follows the intertwined stories of several characters living in a floating, mechanical city above the Arctic Circle in an early 22nd century ravaged by climate change. Despite hard science fiction’s reputation for being all about asteroid-mining colonies and silvery faster-than-light starships, the reality is that from Samuel Delany to Octavia Butler, science fiction has always been more daring in how it approaches questions of race and gender than conservative literary fiction can be. Miller’s novel provides a detailed, fascinating account of how the geothermally powered city (which is operated by a consortium of Thai and Swedish companies) actually works, but his thematic concerns include economic stratification, deregulation, global warming, and gender fluidity. That, and he has depicted neuro-connected animal familiars that communicate with their human partners, including a polar bear and an orca. So, there’s that!


Science fiction isn’t the only genre attuned to our neoliberal, late capitalist, ascendant fascistic hell-scape – there’s also horror, of course. Paul Tremblay offers a visceral, thrilling, and disturbing account of a home invasion/hostage situation in his horror pastoral The Cabin at the End of the World, which makes fantastic use of narrative ambiguity in rewriting the often-overplayed apocalyptic genre. One of the scariest novels I read in the past year was Hari Kunzru’s postmodern gothic White Tears. The strange ghost tale has been discussed as if it were a simple parody of white hipster culture’s appropriation of black music, and yet White Tears grapples with America’s racial history in a manner that evokes both William Faulkner and Toni Morrison. Kunzru’s story follows the fraught friendship of Seth and Carter, who share a love of lo-fi Mississippi Delta blues music, both listening to and producing songs as an act of musical obsessiveness worthy of R. Crumb. Carter crafts a faux Robert Johnson-style number attributed to an invented musician he christens “Charlie Shaw,” based on a recording of random, diegetic patter between two men playing chess in Washington Square Park which Seth picks up on one of his forays through New York to preserve ambient sound. The two discover that the fictional bluesman might be more real than they suppose.

The complexities and contradictions of American culture are also explored in Paul La Farge’s The Night Ocean, which though perhaps not a horror novel itself is still a loving homage to the weird fiction of H.P. Lovecraft. La Farge’s novel is an endlessly recursive frame-tale which follows a series of inter-nestled narratives ranging from the (fictional) homosexual relationship of Lovecraft with a young Floridian named Robert Barlow, to New York author Charlie Willett’s obsession with finding a lost pornographic work of the master himself, which is of course titled The Erotonomicon. Along the way the reader confronts questions of artifice and authenticity, as well as a consideration of the darker reaches of Lovecraft’s brilliant, if bigoted, soul. La Farge moves across a century of history, and from the horror author’s native Providence to Mexico City on Dia de los Muertos, from northern Ontario to the Upper West Side, with a cameo appearance from Beat novelist William S. Burroughs. La Farge’s novel isn’t quite weird fiction itself, but he writes with an awareness that Lovecraft’s cold, chthonic, unfeeling, anarchic, nihilistic stories of meaninglessness are as apt an approach to our contemporary moment as any, where Cthulhu’s tentacles reach further than we’d care to admit and the Great Old Ones always threaten to devour us. Facing the uncertainties of terrifying push notifications, reflect on the master himself, who wrote that the “oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown.”
La Farge’s narrative progresses Zelig-like through 20th-century literary history, its story encompassing fictionalized accounts of the intersection of both experimental and genre writing. I’ve always been drawn to the picaresque, delighted by the appearance of historical figures as they arrive briefly in a story. Matt Haig’s masterful How to Stop Time has plenty of cameos in the life of its main character Tom Hazard, from William Shakespeare and Captain Cook to Zelda and F. Scott Fitzgerald. Tom isn’t quite an immortal, but in all the ways that matter he nearly is. Haig describes an entire secret fraternity of incredibly old people called the “Albatross Society” who vampire-like scurry about the margins of history. A Huguenot refugee who comes of age in Elizabethan England, Tom yearns to find the missing daughter of his dead wife, the girl a near-immortal like himself. Haig’s is a risky gambit, jumping from the 16th century to the 21st, yet he performs the job admirably, and as somebody who cashes checks from writing about the Tudor era, I can attest to the accurate feel of the Renaissance scenes in the book. Word is that a film adaptation is on the way, starring Benedict Cumberbatch (predictably), but more than even its cinematic action about secret societies and historical personages, How to Stop Time offers an estimably human reflection on what it means to grow old, and to lose people along the way.

As the nights grow darker and the temperature drops, the distant beginning of the year seems paradoxically closer, the months folding back in on themselves as the Earth reaches the same location in its annual circuit around our sun. January’s reading seems more recent to me than those summer beach indulgences when I got sand from Manchester-by-the-Sea in the creases of my library books, and so I end like an Ouroboros biting its own tale with the first book of 2018 which I read: Paul Kingsnorth’s enigmatic fable Beast. Founder of the Dark Mountain Project, which encourages artists and writers to grapple with what they see as an approaching climate apocalypse, Kingsnorth has been writing increasingly avant-garde prose in reaction to our inevitable demise. His main (and only) character Edward Buckmaster seems to be the same protagonist from his earlier novel The Wake, albeit that earlier novel takes place in the Dark Ages and is written in an Anglo-Saxon patois that is as beautiful as it is tedious, while Beast by all indications is broadly contemporary in its setting.
I’m unsure as to whether they’re the same character, or if Edward is to be understood as the reincarnation of his namesake, but both novels share a minimalist, elemental sensibility where prose and narrative are stripped to their bare essentials. Beast follows the surreal ruminations of Edward as he phases in and out of consciousness in a cottage on the English moors, in a landscape uninhabited by people, while he both stalks and is stalked by some sort of fantastic creature. The nature of the animal is unclear – is it a big cat? A wolf? Something else? And the setting is bizarrely wild, if not post-apocalyptic feeling, when compared to the reality of the urbanized English countryside. Beast is as if Jack London’s The Call of the Wild were rewritten by Albert Camus. It’s the sort of “Man vs. Nature” plot that I always want to like and rarely do – save for this time, when I very much did enjoy Kingsnorth’s strange allegory. At least it feels like an allegory, though the nature of its implications is hard to interpret. Proffering a hypothesis, I will say that reading Beast, where boredom threaded by a dull anxiety is occasionally punctuated by moments of horror, is as succinct an experiential encapsulation of 2018 as any.


God Among the Letters: An Essay in Abecedarian

“When they ask what [God’s] name is, what shall I tell them?” —Exodus 3:13

“Language is only the instrument of science, and words are but the signs of ideas.” —Dr. Johnson, A Dictionary of the English Language (1755)
1.
Attar of Nishapur, the 12th-century Persian Sufi, wrote of a pilgrimage of birds. His masterpiece The Conference of the Birds recounts how 30 fowls were led by a tufted, orange hoopoe (wisest of his kind) to find the Simurgh, a type of bird-god or king. So holy is the hoopoe that the bismillah is etched onto his beak as encouragement to his fellow feathered penitents. From Persia the birds travel to China, in search of the Simurgh, a gigantic eagle-like creature with the face of a man (or sometimes a dog) who has lived for millennia, possesses all knowledge, and like the Phoenix has been immolated only to rise again.

In the birds’ desire to see the Simurgh, we understand how we should yearn for Allah: “Do all you can to become a bird of the Way to God; / Do all you can to develop your wings and your feathers,” Attar writes. An esoteric truth is revealed to the loyal hawk, the romantic nightingale, the resplendent peacock, and the stalwart stork. There is no Simurgh awaiting them in some hidden paradise, for the creature’s name is itself a Farsi pun on the phrase “30 birds.” Attar writes that “All things are but masks at God’s beck and call, / They are symbols that instruct us that God is all.” There is no God but us, and we are our own prophets.

As a dream vision, The Conference of the Birds appears to be borderline atheistic, but only if you’re oblivious to how such mysticism is actually God-intoxicated. And as with all mystical literature, there is (purposefully) something hard to comprehend, though Attar offers a clue on interpretation when he writes that “The shadow and its maker are one and the same, / so get over surfaces and delve into mysteries.” Equivalence of shadow and maker—it’s a moving understanding of what writing is as well, where the very products of our creation are intimations of our souls. My approach to these mysteries, plumbing past the surfaces of appearance, comes through an illustration of the epic’s themes in the characteristic Islamic medium of calligraphy. Alongside the intricate miniatures which defined Persian art, there developed a tradition whereby ingenious calligraphers would present Arabic or Persian script in artful arrangements, so that whole sentences would compose the illusion of a representational picture.

One such image is nothing but the word “Simurgh” itself, yet the way in which the artist has configured letters like the ascending alif, horizontal jim, rounded dhal, and complex hamzah presents the appearance of a bird rearing with regal countenance—all feather, claw, and beak. A beautiful evocation of Attar’s very lesson itself, for as the avian penitents learn that there is no Simurgh save for their collective body, so, too, do we see that the illusion of the picture we’re presented with is simply an arrangement of letters.

A pithy demonstration of the paradox of literature as well. If the Simurgh of The Conference of the Birds is simply composed by the fowl themselves, and if the image of the calligrapher’s art is constituted by letters, might there be a lesson that divinity itself is constructed in the latter way? Just as each bird is part of the Simurgh, may each letter be part of God? For though images had been banned, they still can’t help but arise out of these abstracted letters, these symbols imbued with a fiery life. Little wonder that incantations are conveyed through words and that we’re warned not to take the Lord’s name in vain, for it’s letters that both define and give life. A certain conclusion is unassailable: God is an alphabet—God is the alphabet.

2.
“Bereshit” is the word by which Genesis is inaugurated, and it’s from that word that the name of the book derives in its original language. No text more explicitly deals with the generative powers of speech than Genesis, and in seeing the Torah as both product of and vehicle for God’s creation, we get closer to the sacredness of the Alphabet. Bereshit begins with the second letter of the Hebrew alphabet—bet—which looks like this: ב. Something about the shape of the abstracted letter reminds me of a tree with a branch hanging out at an angle, appropriate when we consider the subject of the book.

There’s something unusual in the first letter of the Torah being bet, for why would the word of God not begin with Her first letter, Aleph? Medieval kabbalists, adept in numerology, had an answer: It was to indicate that reality has two levels—the physical and the spiritual, or as Attar called them, the surfaces and the mysteries. But if the surface of the sheep vellum which constitutes a physical Torah is one thing, the actual reality of the letter is another. A deeper truth is conveyed by the mystery of letters themselves, the way in which abstract symbols can make us hallucinate voices in our heads, the way in which entire worlds of imagination can be constructed by dyeing the skin of dead animals black with ink.

We talk ourselves out of magic too easily, especially since literacy itself is evidence of it. That language is sacred should be an obvious truth. Even as the old verities of holiness are discarded, the unassailable fact that language has a magic is intuited at the level of an eye scanning a page and building universes from nothingness. Jewish sages believed that the alphabet preceded that initial Bereshit; indeed, it was a requirement that letters existed before creation, for how would God’s accomplishment of the latter even be possible without Her access to the former? As the kabbalistic book Sefer Yetsira explains: “Twenty-two letters did [God] engrave and carve, he weighed them and moved them around into different combinations. Through them, he created the soul of every living being and the soul of every word.”

3.
Chiseled onto the sandy-red shoulder of a sphinx found at Serabit el-Khadim in the Sinai Peninsula is evidence of the alphabet’s origins that is almost as evocative as the story told in the Sefer Yetsira. As enigmatic as her cousins at Giza or Thebes, the Sinai sphinx is a votive in honor of the Egyptian goddess Hathor, guardian of the desert, and she who protected the turquoise mines which dotted the peninsula and operated for close to eight centuries, producing wealth for distant Pharaohs. The Serabit el-Khadim sphinx is only a little under 24 centimeters, more than diminutive enough to find her new home in a British Museum cabinet. Excavated in 1904 by Flinders Petrie, founder of Egyptology as a discipline, and his wife Hilda, the little Hathor lioness lay in wait for perhaps 3,800 years, the graffiti etched into her side attesting to alphabetic origins.

The sphinx was carved by laborers whose language was a Semitic tongue closely related to Hebrew (and indeed some have connected the inscription to the Exodus narrative). In Alpha Beta: How 26 Letters Shaped the Western World, John Man describes how these “Twelve marks suggest links between Egyptian writing and later Semitic letters,” for though what’s recorded at Serabit el-Khadim are glyphs like “an ox-head, an eye, a house, a snake, and water,” what is found on the haunches of Hathor’s sphinx are the abstracted “roots of our own a, b, v, u, m, p, w, and t.” By 1916, Alan Gardiner had used the decipherable Egyptian hieroglyphic inscription between the sphinx’s breasts, which read “Beloved of Hathor, Lady of the Turquoise,” to translate the 11 marks on her side, making this one of the earliest examples of a script called “Proto-Sinaitic,” the most ancient instance of alphabetic writing ever to be found. Gardiner hypothesized that this was an alphabetic letter system, either a form of simplified pidgin Egyptian used by the administrators or a simplified system invented by the workers. By simplifying the process of communication, the alphabet served a pragmatic purpose, but its implications rank it among the most paradigm-shifting inventions in history.

In In the Beginning: A Short History of the Hebrew Language, Joel M. Hoffman explains that if it’s “easier to learn the tens of hundreds of symbols required for [a] syllabic system than it is to learn the thousands required for a purely logographic system,” then consonantal systems (as both proto-Sinaitic and Hebrew are) are easier still, as these systems “generally require fewer than 30 symbols.” Vowels may be the souls of words, but consonants are their bodies. The former awaited both the Greek alphabet and the diacritical marks of Masoretic Hebrew, but the skeletons of our alphabet were already recorded in homage to the goddess Hathor.

Man writes that three features mark the alphabet as crucial in the history of communication: “its uniqueness, its simplicity and its adaptability.” Perhaps even more importantly, where pictograms are complicated, they’re also indelibly wed to the tongue which first uttered them, whereas alphabets can “with some pushing and shoving, be adapted to all languages.” The alphabet, a Semitic invention born from Egyptian materials for practical ends, “proved wildly successful,” as Hoffman writes, with proto-Sinaitic developing into the Phoenician alphabet and then the Hebrew, which was “used as the basis for the Greek and Latin alphabets, which, in turn, along with Hebrew itself, were destined to form the basis for almost all the world’s alphabets.” Birthed from parsimony, proto-Sinaitic would become the vehicle through which abstraction could be spread. Still, the blurred edges of our letters proclaim their origin in pictures—the prostrate penitent worshipping prayerfully in an “E”; in an “S,” the slithering of the snake who caused the fall.

4.
Every single major alphabetic system, save for Korean Hangul developed in the 15th century, can trace its origins back to this scratching on a sphinx. The Phoenicians, a people who spoke a Semitic language, developed one of the first proper alphabets. Michael Rosen, in Alphabetical: How Every Letter Tells a Story, explains that the Phoenicians “used abstract versions of objects to indicate letters: a bifurcated (horned?) sign was an ‘ox’ (in their language ‘aleph’), and on down through the words for ‘house,’ ‘stick,’ ‘door’ and ‘shout’ up to ‘tooth’ and ‘mark.’” The alphabet is universal, applicable in any cultural setting, and yet the immediate context of its creation is of sailors and turquoise miners living in the Bronze Age.

An epiphany occurred when some turquoise miner abstracted the intricate pictures of Egyptian hieroglyphics and used them not for ideas but for units of sound. The sea-faring Phoenicians, clad in their Tyrian purple cloth dyed from the mucus of clams, would disseminate the alphabet around Mediterranean ports. It’s the origin of elegant Hebrew, which God used when he struck letters of fire into the tablets at Sinai; the genesis of Arabic’s fluid letters by which Allah dictated the Qur’an. The Greeks adapted the Phoenicians’ invention (as they acknowledged) into a script in which the oral poems of Homer could finally be recorded; the death-obsessed Etruscans whose tongue we still can’t hear appropriated the symbols of Punic sailors, as did the Romans who would stamp those letters on triumphant monuments throughout Europe and Africa in so enduring a way that you’re still reading them now. Languid Ge’ez in Ethiopian gospels, blocky Aramaic written in the tongue of Christ, Brahmic scripts which preserved Dharmic prayers, the mysterious Ogham of Irish druids, the bird-scratch runes of the Norsemen, the stolid Cyrillic of the Czars, all derive from that initial alphabet. Even Sequoyah’s 19th-century Cherokee, though a syllabary and not technically an alphabet, draws several of its symbols from a Latin that can ultimately be traced back to the mines of Serabit el-Khadim.

Matthew Battles, in Palimpsest: A History of the Written Word, writes how this “great chain of alphabetical evolution collapses in a welter of characters, glyphs, and symbols, mingling in friendly, familial and even erotic enthusiasms of conversant meaning.” We sense familiarity across this family tree of alphabetical systems, how in an English “A” we see the Greek α, or how Hebrew ח evokes the Greek η. But as the French rabbi Marc-Alain Ouaknin explains in The Mysteries of the Alphabet, all of our letters were ultimately adapted by the ancient Canaanites from Egyptian pictures, for before there was an “A” there was the head of an ox, before there was “H” there was an enclosure. Ouaknin writes that the “history of meaning is the history of forgetting the image, the history of a suppression of the visible.” In the beginning there was not the word, but rather the image.

5.
During the 17th century, the German Jesuit polymath Athanasius Kircher was bedeviled by the question of how image and word negotiated over dominion in the Kingdom of Meaning. Kircher is an exemplar of the Renaissance; born not quite in time for the Enlightenment, he was fluent in conjecture rather than proof, esoterica rather than science, wonder rather than reason. His was the epistemology not of the laboratory, but of the Wunderkammer. In The Alphabetic Labyrinth: The Alphabet in History and Imagination, art historian Johanna Drucker writes that Kircher’s studies included the “structure of the subterranean world of underground rivers, volcanic lava flow and caves, an exhaustive text on all extant devices for producing light,” and most importantly “compendia of information on China, [and] Egypt.”

Kircher is both the first Egyptologist and the first Sinologist, even as his conclusions about both subjects would be proven completely inaccurate in almost all of their details. His 1655 Oedipus Aegyptiacus was both an attempt to decipher the enigmatic symbols on papyri and monuments, as well as a “restoration of the hieroglyphic doctrine,” the secret Hermetic knowledge which the priest associated with the ancients. He concurred with the ancient Neo-Platonist Plotinus, who in his Enneads claimed that the Egyptians did not use letters “which represent sounds and words; instead they use designs of images, each of which stands for a distinct thing … Every incised sign is thus, at once, knowledge, wisdom, a real entity captured in one stroke.” Kircher thus “translated” an inscription on a 2-millennia-old obelisk which sat in the Villa Celimontana in Rome, explaining that the hieroglyphs should read as “His minister and faithful attendant, the polymorphous Spirit, shows the abundance and wealth of all necessary things.” Not a single word is accurate.

For Kircher, what made both hieroglyphics and Chinese writing so evocative was that they got as close to unmediated reality as possible, that they were not mere depiction, but essence. In The Search for the Perfect Language, Umberto Eco explains that Kircher’s enthusiasms were mistaken, because his “assumption that every hieroglyph was an ideogram … was an assumption which doomed his enterprise at the outset,” for contrary to his presupposition, neither Mandarin nor ancient Egyptian operated like some sort of baroque rebus.

Still, Kircher’s was a contention that “hieroglyphs all showed something about the natural world,” as Eco writes. Pictograms were as a window unto the world; fallen letters were simply scratches in the sand. Where Kircher and others faltered was in letting abstraction obscure the concreteness of the alphabet. If you flip an “A” upside down, do you not see the horns of the ox which that letter originally signified? If you turn a “B” on its side, do you not see the rooms of a house? Or in the curvature of a “C” that of the camel’s hump?

6.
Iconoclasm explains much of our amnesia about the iconic origins of our letters, but it’s also that which gives the alphabet much of its power. Imagery has been the nucleus of human expression since the first Cro-Magnon woman blew red ochre from her engorged cheeks onto the cave wall at Lascaux so as to trace the outline of her hand. But the shift from pictographic writing to alphabetic inaugurated the reign of abstraction whereby the imagistic forebears of our letters had to be forgotten. Marc-Alain Ouaknin explains that “Behind each of the letters with which we are so familiar lies a history, changes, mutations based on one or more original forms.”

Since Gardiner’s translation of Serabit el-Khadim, there have been a few dozen similar abecedariums found at sites mostly in the Sinai. From those sparse examples, scholars trace the morphology of letters back to their originals, when they emerged from that primordial soup of imagery, their original meanings now obscured. From our Latin letters we move back to the indecipherable Etruscan, from those northern Italians we trace to the Greeks, and then the purple-clad Phoenicians, finally arriving at the ancient Semites who crafted the alphabet, finding that our letters are not a, b, and c, nor alpha, beta, and gamma, nor even Aleph, Bet, and Gimmel, but rather their original pictures—an ox, a house, and a camel.

Philologists and classicists have identified all of the images from which the 26 letters derive. In proto-Sinaitic, “D” was originally a door. If you flip an “E” on its side you see the arms outstretched above the head of a man in prayer. “I” was originally a hand; the wavy line of “M” still looks like the wave of water which it originally was. “R” still has at its top the head above a body which it originally signified; “U” still looks like that which an oar was placed upon in a boat. Kircher thought that hieroglyphics were perfect pictures of the real world, but hidden within our own alphabet, spirited away from the courts of Egypt, are the ghostly after-images of the originals.

7.
The alphabet spread something more than mere convenience—it spread monotheism. Man argues that the “evolution of the belief in a single god was dependent on an ability to record that belief and make it accessible; and that both recording and accessibility were dependent on the invention of the alphabet.” God made the alphabet possible, and it would seem that the alphabet returned the favor. What first had to be forgotten, however, was the meaning of the letters’ original shapes, for in pictograms there lay the risk of idolatry, of conjuring those old gods who birthed them.

At Mt. Sinai, the Lord supposedly used fire to emblazon Moses’ tablets with his commandments, the second of which demands that none shall make any “likeness that is in heaven above, or that is in the earth beneath, or that is in the water under the earth.” When writing those letters God couldn’t very well use ones that happened to look like a man, or an ox, or a camel’s hump. Ouaknin conjectures that “iconoclasm required the Jews to purge proto-Sinaitic of images,” for the “birth of the modern alphabet created from abstract characters is linked to the revelation and the receiving of the law.” The rabbi argues that it was “Under the influence of monotheistic expression [that] hieroglyphics began to shed some of its images, resulting in the first attempt of an alphabet.” The accessible abstractions of the alphabet were not a fortuitous coincidence, but rather a demand of the Mosaic covenant, since the newly monotheistic Jews couldn’t worship God if the letters of their writing system evoked falcon-headed Horus, the jackal Anubis, or baboon-faced Thoth with stylus in hand. Man writes that “both new god and new script worked together to forge a new nation and disseminate an idea that would change the world.”

A skeptic may observe that the alphabet hardly caused an immediate rash of conversions to monotheism in Greece, Rome, or the north country, as Zeus, Jupiter, and Tyr still reigned amongst their respective peoples. Yet alphabetic writing’s emergence occurred right before a period which the German philosopher Karl Jaspers called “the Axial Age.” Jaspers observed that in the first millennium before the Common Era, radically disparate cultures produced, with surprising synchronicity, new ways of understanding reality that nonetheless shared some unifying similarities.

Monotheism in the Levant, Greek philosophy, Persian Zoroastrianism, and the Indian Upanishads can all be traced to the Axial Age. For Jaspers, a paradigm shift in consciousness resulted in abstraction. What all of these different methods, approaches, and faiths shared was an enshrinement of the universal over the particular, of the reality which is unseen over the shadows on the cave wall. In The Origin and Goal of History, Jaspers describes the Axial Age as “an interregnum … a pause for liberty, a deep breath bringing the most lucid consciousness.”

Jaspers noted the simultaneous emergence of these faiths, but proffered no full hypothesis as to why. I wonder if the abstractions of the alphabet were not what incubated the Axial Age. In Moses and Monotheism, Sigmund Freud claimed that this “compulsion to worship a God whom one cannot see … meant that a sensory perception was given second place to what may be called an abstract idea—a triumph of intellectuality over sensuality.” This triumph of abstraction included not just the prophets Isaiah and Elijah, but the philosophers Parmenides and Heraclitus, and the sages Siddhartha and Zarathustra, all of whose words were made eternal in the alphabet.

From the Aegean to the Indus River, the common thread of the Axial Age was alphabetic writing, with the one major exception being China. In The Alphabet Versus the Goddess: The Conflict Between Word and Image, Leonard Shlain observed that the rise of phonetic letters coincided with the disappearance of idol worship in the Levant, writing that the “abstract alphabet encouraged abstract thinking,” a progeny born from the curve and line of the Word. Yet old gods can always be born again, their voices barely heard, yet still present in sacred phoneme, their faces peeking out in the spaces between our letters.

8.
In the Babylonian desert, excavators frequently find small bowls, ringed with Aramaic and designed to capture demons. Molded by magi, the demon bowls are a trap, a harnessing of the magical efficacy of the alphabet. These talismans combined word and image to tame the malignant lesser gods who still stalked the earth, even after God’s supposed victory.

It is appropriate that God’s alphabet is that which is able to constrain in clay the machinations of erotic Lilith and bestial Asmodeus. One such bowl, which depicts the succubus Lilith at its center as an alluring woman with long hair barely obscuring breasts and genitalia, warns of “60 men who will capture you with copper ropes on your feet and copper shackles on your hands and cast collars of copper upon your temples.” Israeli scholar Naama Vilozny is an expert on the images of demons painted on these bowls by otherwise iconoclastic Jews. In Haaretz, Vilozny says that you “draw the figure you want to get rid of and then you bind it in a depiction and bind it in words.” There is control in the alphabet, not just in trapping demons, but in the ability to capture a concept’s essence. Such is the theurgic power of writing, where curses against hell are as strong as baked clay.

Magic and monotheism need not be strictly separated; a sense of paganism haunts our faith as well as our letters. The psychologist Julian Jaynes, in his The Origin of Consciousness in the Breakdown of the Bicameral Mind, posited a controversial hypothesis that human beings became “conscious” only relatively recently, shortly before the Axial Age. The alphabet perhaps played a role in this development, theoretically eliminating the other gods in favor of the one voice of God, the only voice in your head. But Jaynes explains that the “mind is still haunted by its old unconscious ways; it broods on lost authorities.” This is certainly true when a frightened Babylonian places a bowl in the earth to capture those chthonic spirits which threaten us even though their dominion has been abolished.

The alphabet facilitated a new magic. Consider that the third commandment, which reads “Thou shalt not take the name of the Lord thy God in vain,” is not an injunction against blasphemy in the modern sense, for surely the omnipotent can abide obscenity, but in historical context it specifically meant that you shouldn’t use God’s name to perform magic. To know the letters of someone’s name is to have the ability to control them; there’s a reason that the “angel” with whom Jacob wrestles refuses to be named. The four Hebrew letters which constitute the proper name of God—יהוה—are commonly referred to as the Tetragrammaton, there being no clear sense of how exactly the word would have been pronounced.

These letters have a charged power, no mere ink-stain on sheep-skin, for the correct pronunciation was guarded as an occult secret. Hoffman writes that the letters were “chosen not because of the sounds they represent, but because of their symbolic powers in that they were the Hebrews’ magic vowel letters that no other culture had.” The yod, hay, vov, hay of the Tetragrammaton demonstrated both the victory of monotheism and the electric power of the alphabet itself. God was encoded into the very name, which in turn was the blueprint for our reality. A dangerous thing, these letters, for just as demons could be controlled with their names painted onto the rough surface of a bowl, so, too, could the most adept of mages compel the Creator to do their bidding.

9.
Incantation is sometimes called prayer, other times poetry, and occasionally the alphabet can substitute for both. As acrostic, alphabetic possibilities have long attracted poets. In A Poet’s Glossary, Edward Hirsch writes about “abecedarians,” that is, verses where each line begins with a successive letter of the alphabet. As all formal poetry does, the form exploits artificial constraint—in this circumstance, so as to meditate upon the alphabet itself. This is an “ancient form often employed for sacred works”; Hirsch explains how all of the “acrostics in the Hebrew Bible are alphabetical, such as Psalm 119, which consists of twenty-two eight-line stanzas, one for each letter of the Hebrew alphabet.” The “completeness of the form,” Hirsch writes, “enacts the idea of total devotion to the law of God.”

St. Augustine, the fourth-century Christian theologian, wrote an abecedarian against the Donatist heretics; nearly a millennium later, Chaucer tried his hand at the form as well. Centuries after that, the English journalist Alaric Watts wrote his account of the 1789 Hapsburg Siege of Belgrade in alliterative abecedarian: “An Austrian army, awfully arrayed, / Boldly by battery besieged Belgrade. / Cossack commanders cannonading come, / Dealing destruction’s devastating doom.” There are, to the best of my knowledge, no major examples of abecedarian prose. Perhaps somebody will write something soon? Because as Hirsch notes, the form has “powerful associations with prayer,” the rapturous repetition of the alphabet stripping meaning to its bare essence, emptying both penitence and supplication of ego, in favor of the ecstasies of pure sound.
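(An aside for the programmatically inclined: the constraint Hirsch describes is mechanical enough that a machine can check it. What follows is a minimal sketch in Python of my own devising, nothing drawn from Hirsch’s glossary; the function name and the test lines are merely illustrative. Each line must begin with the next letter of whatever alphabet you supply, so a 22-letter string would test a Hebrew-style acrostic like Psalm 119’s stanza heads.)

    import string

    def is_abecedarian(lines, alphabet=string.ascii_lowercase):
        # True only when line i begins with letter i of the given alphabet.
        if len(lines) != len(alphabet):
            return False
        return all(line.strip().lower().startswith(letter)
                   for line, letter in zip(lines, alphabet))

    # Watts's alliterative opening satisfies the constraint for "abcd":
    watts = ["An Austrian army, awfully arrayed,",
             "Boldly by battery besieged Belgrade.",
             "Cossack commanders cannonading come,",
             "Dealing destruction's devastating doom."]
    print(is_abecedarian(watts, alphabet="abcd"))  # prints: True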

Such was the wisdom of the Baal Shem Tov, founder of Hasidism, who was inspired by the ecstasies of Pietists to return worship to its emotional core. He sought to strip ritual of empty logic and to re-endow it with that lost sense of the glowing sacred. Sometimes prayer need not even be in words; the sacred letters themselves function well enough. The Baal Shem Tov’s honorific means “Master of the Good Name”; he who has brought within the very sinews of his flesh and the synapses of his mind the pulsating power of the Tetragrammaton. So much can depend on four letters.

The Baal Shem Tov, or “Besht” as he was often called, lived in the Pale of Settlement, the cold, grey Galician countryside. Drucker writes that the Besht exhorted the “practicing Jew to make of daily life a continual practice of devotion,” whereby “each of the letters which pass one’s lips are ascendant and unite with each other, carrying with them the full glory.” The Besht taught that letters were not incidental; the alphabet itself was necessary for “true unification with the Divinity.”

According to Hasidic legend, one Yom Kippur the Besht led his congregation in their prayers. Towards the back of the synagogue was a simple-minded but pious shepherd boy. The other worshipers, with fingers pressing prayer books open, repeated the words of the Kol Nidre, but the illiterate shepherd could only pretend to mouth along, to follow writing which he could not read. Emotions became rapturous as black-coated men below and women in the balcony above began to sway and shout out the prayers. Finally, overcome with devotion but unable to repeat after the rest of his fellow Jews, the shepherd boy shouted out the only prayer he could: “Aleph. Bet. Gimmel. Daleth …” through the rest of the 22 Hebrew letters.

There was an awkward silence in the sanctuary. Embarrassed, the young man explained, “God, that is all I can do. You know what your prayers are. Please arrange them into the correct order.”

From the rafters of the shul, decorated with Hebrew letters in blocky black ink, came the very voice of God, leading the entire congregation in the holiest of prayers, repeated from that of the simple shepherd: “Aleph. Bet. Gimmel. Daleth …” And so, in the court of the Baal Shem Tov, in the early 18th century in a synagogue upon the Galician plain, God deigned to teach women and men how to worship once again, in the holiest prayer that there is. The alphabet, repeated truthfully with faith in your soul, is the purest form of prayer.

10.
Alphabets are under-theorized. Because it’s so omnipresent, there is a way in which it’s easy to forget the spooky power of 26 symbols. Considering how fundamental to basic functioning it is, we frequently overlook the sheer, transcendent magnificence of the letters which structure our world. Disenchantment, however, need not be our lot, for there is a realization that letters don’t convey reality, but rather that they are reality. Ecstatic it is to comprehend how stains on dead trees are the conduit through which all meaning traverses, much like the electrons illuminating our screens. Fundamentally, what I’m arguing for is not just that our alphabet is a means of approaching the divine—no, not just that. God is the alphabet, and the alphabet is God. Heaven is traversed through the alpha and the omega. I argue that the alphabet betrays its origins, for word and image are joined together in symbiosis, no matter how occluded.

Just as Kircher believed hieroglyphics contained reality, so, too, is the alphabet haunted by pictures obscure; as Ouaknin enthuses, it’s “unearthing the traces of the origin of letters and understanding how they evolved” that provides occult wisdom. Knowing that letters shift back and forth, so that they can return to the images which birthed them, as in the calligraphy which illustrates Attar’s Simurgh, is a demonstration of their fluid nature. Literal though we may misapprehend Egyptian pictograms to be, their abstract progeny in the form of our 26 letters are still haunted by their origins, and we can imbue them with a sense of their birthright now and again.

Moreover, the mysteries of the alphabet subconsciously affect us, for as Battles claims concerning letters, “whether alphabetic or ideographic, they start out as pictures of things,” the better to explain “why writing works for us, and why it has conserved these signs so well over these three millennia.” Nevertheless, the haunting of letters’ past shapes can’t alone explain their strange power. Only something divine can fully explicate how some marks on Hathor’s hide chart a direct line to the letters you’re reading right now. Perhaps “divine” is a loaded term, what with all of those unfortunate religious connotations; “transcendent” would be just as apt. Questions can certainly be raised about my contentions; I do not wish to be read as airy, but with every letter of my sentences I can’t help but believe that the kabbalists and Gnostics were right—the alphabet constitutes our being.

Reality, I believe, can be completely constituted from all 26 letters (give or take). Sift through all of them, and realize that the answer to any question lies between Aleph and Tav, and not just as metaphor. The answer to any inquiry, the solution to any problem, the very wisdom that frees, can be discovered simply by finding the correct arrangement of those letters. Underneath the surface of these shapes are indications of their birth, but also that fuller reality just beyond our gaze. Vexation need not follow such an observation; rather, embrace the endless transition between image and word which is the alphabet. We need not pick between letter and picture; there is room enough for both. Xenoglossic is what we should be: fluent in a language unknown to our tongues but spoken in our souls. You need only repeat the alphabet as if you’re an illiterate shepherd in the assembly of the Baal Shem Tov. Zealots of the alphabet, with those very letters carved by fire into our hearts.

Image: Temple of Hathor remains in Serabit el-Khadim by Einsamer Schütze

Veil of Shadows: On Jewish Trauma, Place, and American Anti-Semitism

A little less than 50 miles from Krakow, at the confluence of the Vistula and Sola rivers, there is a town of slightly under 50,000 inhabitants named Oświęcim. The official tourist site for Oświęcim describes the city as being “attractive and friendly.” Image searches bring up Victorian buildings the color of yellow-frosted wedding cakes, and modest public fountains; red-tiled homes and blue-steepled churches. There is a castle and a newly built hockey arena. Oświęcim would be simply another Polish town on a map, all diacriticals and consonants, were it not for its more famous German name—Auschwitz.

Every day people wake up under goose-feather duvets, go to work in fluorescent-lit offices, buy pierogis and kielbasa, prepare halupki, and go to bed in Auschwitz. Women and men are born in Auschwitz, live in Auschwitz, marry, make love, and raise children in Auschwitz. People walk schnauzers and retrievers in Auschwitz—every day. Here, at the null point of humanity, in the shadow of that factory of death, people live normal lives. Amidst an empire of fire, ash, and Zyklon B. A mechanized, industrial hell on earth derived its name from this town. Theodor Adorno opined in his 1951 “Cultural Criticism and Society” that “To write poetry after Auschwitz is barbaric.” Here, in the dark presence of gas chamber and crematorium, there are no doubt women and men who pass their time reading Czeslaw Milosz or Wislawa Szymborska, oblivious to the philosopher’s injunction.

If you are to read that observation as optimism—that even in the midst of such trauma, such horror and evil, the music still plays—then you misunderstand me. Nor am I condemning those who live in Oświęcim, who’ve had no choice in being born there. Their lives are not an affront; I do not impute to them some assumed lack of respect concerning this absence-haunted place. Theirs is an existence amidst the enormity of a sacred stillness, a quiet that can only ever result from tremendous horror. The lives of Oświęcim’s citizens are simply lives like any other. Whatever the ethics of poetry after Auschwitz, the fact remains that there can’t help but still be verse—and waking, and working, and sleeping, and living. This has nothing to do with the perseverance of life in the face of unspeakable trauma; rather, it’s to understand that quotidian existence simply continues after Auschwitz—that very rupture in the space-time continuum of what it means to be human—because there is no other choice.

In Auschwitz we understand a bit about how the gravitational pull of trauma warps and alters space and place, and a true consideration of that singularity must also admit how demon-haunted other corners of our fallen world are, how blood-splattered and ghost-inflected the very Earth itself is. What makes Auschwitz such an incomparable evil is not that it’s so very different from the rest of the world, but that it is even more like the world than all of the rest of the world already is. In that perverse way, Auschwitz is the most truthful of places.

Judaism’s genius is that it understands how trauma permeates place. Auschwitz may be the exemplar for this praxis of suffering, but Jewish history is arguably a recounting of cyclical hatreds, all the way back to Pharaoh. So elemental and irrational a hatred as anti-Semitism lies deep within the metaphysic of the West, seemingly in the marrow of its bones and imbibed with mother’s milk, so much so that anyone truly surprised by its resurgence is either disingenuous or not paying attention. The Tanakh is a litany of those who’ve tried to destroy the Jews—the Egyptians, the Assyrians, the Babylonians—and history would add the Romans to that roll. Such is this basic narrative recurrence that it almost makes one concur that there is something to the concept of chosenness, but as the old Jewish joke goes, “Couldn’t G-d choose someone else sometime?”

But if Judaism is a recounting of trauma (and perseverance in spite of said trauma), it’s also a religion of place, and of what it means to live separate from particular places. The Tanakh itself recounts exile as the human condition. Before the MS St. Louis was turned back from Havana, from Miami, from Halifax and returned to the dark heart of Nazi Germany; before millions of Jews boarded Hamburg ships that were New York-bound; before the survivors of Czarist pogroms found succor in Hapsburg lands; before the expelled Sephardim of the Reconquista; before the Romans burned Jerusalem during Simon bar-Kokhba’s failed rebellion, and before the destruction of the second Temple; before the attempted genocide of Haman in Persia, and before the Babylonian Exile of the Judeans; before even the Assyrians scattered the 10 tribes of the Israelites; exile was at the core of the Jewish experience. The earliest reference to Israel is the Merneptah Stele, chiseled in Egypt some 12 centuries before Jesus Christ, predating the oldest extant scripture. There, at the bottom of an account of Pharaonic victories against adversaries, some nameless Egyptian scribe wrote, “Israel is laid waste and his seed is not.” The first reference to the Hebrews, then, is a claim that there are no longer any Hebrews.

Exile and diaspora are the twin curses and gifts of the Jews; exile an individual condition and diaspora a collective one. This is the story of Abraham going into Egypt, of Moses being a “stranger in a strange land,” of being by the “rivers of Babylon, there we sat down, yea, we wept when we remembered Zion.” Even the earliest story of Genesis is one of exile, of being kept from the Garden by the flaming swords of the cherubim with their eyeball-covered wings and their fiery tongues of perverse ecstasy. Even now that a state which claims to speak on behalf of and in defense of Jewry governs from an undivided Holy City (with all the attendant geopolitical ramifications), the traditional Pesach injunction remains “Next year in Jerusalem!”, for there is a wisdom in understanding that the spoken Jerusalem is not the real Jerusalem.

Such is the itinerary of Ahasuerus, the so-called “Wandering Jew” of Christian folklore: an immortal from the lifetime of Christ condemned to wander the world until the Second Coming. Yet a sense of dislocation, of fallenness, should be central to all understandings of what it means to be human. Judaism merely keeps that awareness front and center. One should always have one’s bags packed, since one never knows when one might suddenly have to leave. Exile is, of course, intimately tied to the idea of place, for to be an exile is to acknowledge that there is a place in which you feel you should be, but are not.

Such a condition only exists if there is an acknowledged home to which you are no longer privy. Useful here is the distinction humanistic geographers draw between “place” and “space.” Far from being synonyms, the two are distinguished by the geographer Yi-Fu Tuan, who in his classic Space and Place: The Perspective of Experience explains that “‘Space’ is more abstract than ‘place.’ What begins as undifferentiated space becomes place as we begin to know it better and endow it with value.” It’s hard to ever build a place when you’re always on the road, though—when your bag always needs to be packed at a moment’s notice.

Part of what Jewish history teaches us is the immense difficulty of actually turning space into place. The horrors experienced over millennia are a genealogy of how trauma can transform place and space back and forth into each other. A dusty alleyway in the shadow of Herod’s Temple can be a place where one cooks lentils in olive oil and drinks wine from earthen pots, but that same place can very quickly be transformed into abstract space once it’s been violated by the violence that sees family members’ blood spilled on those same dusty streets.

If trauma is the crucible that can transform place into space, then the exile which results from that trauma counterintuitively transforms space back into place. When one is a wandering Jew with no country, then one is forced to make the whole world into one’s country. Such is the true origin of humanism, of the Lebanese poet Kahlil Gibran’s contention that “The universe is my country and the human family is my tribe,” or the American radical Thomas Paine’s mantra that “The world is my country, all mankind are my brethren, to do good is my religion,” though neither man was himself Jewish. Such perspective is the true gift of chosenness.

Zion becomes that which you carry within you. Again, the spoken Jerusalem cannot be the true Jerusalem. This embrace of diaspora is an embrace of humanism, one that engendered suspicion in stupid little anti-Semites like Joseph Stalin, who slurred the Jews as “rootless cosmopolitans,” not understanding that there is the most solemn strength in that very rootlessness. Today, the inheritors of that brutal myopia use the word “globalist” instead, but the same rank provincialism is on display. Judaism’s cosmopolitanism was born from trauma, for in the biblical age the faith was supremely concrete, the locus of worship projected onto the few square miles around the Temple Mount. Yet the destruction of that sanctuary necessitated that a new Temple be found, one built in text and inscribed in memory and carried from place to new place. What results is a type of abstraction, if not the very invention of abstraction. God no longer dwells in the Holy of Holies, but rather in the scroll of the Torah, in the imagination itself.

Literary critic George Steiner has identified a hatred of abstraction as the ultimate origin of anti-Semitism. In his contribution to Berel Lang’s anthology Writing and the Holocaust, Steiner argues that people “fear most those who demand of us a self-transcendence, a surpassing of our natural and common limits of being. Our hate and fear are the more intense precisely because we know the absolute rightness, the ultimate desirability of the demand.” Across his career, in novels like The Portage to San Cristobal of A.H., and in books such as In Bluebeard’s Castle, Steiner has claimed that it’s precisely Judaism’s humanistic abstraction that engenders such perennial, if irrational, anti-Semitism.

In the latter book, he claims that there are three dispensations, “Monotheism at Sinai, primitive Christianity, [and] messianic socialism,” in which Western culture was presented with “the claims of the ideal.” Steiner argues that these are “three stages, profoundly interrelated, through which Western consciousness is forced to experience the blackmail of transcendence.” Western culture has been presented with three totalizing abstractions of Judaic origin, abstractions born from the traumas of dislocation that reject the idolatrous specificity of place in favor of the universalism of space. These three covenants are represented by Moses, Christ, and Karl Marx, and Steiner sees in the rejection of the idealized utopian promise that each figure represents the origin of this pernicious and enduring hatred.

For Steiner, anti-Semitism is at the very core of the Western metaphysic, irreducible to other varieties of white supremacy. It is telling that the fascism which so often directs its rage against Jews is of the “Blut und Boden” variety, the “Blood and Soil” mythos that elevates a few miles of land and the superficial phenotypical commonalities between arbitrarily linked groups of people into an idol. These are naive faiths that turn ooze and mud into the locus of belief, rejecting the rootlessness which praises the Temple that is all of creation. When such fascist rhetoric rears up again, it’s no surprise to see a resurgence of that primordial bigotry, for those who speak of blood and soil have no compunctions about staining the latter with the former.

For American Jews, this has historically been more difficult to see. Historian Lila Corwin Berman asks in The Washington Post whether we should ever have “believed in American exceptionalism, even just for Jews, when all around us was evidence of the limitations and ravages of that exceptionalism?” Berman asks an important question, one which gets to the heart of a paradoxical and complicated issue. America has long been imagined as a New Israel, even a New Eden, where our national civic religion is a strangely Hebraic branch of heretical Protestantism. Rhetoric from the 17th-century Puritan divine John Winthrop, as delivered aboard the ship Arbella, has long been enshrined in American consciousness: that we shall be as a “city on a hill.” Though Winthrop was alluding to the Book of Matthew, American faith is often written in that Hebraic idiom, where the “New World” is dreamt of as a “land of milk and honey,” a place where the New Jerusalem may dwell and where history is brushed aside in the regenerative millennialism of the continent itself.

In Exile and Kingdom: History and Apocalypse in the Puritan Migration to America, the Israeli historian Avihu Zakai explains that there were two biblical templates for early American understandings of colonization: “the Genesis type, which is a peaceful religious migration based … upon God’s promise to his chosen nation that he will appoint a place for them,” and the “Exodus type, which is a judgmental crisis and apocalyptic migration, marking the ultimate necessity of God’s chosen people to depart.” Zakai has argued that those models organized American self-understanding, with the initial migration of Puritans to America imagined as the Jews in the wilderness, and the push to the western frontier as a version of the Hebrews coming into Canaan. This subconscious philo-Semitism, which appropriates scriptural narrative and idiom, is arguably what makes this nation’s experience regarding the Jews so different from that of Europe, and it goes some way toward explaining the national “exceptionality” that Berman rightly interrogates.

Christendom has historically defined itself as that which is not Jewish, yet that particular metaphysic is not foregrounded in American self-definition. If anything, the Jewish narrative as transposed onto American experience was an inoculation against the sort of anti-Semitism that defined Jewish life in Europe. Despite the anti-Semitism that marks the nation’s history, alongside every other form of race hatred and bigotry, this was still the country where President George Washington could write with justified celebration to the Jews of Newport, Rhode Island, in 1790 that the “children of the stock of Abraham who dwell in this land [will] continue to merit and enjoy the good will of the other inhabitants—while everyone shall sit in safety under his own vine and fig tree and there shall be none to make him afraid.” Washington’s language consciously echoed that of the prophet Micah, where the spoken Jerusalem would be uttered in an American tongue.

America as New Zion, however, has its own calamities encoded within it, for divorced from the moorings of the faith that inaugurated it, the model remains dangerously embraced: the myth of America as an empty, promised land awaiting settlement. In his classic study Virgin Land: The American West as Symbol and Myth, Henry Nash Smith wrote that one of the “most persistent generalizations … is the notion that our society has been shaped by the pull of a vacant continent drawing population westward through the passes of the Alleghenies, across the Mississippi Valley, over the high plains and mountains of the Far West to the Pacific Coast.” The tragedy was that the land was far from vacant, and place would come to be defined in the Alleghenies, the Mississippi Valley, the Far West, and the Pacific Coast through a type of amnesia similar to that which allows the citizens of Oświęcim to buy their groceries and go to work every day.

Anti-Semitism may not have been the central organizing metaphysic of America, but the loathsome and genocidal ethos of what the historian Richard Drinnon termed “the metaphysics of Indian-hating” was. Colonization was not a simple process of transforming abstracted space into place as settlers burnt a line across North America all the way to the Pacific; rather, it was an exercise in trauma—in genocide and ethnic cleansing. We may ask ourselves how it is that the people of Oświęcim can live their lives in the shadow of a death factory, and yet in America we do a near equivalent. My own charming little corner of Massachusetts was witness to the almost gothic horror of the 17th-century Pequot War and of King Philip’s War, which remain, per capita, among the most violent conflicts in American history—we live our lives on top of those mass graves. Historian Timothy Snyder, in Black Earth: The Holocaust as History and Warning, explains how Adolf Hitler’s expansionist and eliminationist nightmare of Lebensraum was directly inspired by American Manifest Destiny, writing that for the dictator “the exemplary land empire was the United States of America,” for this country’s example “led Hitler to the American dream.” The current president may similarly speak of himself as a descendant of those who “tamed a continent,” but never forget that those settlers wrote their scriptures in blood.

Any uncomplicated celebration of how America has been good for the Jews must keep that aforementioned metaphysic in mind. So much of the mythopoesis of America holds that this was always a land of refuge, the resting place of the Mother of Exiles. As true as some particulars of that myth may be, Lady Liberty’s torch can obscure as much as it illuminates, for it would be very dangerous to pretend that America’s shores are a place where history has somehow stopped. Philo-Semitism can easily curdle into its near twin, and the American metaphysic is not so distant from the Christendom that birthed it. Anti-Semitism has re-emerged as poisonous fascist ideologies thrive from Budapest to Brasilia. Only the profoundly near-sighted could pretend that America—especially at this current moment—is immune from hatred of the “rootless cosmopolitans.” That hatred has been with us from the first arrival of Sephardic conversos in New Amsterdam in the 17th century until today, and the worst pogrom in American history happened last month in Pittsburgh, a half mile from where I grew up. As my fellow Pittsburgher Jacob Bacharach wrote in Truthdig following the Tree of Life massacre, “they are coming for Jews, for my people, coming for us again.”

The corner of Wilkins and Shady is a few blocks from where I went to elementary school; it’s where I waited for the 74A when I was too lazy to continue my walk home from the stores in Squirrel Hill; it’s across the street from Chatham’s campus where I went to summer camp. This is a place that I love, and continue to love, and now it is the site of the worst pogrom in American Jewry’s four-century history. We ground ourselves in place, but there is always the threat of it being converted back into space, so better to carry those Jerusalems in our hearts. I draw inspiration from the sacred condition of exile, from the undefined ideology of Diasporism. Just as it’s impossible and still necessary to write over the trauma of place with a liturgy of mundane life, so, too, do I often see my own identity as that of an exile within exile, distant Jewish roots defining me as such in the eyes of the anti-Semite. I’m supremely cognizant of Jean-Paul Sartre’s observation in Anti-Semite and Jew that “the Jew is one whom other men consider a Jew.” If we go by that definition, then I can show you a litany of emails in response to my political writings in which the increasingly less anonymous anti-Semites of the world very much consider me to be a Jew.

John Proctor declares in Arthur Miller’s The Crucible, “it is my name! Because I cannot have another in my life!” Just as I would never overwrite my own surname, so, too, would I never overwrite the trauma of place. Shira Telushkin, in a remarkable piece for Tablet, explains how in Jewish burial the bodies of those who are martyred in the practice of the faith are buried in the same state as when they were murdered, for “their blood cannot be forgotten, simply scrubbed away and disposed of. It must be honored, collected, and buried.” For this ultimately is what we must do: We must honor the blood of the dead, honor the trauma of these places, because rupture is preserved to remind us of God’s broken covenant, of America’s broken covenant. There must not be an exorcism of these ghosts of place; rather, there is no choice but to live with them. Moving from place to place, we carry that imagined Jerusalem within us, we carry that imagined America within us, warmed by the utopian lamp of the Mother of Exiles more than we ever could be by the disappointing reality of the actual one.

Image: Tree of Life memorial; official White House Photo by Andrea Hanks