We know that ghosts cannot speak until they have drunk blood; and the spirits which we evoke demand the blood of our hearts.—Ulrich von Wilamowitz-Moellendorff, Greek Historical Writing, and Apollo (1908)
Thirteen years ago, when I lived briefly in Glasgow, I made it a habit to attend the theater regularly. An unheralded cultural mecca in its own right, overshadowed by charming, medieval Edinburgh to the east, the post-industrial Scottish metropolis never left me lacking for good drama. Also, they let you drink beer during performances. Chief among those plays was a production of Sophocles’s Antigone, the final part of his tragic Theban Cycle, and one of the most theorized and staged dramas from that Athenian golden age four centuries before the Common Era, now presented in the repurposed 16th-century Tron Church. Director David Levin took the Attic Greek of Sophocles and translated it into the guttural brogue of Lowland Scots, and in a strategy now deployed almost universally for any production of a play older than a century, the chitons of the ancient world were replaced with business suits, and the decrees of Creon were presented on television screens, as the action was reimagined not in 441 BCE but in 2007.
Enough to remind me of that headline from The Onion which snarked: “Unconventional Director Sets Shakespeare Play in Time, Place that Shakespeare Intended.” The satirical newspaper implicitly mocks adaptations like Richard Loncraine’s Richard III which imagined the titular character (devilishly performed by Ian McKellen) as a sort of Oswald Mosley-like fascist, and Derek Jarman’s masterful version of Christopher Marlowe’s Edward II, which makes a play about the Plantagenet line of succession into a parable about gay rights and the Act Up movement. By contrast, The Onion quips that its imagined “unconventional” staging of The Merchant of Venice is one in which “Swords will replace guns, ducats will be used instead of the American dollar or Japanese yen, and costumes, such as…[the] customary pinstripe suit, general’s uniform, or nudity, will be replaced by garb of the kind worn” in the Renaissance. The dramaturgical perspective behind Levin’s Antigone was definitely what the article parodied; there was nary a contorted dramatic mask to be found, no Greek chorus chanting in dithyrambs, and, as I recall, lots of video projection. The Onion aside, British philosopher Simon Critchley would see no problem with Levin’s artistic decisions, writing in his new book Tragedy, the Greeks, and Us that “each generation has an obligation to reinvent the classics. The ancients need our blood to revise and live among us. By definition, such an act of donation constructs the ancients in our image.”
Antigone, coming from as foreign a culture as it does, still holds our attention for some reason. The story of the titular character—punished by her uncle Creon for daring to defy his command that her brother Polynices’s corpse be left to fester as carrion for the buzzards and worms in the field where he died because he had raised arms against Thebes—would seem to have little to do with Tony Blair’s United Kingdom. When a Glaswegian audience hears Sophocles’s words, however, that “I have nothing but contempt for the kind of governor who is afraid, for whatever reason, to follow the course that he knows is best for the State; and as for the man who sets private friendship above the public welfare—I have no use for him either,” a bit more resonance may be heard. Critchley argues that at the core of Greek tragedy is a sublime ambivalence, an engagement with contradiction that classical philosophy can’t abide; as distant as Antigone’s origins may be, its exploration of the conflict between the individual and the state, terrorism and liberation, surveillance and freedom seemed very of the millennium’s first decade. Creon’s countenancing of the unthinkable punishment of his niece, to be bricked up behind a wall, was delivered in front of a camera as if George W. Bush were announcing the bombing of Iraq from the Oval Office on primetime television. “Evil sometimes seems good / To a man whose mind / A god leads to destruction,” Sophocles wrote. This was a staging for the era of the Iraq War and FOX News, of the Patriot Act and NSA surveillance, and of the coming financial collapse. Less than a year later, I’d be back in my apartment stateside watching Barack Obama deliver his Grant Park acceptance speech.
It was enough to make one think of Antigone’s line: “Our ship of fate, which recent storms have threatened to destroy, has come to harbor at last.” I’m a bad student of the Greeks; I should have known better than to embrace that narcotic hope that pretends tragedy is not the omnipresent condition of humanity.
What could Sophocles, Euripides, and Aeschylus possibly have to say in our current, troubled moment? Tragedy, the Greeks, and Us is Critchley’s attempt to grapple with those disquieting 32 extant plays that whisper to us from an often-fantasized collective past. What survives of Greek tragedy is four fewer plays than all of those written by Shakespeare: an entire genre of performance for which we have titles referenced by philosophers like Plato and Aristotle, with only those three playwrights’ words enduring, and where often the most we can hope for are a few fragments preserved on some surviving papyri. Critchley emphasizes how little we know about plays like Antigone, or Aeschylus’s Oresteia, or Euripides’s Medea; that classicists often hypothesized that they were born from Dionysian rituals, or that they focused on satyr psalms, the “song of the goats,” giving tragedy the whiff of the demonic, of the demon Azazel to whom sacrifices of the scapegoat must be made in the Levantine desert.
Beyond even tragedy’s origin, which ancient Greek writers themselves disagreed about, we’re unsure exactly how productions were staged or who attended. What we do have are those surviving 32 plays themselves and the horrific narratives they recount—Oedipus blinded in grief over the patricide and incest that he unknowingly committed but prophetically ensured because of his hubris; Medea slaughtering her children as revenge for the unfaithfulness of her husband; Pentheus ripped apart by the frenzied Maenads in ecstatic thrall to Dionysus because the Theban ruler couldn’t countenance the power of irrationality. “There are at least thirteen nouns in Attic Greek for words describing grief, lamentation, and mourning,” Critchley writes about the ancients; our “lack of vocabulary when it comes to the phenomenon of death speaks volumes about who we are.” Tragedy, the Greeks, and Us is Critchley’s attempt to give us a bit of their vocabulary of excessive lamentation so as to better approach our predicament.
Readers shouldn’t mistake Tragedy, the Greeks, and Us for a conservative defense of the canon; this is no paean to the superior understanding of the ancients, nor is it highfalutin self-help. Critchley’s book isn’t Better Living Through Euripides. It would be easy to misread the (admittedly not great) title as an advertisement for a book selling the snake oil of traditionalist cultural literacy, that exercise in habitus that mistakes familiarity with the “Great Books” for a type of wisdom. Rather, Critchley explores the Greek tragedies in all of their strange glory, as an exercise in aesthetic rupture, where the works of Sophocles, Aeschylus, and Euripides configure a different type of space that renders a potent critique against oppressive logic. His task is thus the “very opposite of any and all kinds of cultural conservatism.” Critchley sees the plays not as museum pieces, or as simple means of demonstrating that you went to a college with diplomas written in Latin, but rather as a “subversive traditionalism” that helps us to critique “ever more egregious forms of cultural stupefaction that arise from being blinded by the myopia of the present.” This is all much larger than either celebrating or denouncing the syllabi of St. John’s College; Critchley has no concern for boring questions about “Western Civilization” or “Defending the Canon”; rather, he rightly sees the tragedies as an occasion to deconstruct those idols of our current age—of the market, of society, of law, of religion, of state. He convincingly argues that any honest radical can’t afford to ignore the past, and that something primal and chthonic calls to us from those 32 extant plays, for “We might think we are through with the past, but the past isn’t through with us.”
Critchley explains that the contemporary world, perhaps even more so than when I watched Antigone in Glasgow, is a “confusing, noisy place, defined by endless war, rage, grief, ever-growing inequality. We undergo a gnawing moral and political uncertainty in a world of ambiguity.” Our moment, the philosopher claims, is a “tragicomedy defined by war, corruption, vanity, and greed,” for if my Antigone was of its moment, then Tragedy, the Greeks, and Us could only have been written after 2016. That year, and the characters it ushered into our national consciousness, can seem a particular type of American tragedy, but Critchley’s view (even while haunted by a certain hubristic figure with a predilection for the misspelled tweet) is more expansive than that. In his capable analysis, Critchley argues that tragedy exists as a mode of representing this chaos; a type of thinking at home with inconsistency, ambiguity, contradiction, and complexity. It’s those qualities that have made the form suspicious to philosophers.
Plato considered literature in several of his dialogues, concluding in Gorgias that the “effect of speech upon the structure of the soul / Is as the structure of drugs over the nature of bodies” (he wasn’t wrong), and famously having his puppet Socrates argue in The Republic that the just city-state would ban poets and poetry from its affairs for that very reason. Plato’s disgruntled student Aristotle was more generous to tragedy, content rather to categorize and explain its effects in Poetics, explaining that performance is the “imitation of an action that is serious, and also, as having magnitude, complete in itself…with incidents arousing pity and fear, wherewith to accomplish its catharsis of such emotions.” Aristotle’s view has historically been interpreted as a defense of literature in opposition to Plato, whereby that which the latter found so dangerous—the passions and emotions roiled by drama—was now justified as a sort of emotional pressure gauge that helped audiences purge their otherwise potentially destructive emotions. By the 19th century a philosopher like Friedrich Nietzsche would anticipate Critchley (though the latter might chafe at that claim) in exonerating tragedy as more than mere moral instruction, coming closer to Plato’s claim about literature’s dangers while ecstatically embracing that reality. According to Nietzsche, tragedy existed in the tension between “Apollonian” and “Dionysian” poles; the first implies rationality, order, beauty, logic, and truth; the second signifies the realm of chaos, irrationality, ecstasy, and intoxication. Nietzsche writes in The Birth of Tragedy that the form “sits in sublime rapture amidst this abundance of life, suffering and delight, listening to a far-off, melancholy song…whose names are Delusion, Will, Woe.” For the German philologist that’s a recommendation, to “join me in my faith in this Dionysiac life and the rebirth of tragedy.”
As a thinker, Critchley Agonistes is well equipped to join these predecessors in systematizing what he argues is the unsystematizable. Faculty at the New School for Social Research, and coeditor of The New York Times philosophy column “The Stone” (to which I have contributed), Critchley has proven himself an apt scholar who engages the wider conversation. He is not a popularizer per se, for Critchley’s goal isn’t the composition of listicles enumerating wacky facts about Hegel, but a philosopher in the truest sense of being one who goes into the Agora and grapples with the circumstances of meaning as they manifest in the punk rock venue, at the soccer stadium, and in the movie theater. Unlike most of his countrymen who recline in the discipline, Critchley is a British scholar who embraces what’s called “continental philosophy,” rejecting the arid, logical formulations of analytical thought in favor of the Parisian profundities of thinkers like Jacques Derrida, Emmanuel Levinas, and Martin Heidegger. Critchley has written tomes with titles like The Ethics of Deconstruction: Derrida and Levinas and Ethics-Politics-Subjectivity: Essays on Derrida, Levinas, & Contemporary French Thought, but he’s also examined soccer in What We Think About When We Think About Football (he’s a Liverpool fan) and in Bowie he analyzed, well, Bowie. Add to that his provocative take on religion in Faith of the Faithless: Experiments in Political Theology and on death in The Book of Dead Philosophers (which consists of short entries enumerating the sometimes bizarre ways in which philosophers died, from jumping into a volcano to love potion poisoning), and Critchley has announced himself as one of the most psychedelically mind-expanding of people to earn their lucre by explaining Schopenhauer and Wittgenstein to undergraduates.
What makes Critchley such an engaging thinker about the subjects he examines is both his grounding in continental philosophy (which asks questions about being, love, death, and eternity, as opposed to its analytical cousin, content to enumerate all the definitions of the word “is”) and his unpretentious roots in working-class Hertfordshire, studying at the glass-and-concrete University of Essex as opposed to tony Oxbridge. Thus, when Critchley writes that “there is an ancient quarrel between philosophy and poetry,” it seems pretty clear that he’s a secret agent working for the latter against the former. He rejects syllogism for stanza and embraces poetics in all of its multitudinous and glorious contradictions. The central argument of Tragedy, the Greeks, and Us is that the form “invites its audience to look at such disjunctions between two or more claims to truth, justice, or whatever without immediately seeking a unifying ground or reconciling the phenomena into a higher unity.” What makes Antigone so devastating is that the title character’s familial obligation justifies the burial of her brother, but the interests of the state validate Creon’s prohibition of that same burial. The tragedy arises in the irreconcilable conflict of two right things, with Critchley explaining that Greek drama “presents a conflictually constituted world defined by ambiguity, duplicity, uncertainty, and unknowability, a world that cannot be rendered rationally fully intelligible through some metaphysical first principles or set of principles, axioms, tables of categories, or whatever.”
This is the central argument: that the “experience of tragedy poses a most serious objection to that invention we call philosophy.” More accurately, Critchley argues that tragedy’s comfort with discomfort, its consistent embrace of inconsistency, and its ordered representation of disorder position the genre as a type of radical critique of philosophy, one that expresses the anarchic rhetoric of the sophists rather than that of their killjoy critic Socrates and his dour student Plato. As a refresher, the sophists were the itinerant and sometimes fantastically successful rhetoricians who taught Greek politicians a type of disorganized philosophy that, according to Socrates, had no concern with the truth, but only with what was convincing. Socrates supposedly placed “Truth” at the core of his dialectical method, and, ever since, the discipline has taken up the mantle of “a psychic and political existence at one with itself, which can be linked to ideas of self-mastery, self-legislation, autonomy, and autarchy, and which inform the modern jargon of authenticity.” Tragedy is defined by none of those things; where philosophy strives for order and harmony, tragedy dwells in chaos and division; where syllogism strives to eliminate all contradiction as irrational, poetry understands that it’s in the complexity of inconsistency, confusion, and even hypocrisy that we all dwell. Sophistry and tragedy, to the recommendation of both, are intimately connected, both being methods commensurate with the dark realities of what it means to be alive. Critchley claims that “tragedy articulates a philosophical view that challenges the authority of philosophy by giving voice to what is contradictory about us, what is constricted about us, what is precarious about us, and what is limited about us.”
Philosophy is all arid formulations, dry syllogisms, contrived Gedankenexperiments; tragedy is the knowledge that nothing of the enormity of what it means to be alive can be circumscribed by mere seminar argument. “Tragedy slows things down by confronting us with what we do not know about ourselves,” Critchley writes. If metaphysics is contained by the formulations of the classroom, then the bloody stage provides a more accurate intimation of death and life. By being in opposition to philosophy, tragedy is against systems. It becomes both opposite and antidote to the narcotic fantasy that everything will be alright. Perhaps coming to terms with his own discipline, Critchley argues that “it is necessary to try and think theatrically and not just philosophically.” Tragedy, he argues, provides an opportunity to transcend myths of progress and comforts of order, to rather ecstatically enter a different space, an often dark, brutal, and subterranean place, but one which demonstrates the artifice of our self-regard.
A word conspicuous in its absence from Tragedy, the Greeks, and Us is the “sacred.” If there is any critical drawback to Critchley’s argument, it is his hesitancy, or outright denial, that what he claims in his book has anything to do with something quite so woolly as the noumenal. Critchley gives ample space to argue that “Tragedy is not some Dionysian celebration of the power of ritual and the triumph of myth over reason,” yet a full grappling with his argument seems to imply the opposite. The argument that tragedy stages contradiction is convincing, but those sublime contradictions are very much under the Empire of Irrationality’s jurisdiction. Critchley is critical of those who look at ancient tragedy and “imagine that the spectators…were in some sort of prerational, ritualistic stupor, some intoxicated, drunken dumbfounded state,” but I suppose much of our interpretation depends on how we understand ritual, religion, stupor, and intoxication.
His claims are invested in an understanding of the Greeks as not being fundamentally that different from us. He writes that “there is a lamentable tendency to exoticize Attic tragedy,” but maybe what’s actually called for is a defamiliarization of our own culture, an embrace of the irrational weirdness at the core of what it means to be alive in 2019, where everything that is solid melts into air (to paraphrase Marx). Aeschylus knew the score well: “Hades, ruler of the nether sphere, / Exactest auditor of human kind, / Graved on the tablet of his mind,” as he describes the prince of this world in Eumenides. Critchley, I’d venture, is of Dionysus’s party but doesn’t know it. All that is argued in Tragedy, the Greeks, and Us points towards an awareness, however sublimated, of the dark beating heart within the undead cadaver’s chest. “To resist Dionysus is to repress the elemental in one’s own nature,” writes the classicist E.R. Dodds in his seminal The Greeks and the Irrational; “the punishment is the sudden complete collapse of the inward dykes when the elemental breaks through…and civilization vanishes.”
Critchley is absolutely correct that tragedy is in opposition to philosophy; where the latter offers assurances that reason can see us through, the former knows that it’s never that simple. The abyss is patient and deep, and no amount of analysis, of interpretation, of calculation, of polling can totally account for the hateful tragic pulse of our fellow humans. Nietzsche writes, “what changes come upon the weary desert of our culture, so darkly described, when it is touched by…Dionysus! A storm seizes everything decrepit, rotten, broken, stunted; shrouds it in a whirling red cloud of dust and carries it into the air like a vulture.” If any play best exemplifies that experience, and this moment, it’s Euripides’s The Bacchae, to which Critchley devotes precious little attention. That play depicts the arrival of the ambiguous god Dionysus at Thebes, as his followers thrill to the divine and irrational ecstasies that he promises. It ends with a crowd of those followers, the Maenads, mistaking the ruler Pentheus for a sacrificial goat and pulling him apart, his limbs from their sockets, his organs from their cavities. Until his murder, Pentheus simultaneously manifested a repressed thrill towards the Dionysian fervor and a deficiency in taking the threat of such uncontained emotion seriously. “Cleverness is not wisdom,” Euripides writes, “And not to think mortal thoughts is to see few days.” If any didactic import comes from The Bacchae, it’s to give the devil as an adversary his due, for irrationality has more power than the clever among us might think.
Circling around the claims of Critchley’s book is our current political situation, alluded to but never engaged outright. In one sense, that’s for the best; those demons’ names are uttered endlessly all day anyhow. It’s desirable to at least have one place where you need not read about them. But in another manner, fully intuiting the Dionysian import of tragedy becomes all the more crucial when we think about what that dark god portends in our season of rising authoritarianism. “Tragedy is democracy turning itself into a spectacle,” and anyone with Twitter will concur with that observation of Critchley’s. Even more important is Critchley’s argument about those mystic chords of memory connecting us to a past that we continually reinvent; the brilliance of his claim about why the Greeks matter to us now, removing the stuffiness of anything as prosaic as canonicity, is that tragedy encapsulates the way in which bloody trauma can vibrate through the millennia and control us as surely as the ancients believed fate controlled humans. Critchley writes that “Tragedy is full of ghosts, ancient and modern, and the line separating the living from the dead is continually blurred. This means that in tragedy the dead don’t stay dead and the living are not fully alive.” We can’t ignore the Greeks, because the Greeks aren’t done with us. If there is anything that hampers us as we attempt to exorcise the Dionysian revelers in our midst, it’s that many don’t acknowledge the base, chthonic power of such irrationality, and they refuse to see how violence, hate, and blood define our history in the most horrific of ways. To believe that progress, justice, and rationality are guaranteed, that they don’t require a fight commensurate with their worthiness, is to let a hubris fester in our souls and to court further tragedy among our citizens.
What Medea or The Persians do is allow us to safely access the Luciferian powers of irrationality. They present a more accurate portrayal of humanity, based as we are in bloodiness and barbarism, than the palliatives offered by Plato in The Republic with his philosopher kings. Within that space of the theater, Critchley claims that at its best it “somehow allows us to become ecstatically stretched out into another time and space, another way of experiencing things and the world.” Far from the anemic moralizing of Aristotelian catharsis—and Critchley emphasizes just how ambiguous that word actually is—that is too often interpreted as referring to a regurgitative didacticism, tragedy actually makes a new world by demolishing and replacing our world, if only briefly. “If one allows oneself to be completely involved in what is happening onstage,” Critchley writes, “one enters a unique space that provides an unparalleled experience of sensory and cognitive intensity that is impossible to express purely in concepts.” I recall seeing a production of Shakespeare’s Othello at London’s National Theatre in 2013, directed by Nicholas Hytner and starring Adrian Lester as the cursed Moor and Rory Kinnear as a reptilian Iago. Dr. Johnson wrote that Othello’s murder of Desdemona was the single most horrifying scene in drama, and I concur; the play remains the equal of anything by Aeschylus or Euripides in its tragic import.
When I watched Lester play the role, lingering over the dying body of his faithful wife, whispering “What noise is this? Not dead—not yet quite dead?” I thought of many things. I thought about how Shakespeare’s play reflects the hideous things that men do to women, and the hideous things that the majority do to the marginalized. I thought about how jealousy noxiously fills every corner, no matter how small, like some sort of poison gas. And I thought about how unchecked malignancy can shatter our souls. But mostly what I thought wasn’t in any words, but was better expressed by Lester’s anguished cry as he confronted the evil he’d done. If tragedy allows for an audience to occasionally leave our normal space and time, then certainly I felt like I was joined with those thousand other spectators on that summer night at South Bank’s Olivier Theatre. The audience’s silence after Othello’s keening subsided was as still as the space between atoms, as empty as the gap between people.
“For years the old Italians have been dying/all over America.” -Lawrence Ferlinghetti
On the second floor of Harvard’s Fogg Museum, in an airy, well-lit, white-walled gallery, near a slender window overlooking a red-bricked Cambridge street, there is a display case holding three portraits on chipped wood not much bigger than postcards. Of varying degrees of aptitude, the paintings are of a genre called “Fayum Portraits,” after the region of Egypt where they’re commonly found. When the Roman ruling class established itself in this Pharaonic land during the first few centuries of the Common Era, its members had themselves mummified in the Egyptian fashion, with Hellenistic painted portraits affixed onto the faces of their preserved bodies. Across the extent of the Roman empire, from damp Britain to humid Greece, little of the more malleable painted arts survived, but in sun-baked Egypt these portraits could peer out 20 centuries later as surely as the desert dried out their mummified corpses. When people envision ancient Mediterranean art, they may think of the grand sculptures blanched a pristine white, Trajan’s Arch and the monumental head of Constantine, the colorful paint which once clung to their surfaces long since eroded away. And while the monumental marbles of classical art are what most people remember of the period, the Fayum portraits at Harvard provide an entirely more personal gaze across the millennia.
If white is the color we associate with those sculptures, then the portraits here in Cambridge are of a different hue. They are nut-brown, tanned from the noon-day sun, yellow-green, and olive. Mummy Portrait of a Woman with an Earring, painted in the second century, depicts in egg tempera on wood a dark-skinned middle-aged woman with commanding brown eyes, her black hair showing a bit of curl even as it is pulled back tightly on her scalp; a woman looking out with an assuredness that belies her anonymity over time. Mummy Portrait of a Bearded Man shows the tired look of an old man, grey beard neatly clipped and groomed, his wavy grey hair still with a hint of auburn and combed back into place. Fragments of a Mummy Portrait of a Man represents a far younger man, cleft-chinned with a few days’ black stubble over his olive skin. What’s unnerving is the eerie verisimilitude of this nameless trio.
That they look so contemporary, so normal, is part of what’s unsettling. But they also unsettle because they’re there to assist in overturning our conceptions about what Roman people, those citizens of that vast, multicultural, multilingual, multireligious empire, looked like. Our culture is comfortable with the lily-white sculptures we associate with our Roman forebears, which were then imitated in our own imperial capitals; it’s easier to pretend that the ancient Romans had nothing to do with the people who live there now. And yet when looking at the Fayum portraits I’m always struck by how Italian everybody looks.
The old man could be tending tomatoes in a weedy plot somewhere in Trenton; the middle-aged woman could be wearily volunteering for a church where she’s not too keen on the new Irish priest; and the young man with the stubble looks like he just got off a shift somewhere in Bensonhurst and is texting his friends to see who wants to go into the city. I’ve never seen anyone who actually looks like the statue of Caesar Augustus of Prima Porta carved from white stone, but you’ll see plenty of people who look like the Fayum Portraits in Boston’s North End, Federal Hill, or Bloomfield (the one either in Jersey or in Pittsburgh). When I look at the Fayum portraits, I see people I know; I see my own family.
Despite my surname’s vowel deficiency, I’m very much Italian-American; mathematically, twice as much as Robert De Niro. So I feel well equipped to offer commentary in answering the question with which I’ve titled this piece. Furthermore, as a second-generation American, I’m not that far removed from Ellis Island. My mother’s father immigrated from Abruzzo, that mountainous, bear-dwelling region that was the birthplace of Ovid, and his entire family and many of his fellow villagers were brought over to Westmoreland County, Pennsylvania, to work as stonemasons, a trade they’d been plying since the first rock was laid in the Appian Way. My grandmother’s family was from Naples, where Virgil is buried, a teeming volcanic metropolis of orange and lemon trees, a heaven populated by devils, as native son Giordano Bruno wrote in the 16th century. For me, being Italian was unconscious; it simply was a fact no more remarkable than my dark hair or brown eyes.
Being Italian meant at least seven fishes on Christmas Eve and colored lights rather than white ones on the tree; it meant (and still means) cooking most things with a heavy dollop of olive oil and garlic; it means at least once a week eating veal parmesan, prosciutto and melon, calamari, spaghetti with tuna, or buffalo mozzarella with tomatoes. Being Italian meant laminated furniture in the homes of extended family, and Mary-on-the-half-shell; it meant a Catholicism more cultural than theological, with the tortured faces of saints vying alongside a certain type of pagan magic. Being Italian meant assumed good looks and a certain ethnic ambiguity; it meant uncles who made their own wine and grew tomatoes in the backyard.
Being Italian-American meant having an identity where the massive pop culture edifice that supplies representations of you implies that the part before the hyphen somehow makes the second half both more and less true. My position was much like Maria Laurino’s in Were You Always an Italian?: Ancestors and Other Icons of Italian America, where she writes that “All the pieces of my life considered to be ‘Italian’…I kept distinct from the American side, forgetting about the hyphen, about that in-between place where a new culture takes form.” What I do viscerally remember is the strange sense I had watching those corny old sword-and-sandal epics that my middle school Latin teacher used to fill up time with, a sense that those strangely Aryan Romans presented on celluloid were supposed to somehow be related to me. Actors whose chiseled all-American whiteness evoked the marbles that line museum halls. Sculptures of Caesar Augustus were once a lot more olive than white as well. That classical Greek and Roman statuary was vividly painted, only to fade over time, has been known since the 19th century, even as contemporary audiences sometimes violently react to that reality. Using modern technology, archeologist Vinzenz Brinkmann has been able to restore some of the most famous Greek and Roman statues to glorious color, as he details in his Gods in Color: Polychromy in the Ancient World, but as the classicist Sarah Bond writes in Forbes, “Intentional or not, museums present viewers with a false color binary of the ancient world.” We think of the Romans as lily white, but the Fayum portraits demonstrate that they very much weren’t. That the individuals in these pictures should appear so Italian shouldn’t be surprising—Romans are Italians after all. Or at least in the case of the Fayum portraits they’re people from a mélange of backgrounds, including not just Romans, but Greeks, Egyptians, Berbers, Arabs, Jews, Ethiopians, and so on. 
Rome was, like our own, a hybridized civilization, and it’s marked on the faces that peer out towards us on that wall.
Last fall, at a handful of Boston-area colleges just miles from the museum, classical imagery was appropriated for very different ends. Students awoke to find their academic halls papered with posters left overnight by members of one of the fascistic groups that emerged after the 2016 presidential election, a bigotry exposed like the fungus growing underneath a rotting tree stump that’s been kicked over. This particular group combined images of bleached classical sculpture with neo-fascist slogans to make its white supremacist arguments. The Apollo Belvedere is festooned with the declaration “Our Future Belongs to Us,” 17th-century French Neo-Classical sculptor Nicolas Coustou’s bust of Julius Caesar has “Serve your People” written beneath it, and in the most consciously Trumpy of the posters, a close-up of the face of Michelangelo’s David enjoins “Let’s Become Great Again.” There’s something particularly ironic in commandeering the David in the cause of white supremacy. Perhaps they didn’t know that that exemplar of the Italian Renaissance was a depiction of a fierce Jewish king as rendered by a gay, olive-skinned artist?
Such must be the central dilemma of the confused white supremacist, for the desire to use ancient Rome in their cause has been irresistible ever since Benito Mussolini concocted his crackpot system of malice known as fascismo corporativo, but the reality is that the descendants of those very same Romans often don’t appear quite as “white” as those supremacists would be comfortable with. This is especially important when considering that the Romans “did not speak in terms of race, a discourse invented many centuries later,” as scholar Nell Irvin Painter writes in The History of White People. Moral repugnance is a given when it comes to racist ideologies, but one should also never forget the special imbecility that comes along with arguing that you’re innately superior because you kinda, sorta, maybe physically resemble dead people who did important things. What makes the case of the posters more damning is that those who made them don’t even actually look like the people whose culture they’ve appropriated.
No doubt the father of celebrated journalist Gay Talese would be outraged by this filching. In a 1993 piece for The New York Times Book Review, he remembers his “furious and defensive father” exploding after he’d learned that the Protestant-controlled school board had rejected his petition to include Ovid and Dante in the curriculum, the elder man shouting at his son that “‘Italy was giving art to the world when those English were living in caves and painting their faces blue!’” It’s a particular twist that the descendants of those same WASPs now paper college campuses with posters of Italian sculptures from which they somehow claim patrimony. But that’s always been the predicament of the Western chauvinist, primed to take ownership of another culture as evidence of his own genius, while simultaneously having to explain his justifications for the disdain in which he holds the actual children of that culture.
Since the late 19th-century arrival of millions of immigrants from the Mezzogiorno, American racists have long contrived baroque justifications for why white Anglo-Saxon Protestants are the inheritors of Italian culture, while Italians themselves are not. Some of this logic was borrowed from Italy itself, where even today, Robert Lumley and Jonathan Morris record in The New History of the Italian South, some northerners will claim that “Europe ends at Naples. Calabria, Sicily, and all the rest belong to Africa.” Northern Italians, comparatively wealthier, better educated, and most importantly fairer, had been in the United States a generation before their southern cousins, and many Anglo-Americans borrowed that racialized animus against southerners which reigned (and still does) in the old country.
As Richard Gambino writes in Blood of my Blood: The Dilemma of the Italian-Americans, it was in the “twisted logic of bigotry” that these immigrants were “flagrantly ‘un-American.’ And Italians replaced all the earlier immigrant groups as targets of resentment about the competition of cheap labor.” This was the reasoning that claimed all Italian accomplishments could be attributed to a mythic “Nordic” or “Teutonic” influence, so that any Mediterranean achievements were explained away, orienting Rome towards a Europe it was only tangentially related to and away from an Africa that long had an actual influence. Notorious crank Madison Grant, in his unabashedly racist 1916 The Passing of the Great Race, claimed that Italians were now “storming the Nordic ramparts of the United States and mongrelizing the good old American stock,” with Gambino explaining that “In his crackpot explanation, Italians are the inferior descendants of the slaves who survived when ancient Rome died.”
Paleontologist Stephen Jay Gould writes that Grant’s book was “the most influential tract of American scientific racism,” and Adolf Hitler wrote a letter to the author claiming “The book is my Bible.” A lawyer and eugenicist, Grant was influential in both the Palmer Raids, a series of unconstitutional police actions directed by the Wilson administration against immigrants suspected of harboring anarchist and communist sympathies, and the xenophobic nastiness of the 1924 Johnson-Reed Act, which slowed eastern and southern European immigration to a trickle. Incidentally, it was the Johnson-Reed Act that, had it been passed 10 years earlier, would have barred my mother’s father from entering the United States; a law that former Attorney General Jefferson Beauregard Sessions lauded in a 2017 interview with Stephen Bannon, arguing that the banning of immigrants like those in my family “was good for America.”
In Chiaroscuro: Essays of Identity, Helen Barolini writes that “Italian Americans are too easily used as objects of ridicule and scorn,” and while that’s accurate, it’s a rhetoric that has deep and complicated genealogies. Italy has always occupied a strange position in the wider European consciousness. It is simultaneously the birthplace of “Western Civilization,” and an exoticized, impoverished, foreign backwater at the periphery of the continent; the people who first modeled a noxious imperialism, and the subjugated victims of later colonialism. A pithy visual reminder of Italy’s status in early modern Europe can be seen in the German painter Hans Holbein the Younger’s 1533 masterpiece The Ambassadors, which depicts two of the titular profession surrounded by their tools. On a shelf behind them sits a globe. Europe is differentiated from the rest of the world by being an autumnal brown-green, with the exception of two notable regions colored the same hue as Africa—Ireland and Sicily. In Are Italians White?: How Race is Made in America, coedited with Salvatore Salerno, historian Jennifer Guglielmo explains that the “racial oppression of Italians had its root in the racialization of Africans,” something never more evident than in the anti-Italian slur “guinea” with its intimations of Africanness, this implication of racial ambiguity having profound effects on how Italians were understood and how they understood themselves.
In the rhetoric and thought of the era, Italy was paradoxically the genesis of Europe while also somehow not European. As such, Italians were to be simultaneously emulated and admired, while also reviled and mocked. During the English Renaissance, which was of course a sequel to the original one, books like Baldassare Castiglione’s The Courtier and Niccolò Machiavelli’s The Prince, with their respective paeans to sensuality and duplicity, molded a particular view of Italianness that has long held sway in the English imagination. Consider all of the Shakespeare plays set in an imagined Italy: The Taming of the Shrew, Two Gentlemen of Verona, Much Ado About Nothing, Romeo and Juliet, Julius Caesar, Titus Andronicus, Othello, Coriolanus, The Winter’s Tale, and The Merchant of Venice, not to mention the occasional appearance of Romans in other plays. Shakespeare’s plays, and other icons of the English Renaissance, set a template that never really faded: a simultaneous attraction to and disgust at a people configured as overly emotional, overly sexual, overly flashy, overly corrupt, overly sensual, with a propensity less cerebral than hormonal. And, always, the criminality.
Long before Mario Puzo or The Sopranos, Renaissance English writers impugned Italians with a particular antisocial perfidy. Such is displayed in Thomas Nash’s 1594 The Unfortunate Traveller: or, the Life of Jack Wilton, which could credibly be called England’s first novel. In that picaresque, the eponymous character perambulates through the Europe of the early 16th century, encountering luminaries like Thomas More, Erasmus, Henry Howard, Martin Luther, and Cornelius Agrippa, and witnessing events like the horrific Anabaptist siege at Münster. Most of Nash’s narrative, however, takes place in “the Sodom of Italy,” where an English fever dream of that country’s excess settles like a yellow fog. One Englishman laments that the only lessons to be learned there are “the art of atheism, the art of epicurizing, the art of whoring, the art of poisoning, the art of sodomitry.”
Nash’s penultimate scene, which reads like something out of Quentin Tarantino, has an Italian nobleman being executed for the violent revenge he took upon his sister’s rapist. On the scaffold, the nobleman declares that “No true Italian but will honor me for it. Revenge is the glory of arms and the highest performance of valor,” and indeed his revenge was of an immaculate quality. He’d first forced his sister’s assailant to abjure God and condemn salvation, and then, satisfied that such blasphemy would surely send his victim to hell, he shot him in the head. A perfect revenge upon not just the body, but the soul. Nash presents such passion as a ritual of decadent Mediterranean vendetta, simultaneously grotesque and inescapably evocative.
From Nash until today there has often been a presumption of vindictive, relativist morality on the part of Italians, and it has slurred communities with an assumption of criminality. In the early 20th century, sociologists claimed that the dominant Italian ethic was “familial amoralism,” whereby blood relations took precedence over all other social institutions. Nash’s nobleman is the great-grandfather of Michael Corleone in the collective imagination. Do not read this as squeamish sensitivity; I’d never argue that The Godfather, written and directed by Italians, is anything less than an unmitigated masterpiece. Both Puzo’s novel and Francis Ford Coppola’s adaptation are potent investigations of guilt, sin, and evil. I decided not to join the Sons of Italy after I saw how much of their concern was with stereotypes on The Sopranos, which I still regard as among the greatest television dramas of all time. I concur with Bill Tonelli, who in his introduction to The Italian American Reader snarked that “nobody loves those characters better than Italian Americans do,” and yet I recall with a cringe the evaluation of The Godfather given to me by a non-Italian: that the film was about nothing more than “spaghetti and murder.”
Representations of Italianness in popular culture aren’t just Michael Corleone and Tony Soprano; there’s also the weirdly prevalent sitcom stereotype of the lovable, but dumb, hypersexual goombah. I enter into consideration Arthur “The Fonz” Fonzarelli from Happy Days, Tony Micelli from Who’s the Boss?, Vinny Barbarino of Welcome Back, Kotter, and of course Friends’ Joey Tribbiani. Once I argued with my students about whether there was something offensive about The Jersey Shore, finally convincing them of the racialized animus in the series when I asked why there had never been an equivalent show about badly behaved WASPs called Martha’s Vineyard.
Painter explains that “Italian Americans hovered longer on the fringes of American whiteness,” and so any understanding must take into account that until recently Italians were still inescapably exotic to many Americans. Tonelli writes that “in an era that supposedly values cultural diversity and authenticity, the portrait of Italian Americans is monotonous and observed from a safe distance.” The continued prevalence of these stereotypes is a residual holdover from the reality that Italians are among the last of “ethnics” to “become white.” Tonelli lists the “mobsters, the urban brute, the little old lady shoving a plate of rigatoni under your nose,” declaiming that “it gets to be like a minstrel show after a while.”
Consider Judge Webster Thayer, who after the 1921 sham trial of anarchists Bartolomeo Vanzetti and Nicola Sacco would write that although they “may not have committed the crime” attributed to them, they were “nevertheless morally culpable” because they were both enemies of “our existing institutions… the defendant’s ideals are cognate with crime.” Privately, Thayer bragged to a Dartmouth professor, “Did you see what I did to those anarchistic bastards the other day?” As late as 1969, another professor, this one at Yale, felt free to tell a reporter, in response to a query about a potential Italian-American New York mayoral candidate, that “If Italians aren’t actually an inferior race, they do the best imitation of one I’ve seen.”
But sometime in the decades after World War II, Italians followed the Irish and Jews into the country club of whiteness with its carefully circumscribed membership. Guglielmo explains that initially “Virtually all Italian immigrants [that] arrived in the United States [did so] without a consciousness about its color line.” Victims of their birth nation’s rigid social stratification based on complexion and geography, the new immigrants were largely ignorant of America’s racial history, and thus were largely immune to the anti-black racism that was prevalent. These immigrants had no compunction about working and living alongside African Americans, and often understood themselves to occupy a similar place in the social hierarchy.
But as Guglielmo explains, by the second and third generation there was an understanding that to be “white meant having the ability to avoid many forms of violence and humiliation, and assured preferential access to citizenship, property, satisfying work, livable wages, decent housing, political power, social status, and a good education, among other privileges.” Political solidarity with black and Hispanic Americans (we forget that Italians are Latinx too) was abandoned in favor of assimilation to the mainstream. Jennifer Gillan writes in the introduction to Growing up Ethnic in America: Contemporary Fiction about Learning to be American that “American[s] have often fought bitter battles over what it means to be American and who exactly get[s] to qualify under the umbrella term,” and towards the end of the 20th century Italians had fought their way into that designation, but they also left many people behind. In the process, a beautiful radical tradition was forgotten: we traded Giuseppe Garibaldi for Frank Rizzo, Philly’s racist mayor of the ’70s, and instead of Sacco and Vanzetti we’re saddled with Antonin Scalia and Rudy Giuliani. As Guglielmo mourns, “Italians were not always white and the loss of this memory is one of the great tragedies of racism in America.”
If there is to be any antidote, it must be in words, for literature allows for imaginative possibilities and the complexities of empathy. What is called for is a restitution, or rather a recognition, of an Italian-American literary canon acting as a bulwark against both misrepresentation and amnesia. Talese infamously asked if there were no “Italian-American Arthur Millers and Saul Bellows, James Baldwins and Toni Morrisons, Mary McCarthys and Mary Gordons, writing about their ethnic experiences?” There’s an irony to this question, as Italians in the old country would never think to ask where their writers are, the land of Virgil and Dante secure in its literary reputation, with more recent years seeing the celebrated post-modernisms of Italo Calvino, Primo Levi, Dario Fo, and Umberto Eco. In this century, the citizens of Rome, Florence, and Milan face different questions than their cousins in Newark, Hartford, or Providence. Nor do we bemoan a dearth of examples in other fields: that Italians can hit a baseball or throw a punch can be seen in Joe DiMaggio’s home runs and Rocky Marciano’s slugging; that we can strike a note is heard in Frank Sinatra and Dean Martin; that we can shoot a picture is proven by Coppola, Brian De Palma, and Martin Scorsese.
Yet in the literary arts no equivalent names come up, at least none that are thought of as distinctly Italian. Regina Barreca, in the introduction to Don’t Tell Mama!: The Penguin Book of Italian American Writing, notes the endurance of the slur that sees Italians as “deliberately dense, badly educated, and culturally unsophisticated.” By this view the wider culture is fine with the physicality of boxers and baseball players, the emotion and sensuality of musicians, even the Catholic visual idiom of film as opposed to the Protestant textuality of the written word, so that the “intellectual” pursuits of literature are precluded. She explains that what remains is an “idea of Italian Americans as a people who would never choose to read a book, let alone write one,” though as Barreca stridently declares, this is a “set of hazardous concepts [which] cannot simply be outlived; it must be dismantled.”
I make no claims to originating the idea that we must establish an Italian-American literary canon; such has been a mainstay of Italian-American Studies since that field’s origin in the ’70s. This has been the life’s work of scholars like Gambino, Louise DeSalvo, and Fred Gardaphé, not to mention all of the anthology editors I’ve referenced. Tonelli writes that “Our time of genuine suffering at the hands of this bruising country passed more or less unchronicled, by ourselves or anyone else,” yet there are hidden examples of Italian-American voices writing about an experience that goes beyond mafia or guido stereotypes. For many of these critics, the Italian-American literary canon was something that already existed; it was merely a question of being able to recognize what exists beyond the stark black and red cover of The Godfather. Such a task involved the elevation of lost masterpieces like Pietro di Donato’s 1939 proletarian novel Christ in Concrete, but also a reemphasis on the vowels at the ends of names for authors who are clearly Italian but are seldom thought of as such.
That Philip Roth is a Jewish author goes without saying, but rarely do we think of the great experimentalist Don DeLillo as an Italian-American author. A restitution of the Italian-American literary canon would ask what, precisely, is uniquely Italian about a DeLillo. For that matter, what are the Italian-American aesthetics of poets like Lawrence Ferlinghetti, Jay Parini, Diane Di Prima, and Gregory Corso? What can we better say about the Italianness of Gilbert Sorrentino and Richard Russo? Where do we locate the Mezzogiorno in the criticism and scholarship of A. Bartlett Giamatti, Frank Lentricchia, and Camille Paglia? Barreca writes that “Italian Americans live (and have always lived) a life not inherited, but invented,” and everything is to be gained by making that reinvention for ourselves. Furthermore, I’d suggest that the hybridized nature of what it has always meant to be Italian provides a model for avoiding the noxious nationalisms that increasingly define our era.
Guglielmo writes that “Throughout the twentieth century, Italian Americans crafted a vocal, visionary, and creative oppositional culture to protest whiteness and build alliances with people of color,” and I’d argue that this empathetic imagination was born out of the pluralistic civilization of which the Italians were descendants. Contrary to pernicious myths of “racial purity,” the Romans were as diverse as Americans are today, drawing not just from Italic peoples like the Umbrians, Sabines, Apulians, and Etruscans, but also from Egyptians, Ethiopians, Berbers, Carthaginians, Phoenicians, Greeks, Anatolians, Gauls, Huns, Dacians, Franks, Teutons, Vandals, Visigoths, Anglo-Saxons, Normans, Iberians, Jews, Arabs, and Celts, among others. A reality quite contrary to the blasphemy of those posters with their stolen Roman images.
Rome was both capital and periphery, a culture like a circle whose center is everywhere and whose circumference is nowhere. Christine Palamidessi Moore, in her contribution to the Lee Gutkind and Joanna Clapps Herman anthology Our Roots Are Deep with Passion: Creative Nonfiction Collects New Essays by Italian American Writers, notes that “Italy is a fiction: a country of provinces, dialects, and regions, and historically because of its location, an incorporator of invaders, empires, and bloodlines.” Sitting amidst the Mare Nostrum of its wine-dark sea, Italy has always been at a nexus of Europe, Africa, and Asia, situated between north and south, east and west. Moore explains that the “genuineness of the ethnicity they choose becomes more obscure and questionable because of its mixed origins; however, because it is voluntary, the act of choosing sustains the identity.”
The question then is not “What was Italian America?” but rather “What can Italian America be?” In 1922 W.E.B. DuBois, the first African American to earn a doctorate from Harvard, spoke to a group of impoverished Italian immigrants at Chicago’s Hull House. Speaking against the Johnson-Reed Act, DuBois appealed to a spirit of confraternity, arguing that there must be a multiethnic coalition against a “renewal of the Anglo-Saxon cult: the worship of the Nordic totem, the disenfranchisement of Negro, Jew, Irishman, Italian, Hungarian, Asiatic and South Sea Islander.” When DuBois spoke against the “Anglo-Saxon cult” he condemned not actual English people, but rather the fetish that holds that only those of British stock can be “true Americans.” When he denounced the “Nordic totem,” he wasn’t castigating actual northern Europeans, only the system that claims they are worthier than everyone else. What DuBois condemned was not a people, but a system that today we’ve taken to calling “white privilege,” and he’s just as correct a century later. The need for DuBois’s coalition has not waned.
Italian-Americans can offer the example of a culture that was always hybridized, always at the nexus of different peoples. Italians have never been all one thing or the other, and that seems to me the best way to be. It’s this liminal nature that’s so valuable, which provides an answer to the idolatries of ancestry that are once again emerging in the West (with Italy no exception). DuBois offered a different vision, a coalition of many hues marshaled against the hegemony of any one. When I meet the gaze of the Fayum portraits, I see in their brown eyes an unsettling hopefulness from some 20 centuries ago, looking past my shoulder and just beyond the horizon where perhaps that world may yet exist.
“Homer on parchment pages! / The Iliad and all the adventures / Of Ulysses, foe of Priam’s kingdom, / All locked within a piece of skin / Folded into several little sheets!”—Martial, Epigrammata (c. 86-103)
“A good book is the precious life-blood of a master spirit, embalmed and treasured up on purpose to a life beyond life.”—John Milton, Areopagitica (1644)
At Piazza Maurizio Bufalini 1 in Cesena, Italy, there is a stately sandstone building of buttressed reading rooms, Venetian windows, and extravagant masonry that holds slightly under a half-million volumes, including manuscripts, codices, incunabula, and print. Commissioned by Malatesta Novello in the 15th century, the Malatestiana Library opened its intricately carved walnut door to readers in 1454, at the height of the Italian Renaissance. The nobleman who funded the library had his architects borrow from ecclesiastical design: the columns of its rooms evoke temples, its seats the pews that would later line cathedrals, its high ceilings those of monasteries.
Committed humanist that he was, Novello organized the volumes of his collection through an idiosyncratic system of classification that owed more to the occultism of Neo-Platonist philosophers like Marsilio Ficino, who wrote in nearby Florence, or Giovanni Pico della Mirandola, who would be born shortly after its opening, than to the arid categorization of something like our contemporary Dewey Decimal System. For those aforementioned philosophers, microcosm and macrocosm were forever nestled into and reflecting one another across the long line of the great chain of being, and so Novello’s library was organized in a manner that evoked the connections of both the human mind in contemplation as well as the universe that was to be contemplated itself. Such is the sanctuary described by Matthew Battles in Library: An Unquiet History, where a reader can lift a book and test its heft, can appraise “the fall of letterforms on the title page, scrutinizing marks left by other readers … startled into a recognition of the world’s materiality by the sheer number of bound volumes; by the sound of pages turning, covers rubbing; by the rank smell of books gathered together in vast numbers.”
An awkward-looking yet somehow still elegant carved elephant stands as the keystone above one door’s lintel, and it serves as the modern library’s logo. Perhaps the elephant is a descendant of one of Hannibal’s pachyderms who thundered over the Alps centuries before, or maybe the grandfather of Hanno, Pope Leo X’s pet—gifted to him by the King of Portugal—who would make the Vatican his home decades later. Like the Renaissance German painter Albrecht Dürer’s celebrated engraving of a rhinoceros, the exotic and distant elephant speaks to the concerns of this institution—curiosity, cosmopolitanism, and commonwealth.
It’s the last quality that makes the Malatestiana Library so significant. There were libraries that celebrated curiosity before, like the one at Alexandria, whose scholars demanded that the original of every book brought to port be deposited within while a reproduction was returned to the owner. And there were collections that embodied cosmopolitanism, such as that in the Villa of the Papyri, owned by Lucius Calpurnius Piso Caesoninus, the father-in-law of Julius Caesar, which excavators discovered in the ash of Herculaneum, and which included sophisticated philosophical and poetic treatises by Epicurus and the Stoic Chrysippus. But what made the Malatestiana so remarkable wasn’t its collections per se (though they are remarkable), but rather that it was built neither for the singular benefit of the Malatesta family nor for a religious community, and that, unlike in monastic libraries, its books were not held in place by heavy chains. The Bibliotheca Malatestiana would be the first of its type—a library for the public.
If the Malatestiana was to be like a map of the human mind, then it would be an open-source mind, a collective brain to which we’d all be invited as individual cells. Novello amended the utopian promise of complete knowledge as embodied by Alexandria into something wholly more democratic. Now, not only would an assemblage of humanity’s curiosity be gathered into one temple, but that palace would be a commonwealth for the betterment of all citizens. From that hilly Romagnol town you can draw a line of descent to the Library Company of Philadelphia founded by Benjamin Franklin; the annotated works of Plato and John Locke owned by Thomas Jefferson and housed in a glass cube at the Library of Congress; the reading rooms of the British Museum where Karl Marx penned Das Kapital (that collection having since moved closer to King’s Cross Station); the Boston Public Library in Copley Square, with its chiseled names of local worthies like Ralph Waldo Emerson and Henry David Thoreau ringing its colonnade; and the regal stone lions who stand guard on Fifth Avenue in front of the Main Branch of the New York Public Library.
More importantly, the Malatestiana is the progenitor of millions of local public libraries from Bombay to Budapest. In the United States, the public library arguably endures as one of the last truly democratic institutions. In libraries there are not just the books collectively owned by a community, but the toy exchanges for children, the book clubs and discussion groups, the 12-step meetings in basements, and the respite from winter cold for the indigent. For all of their varied purposes, and even amid the ascendant reign of modern technology, the library is still focused on the idea of the book. Sometimes the techno-utopians malign the concerns of us partisans of the physical book as merely a species of fetishism: the desire to turn crinkled pages labeled an affectation, the pleasure drawn from the heft of a hardback dismissed as misplaced nostalgia. Yet there are indomitably pragmatic defenses of the book as physical object—now more than ever.
For one, a physical book is safe from the Orwellian deletions of Amazon and the electronic surveillance of the NSA. A physical book, being unconnected to the internet, can be a closed-off monastery, sealed against the distraction and dwindling attention spans engendered by push notifications and smartphone apps. The book as object allows for a true degree of interiority, of genuine privacy, that cannot be ensured on any electronic device. To penetrate the sovereignty of the Kingdom of the Book requires the lo-fi method of looking over a reader’s shoulder. A physical book is inviolate in the face of a power outage, and it cannot short-circuit. There is no rainbow pinwheel of death when you open a book.
But if I can cop to some of what the critics of us Luddites impugn us with, there is something crucial about the weight of a book. So much depends on a cracked spine and a coffee-stained page. There is an “incarnational poetics” to the very physical reality of a book that can’t be replicated on a greasy touch screen. John Milton wrote in his 1644 Areopagitica, still one of the most potent defenses of free speech ever written, that “books are not absolutely dead things, but do contain a potency of life in them to be as active as that soul whose progeny they are.” This is not simply metaphor; in some sense we must understand books as being alive, and just as it’s impossible to extricate the soul of a person from their very sinews and nerves, bones and flesh, so too can we not divorce the text from the smooth sheen of vellum, the warp and weft of paper, the glow of the screen. Geoffrey Chaucer or William Shakespeare must be interpreted differently depending on how they’re read. The medium, to echo media theorist Marshall McLuhan, has always very much been the message.
This embodied poetics is, by its sheer sensual physicality, directly related to the commonwealth that is the library. Battles argues that “the experience of the physicality of the book is strongest in large libraries”; stand before the glass cube at the center of the British Library, among the stacks upon stacks of Harvard’s Widener Library, or under the domed portico of the Library of Congress and tell me differently. In sharing books that have been read by hundreds before us, we’re privy to other minds in a communal manner, from the barely erased penciled marginalia in a beaten copy of The Merchant of Venice to the dog-ears in Leaves of Grass.
What I wish to sing of, then, is the physicality of the book: its immanence, its embodiment, its very incarnational poetics. Writing about these “contraptions of paper, ink, cardboard, and glue,” Keith Houston in The Book: A Cover-to-Cover Exploration of the Most Powerful Object of Our Time challenges us to grab the closest volume and to “Open it and hear the rustle of paper and the crackle of glue. Smell it! Flip through the pages and feel the breeze on your face.” The exquisite physicality of matter underlies the arid abstractions of this thing we call “Literature,” even as we forget the basic fact that writing may originate in the brain and may be uttered by the larynx, but it’s preserved on clay, papyrus, paper, and patterns of electrons. In 20th-century literary theory we took to calling anything written a “text,” which endlessly confuses our students, who themselves are prone to call anything printed a “novel” (regardless of whether or not it’s fictional). The text, however, is a ghost. Literature is the spookiest of arts, leaving behind not the Ozymandian monuments of architectural ruin but words grooved into the very electric synapses of our squishy brains.
Not just our brains though, for Gilgamesh is dried in the rich, baked soil of the Euphrates; Socrates’s denunciation of the written word from Plato’s Phaedrus was wrapped in the fibrous reeds grown alongside the Nile; Beowulf forever slaughters Grendel upon the taut, tanned skin of some English lamb; Prospero contemplates his magic books among the rendered rags of Renaissance paper pressed into the quarto of The Tempest; and Emily Dickinson’s scraps of envelope, from the wood pulp of trees grown in the Berkshires, forever entomb her divine dashes. Ask a cuneiform scholar, a papyrologist, a codicologist, a bibliographer. The spirit is strong, but so is the flesh; books can never be separated from the circumstances of those bodies that house their souls. In A History of Reading, Alberto Manguel confesses as much, writing that “I judge a book by its cover; I judge a book by its shape.”
Perhaps this seems an obvious contention, and the analysis of material conditions, from the economics of printing and distribution to the physical properties of the book as an object, has been a mainstay of some literary study for the past two generations. This is as it should be, for a history of literature could be written not in titles and authors, but in the media on which that literature was preserved, from the clay tablets of Mesopotamia to the copper filaments and fiber-optic cables that convey the internet. Grappling with the physicality of the latest medium is particularly important, because we’ve been able to delude ourselves into thinking that there is something purely unembodied about electronic literature, falling into that Cartesian delusion that strictly separates the mind from the flesh.
Such a clean divorce was impossible in earthier times. Examine the smooth vellum of a medieval manuscript, and notice the occasional small hairs from the slaughtered animals that still cling to William Langland’s Piers Plowman or Dante’s The Divine Comedy. Houston explains that “a sheet of parchment is the end product of a bloody, protracted, and physical process that begins with the death of a calf, lamb, or kid, and proceeds thereafter through a series of grimly anatomical steps until parchment emerges at the other end,” where holding one of these volumes up to the light can sometimes reveal “the delicate tracery of veins—which, if the animal was not properly bled upon its slaughter, are darker and more obvious.” It’s important to remember the sacred reality that all of medieval literature that survives is but the stained flesh of dead animals.
Nor did the arrival of Johannes Gutenberg’s printing press make writing any less physical, even if it was less bloody. Medieval literature was born from the marriage of flesh and stain, but early modern writing was culled from the fusion of paper, ink, and metal. John Man describes in The Gutenberg Revolution: How Printing Changed the Course of History how the eponymous inventor had to “use linseed oil, soot and amber as basic ingredients” in the composition of ink, where the “oil for the varnish had to be of just the right consistency,” and the soot which was used in its composition “was best derived from burnt oil and resin,” having had to be “degreased by careful roasting.” Battles writes in Palimpsest: A History of the Written Word that printing is a trade that bears the “marks of the metalsmith, the punch cutter, the machinist.” The Bible may be the word of God, but Gutenberg printed it onto stripped and rendered rags with type “at 82 percent lead, with tin making up a further 9 percent, the soft, metallic element antimony 6 percent, and trace amounts of copper among the remainder,” as Houston reminds us. Scripture preached of heaven, but was made possible through the very minerals of the earth.
Medieval scriptoriums were dominated by scribes, calligraphers, and clerics; Gutenberg was none of these, but rather a member of the goldsmiths’ guild. His innovation was one that we can ascribe as a victory to that abstract realm of literature, but fundamentally it was derived from the metallurgical knowledge of how to “combine the supple softness of lead with the durability of tin,” as Battles writes, a process that allowed him to forge the letter matrices that fit into his movable-type printing press. We may think of the hand-written manuscripts of medieval monasteries as expressing a certain uniqueness, but physicality was just as preserved in the printed book, and, as Battles writes, in “letters carved in wood or punched and chased in silver, embroidered in tapestry and needlepoint, wrought in iron and worked into paintings, a world in which words are things.”
We’d do well not to separate the embodied poetics of this thing we’ve elected to call the text from a proper interpretation of said text. Books are not written by angels in a medium of pure spirit; they’re recorded upon wood pulp and we should remember that. The 17th-century philosopher Rene Descartes claimed that the spirit interacted with the body through the pineal gland, the “principal seat of the soul.” Books of course have no pineal gland, but we act as if text is a thing of pure spirit, excluding it from the gritty matter upon which it’s actually constituted. Now more than ever we see the internet as a disembodied realm, the heaven promised by theologians but delivered by Silicon Valley. Our libraries are now composed of ghosts in the machine. Houston reminds us that this is an illusion, for even as you read this article on your phone, recall that it is delivered by “copper wire and fiber optics, solder and silicon, and the farther ends of the electromagnetic spectrum.”
Far from disenchanting the spooky theurgy of literature, an embrace of the materiality of reading and writing only illuminates how powerful this strange art is. By staring at a gradation of light upon dark in abstracted symbols, upon whatever medium it is recorded, an individual is capable of hallucinating the most exquisite visions; they are able to even experience the subjectivity of another person’s mind. The medieval English librarian Richard de Bury wrote in his 14th-century Philobiblon that “In books I find the dead as if they were alive … All things are corrupted and decay in time; Saturn ceases not to devour the children that he generates; all the glory of the world would be buried in oblivion, unless God had provided mortals with the remedy of books.”
If books are marked by their materiality, then they in turn mark us; literature “contrived to take up space in the head and in the world of things,” as Battles writes. The neuroplasticity of our mind is set by the words that we read, our fingers cut from turned pages and our eyes strained from looking at screens. We are made of words as much as words are preserved on things; we’re as those Egyptian mummies who were swaddled in papyrus inscribed with lost works of Plato and Euripides; we’re as the figure in the Italian Renaissance painter Giuseppe Arcimboldo’s 1566 The Librarian, perhaps inspired by those stacks of the Malatestiana. In that uncanny and beautiful portrait Arcimboldo presents an anatomy built from a pile of books, the skin of his figure the tanned red and green leather of a volume’s cover, the cacophony of hair a quarto whose pages are falling open. In the rough materiality of the book we see our very bodies reflected back to us, in the skin of the cover, the organs of the pages, the blood of ink. Be forewarned: to read a book as separate from the physicality that defines it is to scarcely read at all.
Image: Wikimedia Commons
In October 2015, Hogarth Press from Crown Publishing launched the Hogarth Shakespeare project, an anticipated eight-part series in which best-selling authors retell a Shakespearean classic as a contemporary novel. Jeanette Winterson’s cover of The Winter’s Tale—The Gap of Time—was published first, almost exactly 400 years after the Bard’s death. Five more installments have since been released, with the final one—Gillian Flynn’s cover of Hamlet—expected in 2021.
Contemporizing a Shakespearean play is a fairly common undertaking. As the Hogarth Shakespeare’s website notes, Shakespeare’s works have frequently “been reinterpreted for each new generation, whether as teen films, musicals, science-fiction flicks, Japanese warrior tales, or literary transformations.” Reimagining a Shakespearean story can often be a contentious effort as well. Many critics note the difficulty of believably translating a Shakespearean conflict—written centuries before the study of psychology—into a modern setting. Supporters, meanwhile, will often point to William Shakespeare himself and his own practice of adapting and revising stories from various sources.
Regardless of one’s personal thoughts on Shakespearean adaptations, it is hard to overlook their significance to our cultural canon, from musicals like West Side Story and Kiss Me, Kate to films such as 10 Things I Hate About You and She’s the Man. Even staging the original texts in modern circumstances can be incredibly influential and timely, with the Public Theater’s recent production of Julius Caesar—in which Caesar was modeled after Donald Trump—being the most notorious recent example.
As a lover of both Shakespearean drama and contemporary literature, I am an ardent follower of the Hogarth Shakespeare project. However, my interest in the project stems not from a desire to see creatively adapted Shakespearean plots, but rather from an interest in seeing Shakespearean stories used to examine contemporary political, social, and cultural issues.
Each book in the series thus far has had varying success with this. Jeanette Winterson uses her cover of The Winter’s Tale to examine the devastating effects of hyper-masculinity and violence against women, as well as the normalcy of homoeroticism. Shylock Is My Name—Howard Jacobson’s cover of The Merchant of Venice—uses both Shylock himself and his modernized counterpart, Simon Strulovitch, to examine the past and present expectations of Jewish identity. In Anne Tyler’s Vinegar Girl, her cover of The Taming of the Shrew, the Petruchio character attempts to woo Kate into marriage so that he can avoid deportation. The most meta cover version—Margaret Atwood’s Hag-Seed—has the Prospero character produce The Tempest in a prison. This Tuesday marked the release of Edward St. Aubyn’s Dunbar—his cover of King Lear—which sees Lear reimagined as the head of an international media corporation.
Edward St. Aubyn’s novel, however, was preceded by Tracy Chevalier’s New Boy, her cover of Othello and Hogarth’s first modernization of a Shakespearean tragedy. Chevalier’s retelling takes place over the course of a single day on a predominantly white elementary school playground, in which Ian, the playground bully, schemes to break up the budding relationship between Osei—a new student originally from Ghana—and Dee, a popular white student.
When it comes to contemporizing Shakespeare, Othello tends to be considered one of the most substantial texts to view through a modern lens, generally accompanied by The Merchant of Venice. Although Shylock is presented as the villain of Merchant, the anti-Semitism he experiences allows for a modern writer to examine Shylock’s personal tragedy as a victim of discrimination. Meanwhile, although race is not specifically mentioned as incentive for Iago’s escalating schemes against Othello, the implicit racial politics of both Othello’s interracial marriage to Desdemona and his military success as a man of color provide plenty of contemporary subjects for a modern author to examine. In his recent piece for The New York Times, “Shylock and Othello in the Time of Xenophobia,” Shaul Bassi writes, “If throughout the 20th century ‘Hamlet’ and ‘King Lear’ vied for the title of most topical political allegory, in the new millennium ‘The Merchant of Venice’ and ‘Othello’ are the plays that make Shakespeare our contemporary.”
The fatal misstep of New Boy, then, comes from the fact that Chevalier chose to set her retelling in 1974 Washington D.C., only 10 years after the signing of the Civil Rights Act and, from Bassi’s perspective, before Othello truly became the relevant allegory that it is today. There are only a few choice nods to early 1970s pop culture such as hippies, Oldsmobiles, and Roberta Flack, and only one fleeting reference is made to Watergate (despite impeachment proceedings presumably taking place minutes away from the playground). Instead, the main aspect of New Boy that gives it a sense of time is the overtness of the racism that it exhibits. Teachers are frequently overheard discussing Osei, expressing their relief that he isn’t in their classroom, saying things like “This school isn’t ready for a black boy,” and commenting that Osei has given Dee “a taste for chocolate milk.” Osei and Dee’s teacher, a rumored Vietnam War veteran, functions primarily as a racist stock character, lashing out at Osei for minor infractions by calling him “boy” and telling him to watch himself. His arc culminates in the final pages of the book, when he drops the novel’s one predictable and unnecessary n-word, yelling at Osei to get off the jungle gym.
Chevalier’s descriptions tend to hinder her storytelling as well. The most significant example of this is in her characterization of Osei’s sister, Sisi, who has begun to follow the Black Panther Party back in New York. Her empowerment is described at one point as an “angry black girl performance,” and she is subsequently described as angry so often that her character appears flat and stereotyped. Chevalier’s writing can also begin to feel heavy-handed, with six instances in which characters start to call Osei black before stopping mid-word to correct themselves. Even with this occasional interruption, the word “black” is used so often that it begins to feel artificial and excessive. In Act IV, Chevalier writes:
[Osei] did not want to confront her, to have her get in his face, talking to him, telling more lies, treating him like her boyfriend, and then like the black boy on a white playground. The black sheep, with a black mark against his name. Blackballed. Blackmailed. Blacklisted. Blackhearted. It was a black day.
So how is it that Hogarth’s cover of The Merchant of Venice was so successful, while their Othello cover fell so flat? The answer appears to lie in the writers who were assigned to them. Howard Jacobson is a Jewish novelist best known for writing about the struggles of Jewish characters. Jacobson reportedly asked to cover other plays before being assigned Merchant, indicating that Hogarth thoughtfully assigned the play knowing that he would modernize it in a provocative way. Meanwhile, Tracy Chevalier is a white woman best known for Girl with a Pearl Earring, set in the Dutch Golden Age, which bears little resemblance to the conflicts of Othello. When asked in a promotional interview for Hogarth what attracted her to the text, Chevalier likened Othello’s otherness to her own as an expat living in Great Britain.
White writers opting to write about a time in the recent past when racism was more deliberate is not uncommon. Abandoning a nuanced discussion of micro-aggressions, structural and institutional racism, and white supremacy in favor of explicit and often dated racial language often simplifies the writing process, and keeps white audiences comfortable as they read. In a similar critique of Hollywood, Kara Brown noted in Jezebel last year, “Right now, Americans are only comfortable with a certain type of black person onscreen.” Although Chevalier occasionally hints at the possibility of a more complex discussion of micro-aggressions—the principal congratulates Osei on being “articulate” before telling the class to welcome him even though he is a “less fortunate” student, despite his father being a diplomat—she ultimately shies away from it.
Alternatively, in The Gap of Time, Jeanette Winterson uses her own background to add to her source material and intensify the text’s conflict. In The Winter’s Tale, King Leontes rather inexplicably believes that his friend King Polixenes is having an affair with his wife, Hermione. In The Gap of Time, Winterson—known for her autobiographical writing on LGBT issues—creates a previous affair between her Leontes and Polixenes, which, as Dean Bakopoulos points out in his New York Times review of the novel, “makes Leo’s overblown rage and irrational envy at the outset even more credible than it is in the original.” Therefore, although The Winter’s Tale isn’t usually listed with The Merchant of Venice and Othello as one of Shakespeare’s most politically relevant plays, Winterson’s unique additions make it a more successful adaptation than Chevalier’s take on Othello, which idly favors a more overt racism than what is featured in her source text.
The choices of writers in many cases have led to fascinating twists on Shakespeare’s works, namely Jacobson’s parallel Shylocks in Shylock Is My Name and Jeanette Winterson’s gay undertones in The Gap of Time. However, in a 2015 New York Times article detailing the Hogarth Shakespeare project, Alexandra Alter wrote that Winterson’s cover was “a promising start to an ambitious new series from Hogarth, which has assembled an all-star roster of stylistically diverse writers to translate Shakespeare’s timeless plays into prose.” As the series has gained more traction, it is hard not to notice the word “stylistically” here. Although the writings of the Hogarth team are stylistically varied, their biographies are less so. Three of the writers are American and three are British, leaving Margaret Atwood (who is Canadian) and Norwegian writer Jo Nesbø, whose cover of Macbeth is expected next year. All eight writers are white—five women and three men—with only one under the age of 50 (Flynn is 46), and three writers in their 70s. Although each author did achieve some success within their own adaptation, imagine how rewarding the series would have been had it featured writers whose backgrounds varied more drastically from Shakespeare himself. It is disappointing when a project aims to see “the Bard’s plays retold by acclaimed, bestselling novelists and brought to life for a contemporary readership,” yet the writers selected are not ultimately representative of all that contemporary society has to offer.
“All that he doth write / Is pure his own.” So a 17th-century poet praised William Shakespeare. This is not actually true.
Shakespeare was a reteller. Cardenio, also known as The Double Falsehood, which I’ve written about before for The Millions, was a retelling of the Cardenio episode in Don Quixote. As You Like It retold Thomas Lodge’s romance Rosalynde; The Two Noble Kinsmen comes from the Knight’s Tale in Chaucer’s Canterbury Tales, and Troilus and Cressida from Chaucer’s Troilus and Criseyde. The Comedy of Errors is Plautus’s Menaechmi with an extra set of twins. The Winter’s Tale retold Robert Greene’s novella Pandosto without the incest. Much Ado About Nothing is Orlando Furioso, although Beatrice and Benedick are original. King Lear, Hamlet, and The Taming of the Shrew may be simple rewrites of earlier plays. In fact, the only Shakespeare plays with original plots are The Tempest, A Midsummer Night’s Dream, Love’s Labour’s Lost, and The Merry Wives of Windsor. What makes Shakespeare, well — Shakespeare, is not his plots, but his language.
This month, Hogarth Press published the first entry — The Gap of Time by Jeanette Winterson — in a new collection of novels in which today’s leading novelists each rewrite one of Shakespeare’s plays. Tracy Chevalier will be retelling Othello; Margaret Atwood The Tempest; Gillian Flynn Hamlet; Edward St. Aubyn King Lear; Anne Tyler The Taming of the Shrew; Jo Nesbø Macbeth; and Howard Jacobson The Merchant of Venice. This is not a new endeavor, although it does seem to be a uniquely 20th- and 21st-century phenomenon. (The Romantics preferred to think of Shakespeare as an artless genius working under pure inspiration.) But as scholars have begun to recognize the extent of Shakespeare’s own retellings — and collaborations — modern writers have taken a page out of his book by rewriting his plays. (I’ll mention here the newly announced project by the Oregon Shakespeare Festival to “translate” Shakespeare’s plays into contemporary English, but that seems to stem from a different impulse.)
Perhaps this narrative is too simple. It is not as if, after all, writers in the last century suddenly discovered Shakespeare as a source and influence. For the past 400 years, Shakespeare’s poetry and plays have become as much a part of the common language and mythology as the King James Bible. In a sense, Noah’s flood is as much a foundational myth of our culture as the Seven Ages of Man. Like Marianne Dashwood and John Willoughby, we use Shakespeare as a way to understand and connect with each other. There is so much of Shakespeare woven into Moby-Dick, for instance, that the allusions and the words and the quotations feel like the warp and woof of the novel. The same could be said for just about anything by Milton, Dickens, Austen, Woolf, Frost, Eliot — in fact I could name most of the writers in the English and American canons, and, indeed, abroad. Borges, to name just one example, found in Shakespeare a kindred spirit in his exploration of magical realism; and Salman Rushdie’s definition of magical realism as “the commingling of the improbable with the mundane” is a pretty good description of some of Shakespeare’s plays — A Midsummer Night’s Dream comes to mind.
Let’s take, for an example, Woolf’s Between the Acts, her last novel. It is a book seemingly made entirely of fragments — scraps of literature spoken and overheard; parts of the village pageant, around which the novel centers, either omitted or the voices of the actors blown away by the wind; characters speaking to each other but failing to understand, or only managing to half-articulate their thoughts. In the midst of all this, Shakespeare is ever-present, a source for the poetry on everyone’s lips, inspiration for part of the pageant, and a symbol of what ought to be valued, not just in literature and art, but in life.
One of these piecemeal phrases that becomes a refrain in the book and in the consciousness of the characters is “books are the mirrors of the soul.” Woolf turns it around from meaning that books reflect the souls of their creators to meaning that the books we read reflect what value there might be in our souls. The person who is drawn to reading about Henry V must have that same heroism somewhere in him; the woman who feels the anguish of Queen Katherine also has some of her nobility. The younger generation of Between the Acts reads only newspapers, or “shilling shockers.” No one reads Shakespeare, although they try to quote him all the time. Shakespeare becomes a substitute for what they cannot put into words themselves, their “groanings too deep for words.” The worth of Shakespeare that emerges in Between the Acts is as a tap for the hidden spring in each of the characters that contains the things they wish they could say, the thoughts that otherwise they would have no way to communicate — instead of mirrors, books are the mouthpieces of the soul.
Shakespeare’s plays are a touchstone, and the way we react to them, the way we retell them, says more about us than about him. For example, Mary Cowden Clarke in 1850 created biographies for Shakespeare’s female characters in The Girlhood of Shakespeare’s Heroines. Each is made a paragon of virtue and modesty, reflecting Victorian morals and values. But Clarke was also co-opting Shakespeare for her own interest in women’s rights, using his stories of women with agency and power, and clothing them in Victorian modesty in order to provide an example and a way forward for herself and her female readers.
To take another example, Mark Twain retold Julius Caesar (actually, just Act III, Scene i) in “The Killing of Julius Caesar ‘Localized,’” but he used it to address the bully politics of his day. Shakespeare’s play becomes a news squib from the “Roman Daily Evening Fasces” and the title character becomes “Mr. J. Caesar, the Emperor-elect.” Twain’s Caesar successfully fends off each would-be assassin, “[stretching] the three miscreants at his feet with as many blows of his powerful fist.” The story also makes a claim about Twain’s status as a writer compared to Shakespeare: by mentioning Shakespeare as a supposed citizen of Rome who witnessed “the beginning and the end of the unfortunate affray,” Twain mocks the popular reverence for Shakespeare; he ceases to be a poetic genius and becomes merely a talented transcriber. But by doing so, Twain mocks himself as well; he is, after all, transcribing Shakespeare.
To turn to novels, I could mention Woolf’s Night and Day, Margaret Atwood’s Cat’s Eye, Robert Nye’s Falstaff, John Updike’s Gertrude and Claudius, Rushdie’s The Moor’s Last Sigh, and a long list of others. In a way these are their own type; rather than appropriating Shakespeare, or quoting or alluding to Shakespeare, they purport to re-imagine his plays. Jane Smiley’s retelling of King Lear is probably the best known. A Thousand Acres manages to capture the horror of Lear. It is modern in that there is no ultimately virtuous character. Cordelia, or Caroline, becomes as naive and blind and prejudiced as any other character, and Larry Cook’s strange relationship to his daughters and the way it blows up says less about power and pride and love and aging than about abuse and bitterness. It is both horribly familiar and also fits surprisingly well into Shakespeare’s play. It becomes part of the lens through which we now must view Lear. It enriches our reading of Shakespeare while also giving us a new view of ourselves. And oh is it a cold hard view.
For her entry into the Hogarth series, Winterson had first pick, and chose The Winter’s Tale, which she says has always been a talismanic text for her. In The Gap of Time, Winterson has written what she calls a “cover version” of The Winter’s Tale. It’s a jazzy, news-y retelling, set insistently in a realistic world. Whereas Shakespeare takes pains to remind us that his play is just a play, Winterson’s version emphatically tries to set the action in our own world. Hermione, for example, an actor and singer, has a Wikipedia page. Her acting debut was in Deborah Warner’s adaptation of Winterson’s novel The PowerBook, and she has performed at the Roundhouse Theatre in London. Leontes lives in London, where he is a successful businessman with a company called Sicilia, and Polixenes, a video game designer, lives in New Bohemia, which is recognizable as New Orleans. The characters are renamed with short, jazzy nicknames: Leontes becomes Leo; Polixenes is Zeno; Hermione is Mimi; the shepherd and clown who discover the lost Perdita become Shep and Clo. Only Perdita and Autolycus retain their full names. (Autolycus is the best translation in the book: he becomes a used-car salesman trying to offload a lemon of a DeLorean onto the clown.)
Shakespeare’s play is focused almost equally on the parents’ story and then the children’s, but Winterson’s focuses almost exclusively on the love triangle between Zeno, Leo, and Mimi. Whereas Shakespeare leaves open the possibility that Leontes may have some grounds for jealousy (though if we believe the oracle of Apollo, no room for the possibility of Hermione being guilty of adultery), Winterson is explicit that a love triangle does exist, but she inverts it. It is Leo who loves both Mimi and Zeno, Leo who has slept with both. And it’s clear that though Mimi chose Leo, there was a distinct connection between her and Zeno. Winterson even takes a hint from Shakespeare’s source in Pandosto and makes Leo consider romancing Perdita when he meets her. “As someone who was given away and is a foundling, I’ve always worked with the idea of the lost child,” Winterson has said. The part of Shakespeare’s tale that spoke to Winterson was the origin story, why the child was lost.
Shakespeare’s play, because it doesn’t insist upon existing in a realistic world, is full of wonder and mystery. It’s that magic that happens when you hear the words “Once upon a time.” The closest Winterson’s version gets to that place is in the scenes that take place inside of Zeno’s video game, when Zeno and Leo and Mimi play themselves but also become something a little grander, a little wilder, a little more numinous. But there is little of Shakespeare’s language present. Winterson’s The Winter’s Tale is as much a retelling of Pandosto as of Shakespeare.
Why do we return again and again to Shakespeare’s plays, why do we keep rewriting them? Is it in hope that some of his genius will rub off? Are we searching for new possibilities for interpretation, hoping to mine new ore out of well-covered ground? Or are we going toe-to-toe, trying our strength against the acknowledged genius of English literature? Perhaps it is simply that creativity is contagious. When a piece of art inspires you, it literally in-spires, breathes into you. It makes us want to create new art. Or, maybe it’s a more basic instinct. From the beginning of our lives, when we hear a good story, a story that as Winterson says becomes “talismanic” for us, what do we say? “Tell it again.”
Do you remember your last English teacher? Did he use colored chalk to diagram William Faulkner’s periodic sentences? Did she stand in the back of the room and read a poem from Denise Levertov, most of the words pushing past your ears, but a few, like “Aren’t there annunciations / of one sort or another / in most lives?” remaining like a refrain? Or was he forgettable, distributing misspelled study guides for The Merchant of Venice before playing a tired cassette recording?
Think about your last year in a high school English classroom. The uneven rows. The loud radiator. The re-used posters, corners double-taped. You were 17, 18. Your mind and your heart were elsewhere. That tension between distraction and focus is healthy. If we do not wish that we were somewhere else, doing something else, the collective, focused breath on a single line of a poem would not be so sweet. Back then, you were full of cynicism and hope. What a mixture: your wounds and joys felt so sharp.
I tell my seniors that I will likely be their last English teacher. They are enrolled in AP Literature and Composition, a difficult course that builds toward a three-hour exam. When I began teaching the course a decade ago, my classes were nearly half the size. Most students were hoping to major in English or philosophy. Now, out of two class sections of 55 students, it is a surprise to have three future English majors. I am lucky that they are no less talented and driven. They are ready to work.
I realize that my situation is unique. My students often attend the most competitive colleges in the country. In order to do so, their high school schedules are strained. They are expected to perform highly in several intellectual disciplines a day, with only six minutes to move between classrooms within an enormous outdoor campus, and 40 minutes to be teenagers at lunch. Still, they are very fortunate. They have the support of the community and district. They are good kids. Curious kids, who stay with me when we examine the difference between mimetic and textual voice in Faulkner’s As I Lay Dying, or parse Wallace Stevens’s “A Comment on Meaning in Poetry.”
I tell them that I am their last English teacher because many of these students will place out of composition or literature in college. They will spend the next four to eight years busy studying, and will go on to successful careers in medicine, law, and business. No one else will ever read them a poem by Gerard Manley Hopkins.
You may think this is melodramatic talk. I admit that teachers were born to perform. We are actors without stages. In a recent interview, poet Paul Muldoon said that most students struggle with poetry because of how it is taught in high school: “What’s usually happening is that the instructor, the teacher, is at pains to show what an extraordinary instructor or teacher he or she is, and the message I think that far too many of us get in high school is that poetry may only be read if you’ve got that instructor or teacher to show you what it’s really about.” I have heard this lament before, and it is not only about poetry. High school English teachers are responsible for flattening literature. We kill books.
These constant, unfounded digs are what cause teachers to be defensive. Teaching in an American public school is an idealistic act. Politicians would have you believe we are an insufferable bunch, our chests full of blind union pride, tenure our ticket to stasis. English teachers, second perhaps only to editors, live their days surrounded by the hopes, fears, eccentricities, and failures of generations of writers. Those words, classic and contemporary, seep into our souls. Why teach Beloved if you do not close your eyes and feel 124 shaking; if you do not feel your own heart shaking? That sensitivity bleeds out of the classroom.
In the latest round of testing frenzy, English teachers are unique targets. We teach the essential skill of communication — the ability to turn students’ feelings into spoken and written words — yet English is considered a light discipline compared to the rigor of sciences. I am not sure what an English teacher is supposed to be now. (I say that out of one side of my mouth; I strive to exceed the expectations of my district and state curriculum.) I mean in terms of my identity as an English teacher. I used to be considered a mentor. During my first few years as a teacher, I kept the prayer of St. Francis in my pocket: “grant that I may not so much seek to be consoled as to console; to be understood as to understand; to be loved as to love.” I was only years removed from almost entering a seminary to become a priest. You never lose that call; it simply takes another shape. My shape was a room with 28 desks and a chalkboard.
I teach every class like it is my last. It could be. When I started teaching, I thought my purpose was to create a legion of English majors. I have learned that my purpose is to pause the lives of my students for long enough that a line of poetry is the loudest sound they encounter during the day. I am uninterested in studies that assess the cognitive worth of reading poetry for future engineers. I don’t teach engineers. I teach people. My master is not a test; it is the belief that minutes reading beautiful language will stir souls. I want my students to see that words are sacraments, in the same way that Andre Dubus said each sandwich he made for his daughters was a sacrament: “physical, nutritious and pleasurable, and within it is love.” It is possible to be cold-hearted and teach, but why do so? Students experience enough private pain some days to fill a lifetime. Literature can be the salve for a weary heart. I do not mean directly; I do not think literature is a form of therapy. I mean that books enable students to experience an extraordinary range of emotions in 180 days.
Most literature we read will pass from their memory. Some works will stick. One poem might change them. It is a beautiful possibility that such an epiphany can occur in as mundane a place as a classroom. That same hope keeps me from burning out in a profession that is as exhausting as it is exhaustive. I hate how teachers are portrayed by politicians and education reformers; I hate how we are reduced to caricatures. But I keep that frustration from my students. After all, it is for them that I am here. I believe in them, and I believe in words; I better believe in both, because I might be somebody’s last English teacher.
My high school’s theater department put on two Shakespeare plays a year, and when I was old enough to audition, I ran to the front of the line – not to read for the part of Juliet in that year’s headlining Romeo and Juliet, but rather for her lesser-known and much more intoxicating complement, the lady Beatrice in Much Ado About Nothing. Miraculously, I got the part. At the time, I was young and knew little of the play save the recent Kenneth Branagh-Emma Thompson adaptation, but quickly found myself madly in love with this character: a strong-willed, funny, independent wordsmith. For years, I envisaged Beatrice and her ilk as exemplars of female empowerment in literature and theater, and yet while I’ve personally fixated on the Beatrices that have populated the centuries, I’ve done so because it is clear they were the exceptions to the rule. The rule, in fact, was Juliet. It was Beatrice’s younger cousin, Hero. It was Bianca and Disney princesses and anything that presented an ingénue as a leading lady. Sadly, for every Scarlett O’Hara, there is a Melanie Hamilton offsetting an absurdly independent protagonist. Clearly this paradigm is what has propelled literature forward, but lately, as I’ve explored my bookshelves, it seems as though this requisite stock character, as antiquated as its stock cousins, is finding its way off the pages of great novels, leading me to believe that she has been graciously euthanized by literary fiction. And thankfully so.
The ingénue in contemporary fiction is a powerful mirror in which society is reflected, and her notable absence is indicative of an ambitious thirst for change. That this gradual evolution on the page is sadly not yet matched off the page, for female writers, female politicians, and female business leaders, makes the shift all the more significant. The paltry representation of women writers in reviews and literary journals, as documented by the latest VIDA counts, lays bare the problems women writers face in representation, coverage, and reviews, and there is much work to be done to establish equality. Yet this lack of real estate does not mean that there is a deficit of powerful female characters written today.
When looking directly at the content of contemporary fiction, however, I am as excited as I was when I got the part of Beatrice back in the mid-1990s. Writers, both male and female, are creating strong, authentic characters who can stand on their own. There may be criticism on the outside, but directly on the page, this glorious affirmation of strong-willed women drives me as a writer, as a lawyer, and as a woman, to know that we are represented on the page, whether instantly likable or not. As an aside, perhaps the hotly contested debate currently surrounding this question in fact hinges on the lack of ingénues populating today’s great novels. A simple glance at titles reflects this: The Woman Upstairs, Look At Me, Gone Girl, State of Wonder, On Beauty, We Need to Talk About Kevin, The Hours, and even in non-fiction with Sheryl Sandberg’s Lean In. None of these books apologizes for anger, frustration, strength, manipulation, power, emotion, sensuality. And mostly, none of these books requires a supporting ingénue waiting in the corner, ready to play foil to a Lizzy Bennet or Jane Eyre or even Catherine Earnshaw.
In contemporary society and fiction, women run companies, perform surgeries, and question their desire to even have children. Dr. Marina Singh and Dr. Annick Swenson battle wits in the South American jungles of State of Wonder, almost inverting the stereotype by making an ingénue out of the missing male doctor, Anders; Eva Khatchadourian begs people to question her “traditional female values” by often wishing she never had a child in We Need to Talk About Kevin; and women of three generations dominate The Hours, portraying this very evolution of the literary female character in a single brilliant narrative. I could continue to list the novels, but it would probably exceed my word count, so instead, it’s probably better to review how we got to this point.
It’s not that strong women were absent from literature in the past, but rather that they were welcomed with antithetical reception: if not written amongst a flock of female stereotypes (read: “the villain,” “the mother,” “the nurse”), they may have needed the ingénue as a foil to the less commonly recognized strong women of the time. In contemporary culture, however, no one denies the presence of strong, successful, complex women in every facet of society, and likewise, readers are not shocked when they turn up in great literature. It is simply that contemporary literary fiction portrays a realistic society, so that ingénues are no longer needed within the texts — as foils or otherwise.
When looking back at some of our most beloved “strong women in literature,” from Shakespeare to Victorian England to the early 20th century, almost none of these women is allowed to exist on her own, almost as if the supporting ingénue (or another stock female character) must balance the strong woman so that society may rest. This seesaw of female identity, so often portrayed in literature of the past, seemed necessary in order to propel forward movement. With the rare and special woman on one end and the stock female (usually the ingénue) on the other, their interaction pushed the story forward, enabled the game of wits to persist, and flexed the narrative into motion.
Beatrice, the gloriously witty, self-effacing, proud bachelorette of Much Ado About Nothing, vows never to marry and is teased, mocked, and pitied as a result, countered by the requisite companion ingénue in the banal Hero. Kate of The Taming of the Shrew, whom we all know and love as the girl who just didn’t want to fit in, is deemed eponymously shrewish for her unabashed expression and is, of course, neutralized by her ingénue of a sister, Bianca. Portia, the brilliant heiress of The Merchant of Venice, stands initially as a stellar example of intelligence, power, and leadership, but in order to exercise those gifts, she must impersonate a man. Although pillars of force, these women cannot be fully portrayed without a veil of disbelief, whether by unrivaled presentation beside a flattering ingénue or by the forced portrayal of a man, so that the societal equilibrium of the time is restored.
Fast-forward to early 19th-century England, not far from the domination of yet another female monarch, and strong women in literature are still not singularly permissible. Elizabeth Bennet of Pride and Prejudice, the presumed model of the era, is a wonderfully suspicious, intelligent representation of female strength, yet still must be presented beside her exhaustively ingénuesque sisters, so that we all know how rare and special a creature she is. Lizzy Bennet is sublime, and I share a name and nickname with her, so I can’t help but beam with pride whenever she is listed amongst the feminist wonders of the literary world; but the sad truth is that she is so well cited because she is the outlier. Society does not yield a sea of stereotypes in order to home in on a strong woman, nor should literature require this pool of ingénues, out of which we may select and conclude that, indeed, Ms. Bennet is different.
Even in late 19th- and early 20th-century literature, women who battled this stereotype were plagued with depression and expropriated labels. In England, Virginia Woolf wrote of depression and isolation, while in America, Charlotte Perkins Gilman openly divorced her husband, but not before writing about post-partum depression in an incisive story that had never been seen before on the page. Sadly, these women committed suicide, and their autobiographical roles were neither accepted nor found credible by the male literary establishment, holding up yet another mirror to their times. Their characters, however, have lived on, refusing to succumb to literary archetypes. Had they been written as ingénues, they would have evolved into that other stock character of “the madwoman in the attic.” Unfortunately, by shedding the label of ingénue and refusing to share their scenes with a classic ingénue, these characters and their architects met a tragic end.
Now, however, strong female characters reign aplenty in literature without their necessary ingénue escorts, slowly eroding the role of that stock accompanying character. It’s not that these strong female characters newly exist, or that they suddenly gained mass appeal, but rather that they are surviving on their own. They are flawed, beautiful representations of women that provide depth, understanding, and sympathy, regardless of their periodic unlikeable actions. They bear their identities proudly, and never require an accompanying convention to confirm their individuality, so that the role of the primary and supportive ingénue is no longer required.
I recently went to hear Isabel Allende speak about her latest novel, Maya’s Notebook. At the Q&A, a young aspiring female writer rose to ask a question that surprised a majority of the audience. “You write a lot of strong women in your books,” she said, before asking, “Has there been anyone who has influenced you?” Allende either didn’t understand the question or wanted to emphasize the lunacy of it, and after three attempts replied: “Do you know any weak women?” Needless to say, a resounding uproar of applause emerged from the previously unobtrusive audience. This is not a topic that is far from the consciousness of the literary establishment, nor is it one that should be. It is so prevalent in people’s minds and hearts precisely because of its relevance. Readers don’t want to see any more ingénues or stock characters. They want to see the people that they know, the strong women who populate their lives, because, as Isabel Allende so bluntly and perfectly stated, there really aren’t weak women.
I’m not naively suggesting that contemporary fiction has conclusively banished the ingénue from its pages; nor am I claiming that the character is close to her coffin in certain genres, but I am suggesting that she should be. Fiction, like any vital art form, serves a purpose to reflect society in its emotional, environmental, and political nuances. It informs us, teaches us, reflects humanity in its reverie. If the ingénue, which may be dying in literary fiction, begins to fade in all genres of contemporary literature, if we accept the evolution of the young female protagonist in literature, we may stop expecting women off the page to play that stock role, as well. By exiling the word to the trash bin, or perhaps feeling a little bit guilty whenever it is used, we might continue to represent women as they are – likeable or not. Powerful characters who sometimes want love, sometimes want power, who ache with ambition and passion, and who refuse to be called ingénues, or any other pile of stock stereotypes. They are merely women who need no other label.
Several years ago a friend of a friend of mine received free tickets to a new production of The Taming of the Shrew in Washington, D.C., and made the unfortunate decision of bringing me along. I grew dismayed as the play progressed, believing that it was perhaps impossible to reinterpret a play so rife with misogyny. I’ve listened to the many reasons people have provided for why this play is actually a critique of patriarchy, how the final scene is so obviously repellent that it is impossible for anyone, least of all Shakespeare, to be condoning these values. In researching others’ opinions, I found a review of Conall Morrison’s 2008 version of this play for the Royal Shakespeare Company by Peter Lathan, who stated, “In our modern political correctness we tend to think that Shrew was a play about keeping women in their place, just as we relate Merchant (the companion piece to Shrew in the RSC Theatre Royal season) to anti-Semitism, but that, perhaps, says more about contemporary preoccupations than it does about Shakespeare, for certainly Conall Morrison’s Shrew is more about status than misogyny.”
In my mind, the fact that some people today still find women’s and minority rights to be mere “contemporary preoccupations” rather than actual human rights issues makes the issue of lauding or critiquing a new interpretation of an old play especially slippery. Generally speaking, new versions of older literary works strive to do one of two things: exalt the original author’s story, or else try to save it from the weight of its own history. I have always been particularly confused by some feminists’ desire to reignite old stories with female characters or else reinvent female characters from days of yore. We have so few new stories that delve into current female experience that taking the time to further empower these older works seems to actually reinforce the notion that literature is a man’s world, and that the most women can do is amend these staple stories, rather than writing new works of their own.
Tim Burton’s new version of Alice in Wonderland is in some ways a feminist dream. It has a screenplay written by a woman, Linda Woolverton, who strives to provide her audience with a self-actualized Alice, an Alice who is a warrior rather than a princess. In this new chapter, Alice is 19 years old and at the mercy of a decidedly anti-feminist Victorian age, in which her main option in life is marrying an unimaginative bore of a Duke who, his mother warns Alice, has “digestive issues.” Rather than heed the sage advice of her mother, Alice refuses to don a corset and begins chasing a real-life rabbit she seems to remember from her dreams. She falls deep down the rabbit hole and ends up in “Underland,” welcomed by several talking animals, all of whom want her to be the champion who fights the terrifying Jabberwocky and, in doing so, defeats the evil Red Queen.
Perhaps on its own this would actually be a fantastically good story. The problem is, it bears little or no relation to the actual text of Alice in Wonderland, which is not a fantasy or action-adventure novel, but a small and clever little book, filled with imaginative puzzles, rhymes, word games, and mathematical problems, much more akin to a female version of The Phantom Tollbooth or Harold and the Purple Crayon than Star Wars or The Lord of the Rings. The original Alice was neither a princess nor a warrior; she was a little girl. The book is actually refreshingly free of gender stereotypes. Alice is portrayed as smart and imaginative, filled with wonder at the world around her, but the focus is never so much on Alice per se as it is on the world itself. In some ways, the wonderful thing about Alice in Wonderland is that it provided girls with a story which centered around their perspective of a fantasy world, but could ultimately be relatable to a little boy as well. By drawing more attention to the gender norms of Victorian England, Woolverton actually creates issues of sexism which never existed in the original edition.
This decision by Woolverton and Burton is a shame for a variety of reasons. First, because there is nothing interesting or controversial about showing that Victorian women were dealt a tough hand, and as such, there doesn’t seem to be a compelling reason to force this particular trope onto this particular story. Second, it is reductionist. Why is it we have to see a woman play the role of a classic warrior in order to view her story as important enough to necessitate a big blockbuster movie? Lastly, it simply obscures the small joys that come from reading the original work. Many of Tim Burton’s films effectively capture the bizarre and otherworldly language of childhood; Alice, in contrast, seems like a composite of typical CGI images, chase scenes, and the requisite action sequences that pop out of the screen but fail to leave any sense of haunting after the credits roll.
In the end, I find myself yearning for visions of female agency which are neither critiques of a patriarchal past, nor visions of an equally patriarchal future, wherein women are only valued if they are as tough and warrior-like as their male predecessors. Perhaps Carroll’s original story worked because it wasn’t about what it meant to be a woman at all. Instead, it was about a particular girl and her particularly curious adventures into a world of nonsense so unique there still hasn’t been a film version that has really done it justice.