The Time I Opened for Bon Iver: On Allowing Failure to Flourish

Swinging wide the door to The Local Store in Eau Claire, Wisc., I found it hard not to feel like a rock star. Not because I am one (What’s the opposite of a rock star?), but because, due to the store’s back-to-back scheduling of events, I was opening for Bon Iver.

All right, that’s not entirely true. Probably not even a little true. What I mean to say is that a song I’d written—the only song I’ve ever written—was being debuted alongside a few other first-time songwriters’ attempts directly before a pre-release listening party for Bon Iver’s latest album, i,i.

My elevation to opening-act status was not made at Bon Iver front man Justin Vernon’s request but rather, I imagine, to his horror. Nothing like a tune from B.J. “What’s a Flugelhorn?” Hollars to get the party started.

I’d have never written the song in the first place were it not for a solicitation from a local middle school chorus teacher, who gathered together a few writers and promised to set our words to music. Initially, I’d demurred (“I have too much respect for music,” I’d said), but she insisted. And so, we writers momentarily set aside our novels and our essays and tried to determine what a song was supposed to sound like. I had some vague notion, of course, but given my own defection from music following a two-week stint of fifth-grade trumpet, I was feeling vulnerable in a new medium.

And that vulnerability felt glorious.

What a rare gift to be given permission to fail. Or rather, to spin it more optimistically, to “succeed to a lesser extent.” While writing the song, I’d set the bar so low that even I could limp over it. The product hardly mattered. What mattered was the process; specifically, my attempt to take the fundamental skills I employed in other literary genres and apply them to lyrics.

The resulting song—thanks to the talented chorus teacher—wasn’t half-bad. In fact, I’d be lying if I said I didn’t feel a fleeting hint of pride. Not for what I’d done, but for what we’d done together. We took a risk and the risk paid off.

In the hour or so leading up to Bon Iver’s listening party, 40 of us took our seats in The Local Store for a sing-along from our selection of newly composed songs. As the program neared its end, I turned to spot a roomful of fans anxiously awaiting the Bon Iver listening party, fans who hadn’t intended to hear a B.J. Hollars original but may have heard one anyway. And lived to tell of it.

The idea that an amateur like me can share a “stage” alongside a Grammy Award-winning band like Bon Iver (even if only by way of a listening party) speaks to one of art’s greatest gifts: its potential for inclusion. From finger painting to a gallery exhibition at The Museum of Modern Art, it’s possible to carve out space for us all. And sometimes, if the scheduling works in your favor, even the same space.

That night, while leaving The Local Store, I watched the Bon Iver fans descend upon the room we’d just vacated. Within minutes, they’d be circled around a record player in that darkened room, heads bowed as they did that rarest of 21st-century things: sat silently together. The scene was repeated at 59 other locations worldwide, thousands gathering for a first listen of Bon Iver’s latest effort.

Within hours, reviews for i,i began pouring in. Most were positive, with the exception of a scathing two-out-of-five-star review from The Guardian’s Ben Beaumont-Thomas, who called the album a “misfire,” claiming Vernon’s melodies were “uninspired” and accusing the musician of, on a few occasions, “squat[ting] in his comfort zone.” As for Vernon’s lyrics, Beaumont-Thomas remarked that the songs “feature a few lucid pleas for understanding…amid acid-addled sermonizing.”

Thank goodness he wasn’t there for my opening act, I thought.

For me, Beaumont-Thomas’s final line left the deepest cut, the reviewer arguing that if Vernon is “trying to build Bon Iver into a mini-utopia of shared values,” then “he needs to be a stronger leader.”

To critique i,i on its artistic merit is one thing, but to critique Vernon’s well-known effort to create and support a robust community of artists and musicians—a community, I’ll add, in which even a guy like me can occasionally sneak in the back door—seems a rather low blow. If Beaumont-Thomas means—as I think he does—that Vernon’s so-called “burgeoning hippy paradise” has corrupted his musical vision, then I suppose we can agree to disagree. But for my part, I prefer to think of Vernon’s community of artists as something larger than any album, more vital than any song, and more empowering and inspiring than any review.

While it’s true that my songwriting “career” began and ended with a single song, that song wouldn’t have been written at all were it not for the chorus teacher’s willingness to populate her own “mini-utopia” with imperfect “songwriters” like me. In doing so, she reminded me that vulnerability can serve as rocket fuel for the muse. It doesn’t guarantee greatness, but it guarantees growth.

As all artists know, it’s easy to create the safe thing, but it’s courageous to take the risk. Which is why the art that I admire most invites failure at least as often as success. Leave it to the critics to tell us whether or not we pulled it off. “While they are deciding,” as Andy Warhol famously said, “we’ll make even more art.”

Image credit: Wikimedia Commons/Daniel J. Ordahl.

The Long, Winding Road to Publication

In 1999, I got the phone call every writer dreams of when my agent rang to tell me that William Morrow had made an offer on my first novel, In Open Spaces. The advance was very modest, but I didn’t care. I had been trying to get published for almost 10 years and hadn’t even managed to place a short story yet. Like many writers, I’d had my share of close calls, the most significant one being when I’d had an internship at the Atlantic Monthly and the fiction editor there, C. Michael Curtis, read the first few chapters of In Open Spaces and liked it enough to offer to help me find a publisher. The fact that someone of his stature couldn’t find someone to publish the book was one of many indicators of how challenging the publishing world can be. But eventually, I met the right person and my book found a home.

I’ve met enough writers in the 20 years since to know that my story is not all that unusual, although I would challenge anyone to match my record of seven different editors for a first novel. About the time the novel was supposed to come out, Morrow was swallowed up by HarperCollins. My editor was laid off, and I was left in limbo for three and a half years, passed from one editor to another—one of whom called to tell me how much she loved memoirs—wondering whether it would ever see the light of day.

But it did, and it actually made a small splash, receiving a starred review from Publishers Weekly, and several other good reviews, including one in The New York Times. After all those many years of wondering, the reception made me feel vindicated, to the point that I developed a bit of an attitude. I often refer to this as my “Everything I Have to Say Is Absolutely Fascinating” phase. Harper paid for a book tour, and I was convinced this was going to be my life.

I had apparently learned nothing from the parade of editors that left the company while In Open Spaces hung in the balance. I sent my editor at Harper one of the other novels I had written, but he was less than enthused. It was a complete departure from the themes of In Open Spaces, which is historical fiction based on my homesteader grandparents in Eastern Montana. So he asked whether I had anything similar. I had just started working on a sequel, so I sent him the first few chapters.

He was thrilled, and made me a pretty good offer ($20,000) for the book that would eventually become The Watershed Years. I was happy, but decided to push for a couple of things. The first book came out as a quality paperback, and like most writers, I had hoped for a hardcover, so I asked whether he thought that was possible. I also asked if they could raise the advance a bit.

He told me that he thought he could get me those things once the book was finished, but that I would have to turn down this initial offer in the meantime. I talked it over with my agent, who recommended I take him up on it, so I turned down the offer. Of course that editor left the company before I finished the novel. They passed The Watershed Years along to an editor unfamiliar with my work, who turned it down.

I was naturally very disappointed, but I also just assumed that my track record for In Open Spaces would open doors to another publisher. What I didn’t realize is that once you’re dropped by a major publisher, it’s almost impossible to move up to that level again if you are not a Big Name. My first book had sold more than 10,000 copies, which was good for a first novel, but not enough to put me into the category of a sure thing.

So for the next 15 years, I had to scramble to find a publisher for each and every book I finished. I was fortunate enough to land some very good agents, including one at Writers House and one at Brandt & Hochman. But none of them ever got me a book deal. One of the unusual things about my career is that I’ve now sold seven books and no agent ever found the publisher for any of them. It was always a friend—or it was me.

I ended up working with whoever would have me, in most cases regional publishers in Montana. And I have nothing negative to say about any of those people. For the most part, they were dedicated and attentive, much more so than anyone at Harper ever was. But the downside is that your books never get the kind of exposure and distribution that a major publisher provides. As much as I enjoyed working with these smaller houses, I would have gone back to Harper in a second, and I tried to get my foot back in their door many times.

Oddly enough, after one of my books started selling fairly well, the publisher that was distributing it asked if I could get in touch with Harper about obtaining the rights to In Open Spaces. I figured that would be easy since the book was only selling a few hundred copies a year. But to my surprise, Harper declined; the book was still doing well enough that they wanted to hang onto the rights.

Meanwhile, my books continued to get great reviews. Two of them were finalists for the High Plains Book Award for fiction. But that made very little difference when it came to finding someone to publish my next book. So in desperation, I launched a Kickstarter campaign to self-publish two novels for which I hadn’t been able to find a publisher. That campaign raised a healthy amount, and during the process, a Facebook friend, Steven Gillis, founder of Dzanc Books, asked me to send him the novel that would eventually become Cold Country. I thought little of it because I was accustomed to getting my heart broken. But to my surprise, Michelle Dotter, the editor for Dzanc, contacted me with an offer to publish Cold Country.

I have given a lot of thought to those 15 years, and what I learned from that huge mistake of turning down the offer from HarperCollins. I’ve wondered why I would have been so willing to subject myself to being treated like a commodity, as the major publishers tend to do, rather than working with people who value your work for what it is. And one thing became clear. It’s not the money, although that certainly helps. It’s more a matter of being taken seriously, of having your efforts validated. It’s about avoiding that feeling of meeting writers you admire and having them dismiss you because you’re an unknown author. I can’t even begin to count how many times I’ve experienced this, and it’s an awful feeling.

Eventually, as an artist, you have to listen to that small voice in your head—the one that reminds you about the starred review and the review in The New York Times. That you are still that same writer.

Michelle Dotter proved to be an amazing editor. She came up with a brilliant solution for a narrative problem that had plagued me for years on Cold Country. And just last week, the first review came in, from the notoriously tough journal, Kirkus Reviews. It was a very good review. And I am ecstatic. And it could mean nothing. This could be the only review I get. But it is the first national review I’ve had for any of my books since In Open Spaces, and that alone has meant the world to me.

I will always aspire to greater things. More exposure. Tours paid for by the publisher. All of that is seductive. But, in the end, I needed to figure out if I could live with being a regional, mid-list writer. Is it possible to be happy as someone who has a small, loyal following? As a matter of fact, yes it is.

Image credit: Unsplash/Jesse Bowser.

My Chernobyl

☢ ☢ ☢ a tv show

In an early scene of the recent HBO mini-series Chernobyl, the local Pripyat town council is called to a meeting. By this point in Soviet history, these meetings of official Communist Party functionaries—like those portrayed in the scene—served no purpose beyond employing whatever means necessary to save the Party embarrassment from another ideological failure. The set-up is borderline comic, appropriately so, with a tableful of actors portraying crusty, back-slapping Soviet nomenklatura—honest-to-goodness bad guys—all too ready to swallow the demonic proposal offered by the crustiest Party hack in the room: Shut down the town. Nobody in or out. Cut the phone lines. We can’t allow a general panic. Particularly, the old man insists, since there’s nothing to panic about.

I wondered, though, whether Ukrainians and Belarusians watching Chernobyl wouldn’t guffaw their borscht out their noses at another “Hollywood” attempt to dramatize their aching history. More cultural hash, devoid of nuance, stuffed with comic book Soviet citizens. Had HBO screwed up, casting Anglophone actors who wouldn’t know a Ukrainian from a Taresian from the Delta Quadrant? The actors around that table were not the unflappable, taciturn eastern Slavs I’ve spent half my life among. Not even close.

I was pretty sure of myself: In giving life to Chernobyl, HBO, in a gloriously unintentional blast of irony, had birthed a mutant. A flop. It would sink like a pebble in a pond. Too windy. Far too nuanced for the 280-character generation. And anyway, ancient history. 1986? Pre-internet. People wouldn’t care. For proof, look to the five million Kyivites living within 80 miles of the Chernobyl dead zone, our city hyped by an endless string of millennial puff-pieces about “Kyiv: The New Berlin!”—how bad can the damage be, really? If nothing else, the series would fail because as an internet troll once scolded me: Ukraine is irrelevant. By writing about it I was just promoting American hegemony—a CIA acolyte, a baby boomer stooge pining for the Cold War and looking to disparage Marx.

An assertion that was, of course, as ignorant as it was beside the point. I love Marx. But the exchange did provide a delightfully ironic rendering of what happens when an ideologue bastardizes an otherwise worthy piece of technology—be it the internet or a nuclear reactor—to serve parochial interests.

Back on point: Admittedly, the Chernobyl series faced challenges. People would rather forget. Nuclear physics is hard, and conversely, easy to ignore. Ukraine’s new president, a comedian by trade (there is no joke in this sentence), is currently prodding the Ministry of Culture to turn the Exclusion Zone into a “Tourism Magnet,” and I wish I were kidding about the formulation he chose. Ukrainians who earn their bread and board in the cultural sector have largely adopted an exasperated pose toward the subject of Chernobyl—it’s boring. Insignificant. Ukraine has so much more to offer. A sentiment that oddly recalls Cousin Eddy in National Lampoon’s Christmas Vacation describing “the Yak Woman” at a local carnival: “She’s got these great big horns growing right out above her ears. Ugly as sin but a sweet gal, and a helluva good cook.” Chernobyl casts Ukraine as Europe’s Yak Woman. And if radioisotopes have anything to say—and they do—it will continue to serve in that role for the next 10 millennia, give or take.

☢ ☢ ☢ the history

Here’s the gist: In the early morning of April 26, 1986, Reactor #4 at the Chernobyl Nuclear Power Plant in Soviet Ukraine exploded—the worst nuclear disaster in European history. It came about because a young Ukrainian engineer, Sasha Akimov, did exactly what his boss told him to do and punched the malfunctioning reactor’s big red OFF button.

There’s more to the story—and the HBO series tells it well, with clarity that few physics professors could match—but that punched button would force the permanent evacuation of 350,000 people, kill tens of thousands more, embitter tens of millions, contaminate the greater part of Europe with random showers of radioactive fallout, poison sheep in Britain and Sami tribes in Norway, take a solid $225 billion bite out of an already flailing Soviet economy, and provide a fatal kick to the groin for the USSR.

In the 33 years since, the accident has also resulted in a shitstorm of mendacity, double dealing, and unprecedented opportunity for fiscal corruption in a state celebrated the world over for its genius-level capacity for graft fueled by disinformation. It has produced enough scientific and sociological studies to level a forest, grist for Ukraine’s competing hard-right political factions, and helped fashion at least two nation-states that suffer from chronic, somatically mutated socio-political cynicism.

☢ ☢ ☢ at home in Kyiv

By accident of history I am better positioned to spot the little flaws in the Chernobyl production—the rare anachronism, the even rarer info-dump passing itself off as dialogue, and the very non-Slavic staginess in some of the acting. An example of the latter: In one scene the male lead—in what is an otherwise knee-buckling performance by British actor Jared Harris—stands alone at a bar, knocking back jigger after jigger of vodka with nary a zakuska (munchies, hors d’oeuvre) in sight. Clueless that a true Soviet man, uncharacteristically deprived of zakusky, would take a deep snort of his own sweaty wrist after each gulp.

Yet, I kept watching long enough to confirm a thing or two, namely: goddamn HBO. They don’t make flops. My initial skepticism was wrongheaded. Chernobyl, the mini-series, is good. In places, great. A series to rewatch, if not by me. All it took was a single long crane shot filmed on a street I know well.

The scene in question shook me—wrong word—sent me into a sobbing fit that scared the hell out of my sons, three and five. It was filmed not three blocks from our flat. The Soviet penchant for cookie cutter architecture surely helped in the scouting for locations that resemble those in the ghost town of Pripyat and 1980s Kyiv. My neighborhood had provided one.

Military-drab personnel trucks with the word LIUDY (“people inside”) spray-painted in capital letters on the tailgate pull onto Kyiv’s Kostiantynivska Street. Soldiers fan out along the street to begin the work of conscripting some of the 700,000 volunteers it would take to restore the devastated area to something resembling order. Kostiantynivska is bisected by tram tracks and lined on either side by modernist apartment blocks. When it came on screen, I knew the place immediately—my sons’ kindergarten is in the left of the shot. And there, on the right side, hangs the mistake: a single plastic-aluminum balcony extension for a top-floor flat. If Google street view is dated correctly, that balcony went up during the past four years. It certainly wasn’t there in 1986.

An ugly anachronism, but one that reminds me that in life, as in art, the past, present, and future will meld any damn way they please and there’s nothing we can do about it. Nothing. Particularly when Chernobyl-related relevancies—strontium-90, caesium-137, and plutonium-239 and its 24,000-year half-life—come into play. The spectrum of radioisotopes produced by the 60-ish metric tons of uranium that spewed from the Chernobyl reactor core and were carried by prevailing winds that dropped nuclear fallout across the breadth of Europe before it could be controlled, well, those chunks of burning stardust have a different concept of time.

Yet, Kyiv is home. Safe as Chernobyl milk. My family stays in part because we lack viable options. Also, partly because, despite attempts by cynicism, that relentless bitch, to seduce me, I will show that I am tougher than her. Or at least tougher than I was before I started allowing her room in my heart.

Chernobyl made me mad—not at all what I expected when I sat down to watch in preparation for this essay. Full disclosure: Chernobyl—the accident, not the HBO series—and I have some shared history.

☢ ☢ ☢ an ancient history

I used to be a pastor, a priest, a performer of ancient Christian ritual. As such, I, like most clergy, kept a book called a Pastoral Agenda—a ledger of official sacerdotal function. You get baptized, married, anointed, or buried on my watch and the relevant names, dates, Scriptural text, and attending circumstances go in the book. An earthly spreadsheet with heavenly data.

One crisp autumn day in 2001, I entered a name and the attending circumstances in my leather-bound, gold-leaf embossed Agenda that ended up being the last entry I would ever make as a clergyman. Before I left the Church, taking off the cloth for good, more weddings, funerals, etc. would follow, though they’re mostly a blur. They’re definitely not in my Agenda. I have no conscious memory of it happening, but it’s clear that I stopped writing things down that day.

About two-and-a-half years after that final entry, I would finish my parish work in Ukraine and return to the United States. Six months later I would be released from a psychiatric hospital, now tagged as suicidal, with PTSD, and on full disability: an unholy trinity that puts a hellacious crimp in your job prospects in God’s green America. I did what any sane person would do: I went back to the country that had unmade me. The place that had confronted and continues to confront my demons with its own.

That last entry in the book records the day I spent with a mourning family in a small village in far western Ukraine—day three of a traditional Orthodox Christian funeral. I led the procession from the home to the church. Sang the liturgy. Led the procession from the church to the cemetery. Officiated at the interment. We had a bit of a scare at the church doors where pallbearers traditionally kneel three times, lowering the coffin to the ground before crossing the threshold. That day the pallbearer—only one was needed—nearly stumbled. In the end he managed not to fall, and the Igloo cooler-sized coffin he was carrying was delivered safely to the sanctuary.

The funeral was for a two-year-old boy dead of acute juvenile myeloid leukemia. The 39th funeral I had conducted in Ukraine for a cancer victim under the age of six.

The drive back from that Carpathian village was gorgeous, but I was in a rush because I’d been invited to a talk with a member of UNSCEAR—the U.N. Scientific Committee on the Effects of Atomic Radiation—meeting with local pediatricians. I disliked her on sight. When she opened her mouth it only sealed the deal. She spoke with the dismissive assurance typical to the breed, telling the room: There is no credible evidence of widespread malign effect on public health from radiation released during the Chernobyl catastrophe. I worked with pediatricians who had, five years running, determined that 100 percent of the children they treated tested positive for hypothyroidism—thousands of children annually. Ukrainian children born a generation after Chernobyl. Their mothers had been girls when it blew up. Their results, these doctors were told, were anecdotal. It was intimated that their research would not be considered for UNSCEAR reporting “as is.”

The dreams can be rough. Dead-eyed parents standing by a grave as the shoebox that holds the desiccated corpse of their little one is lowered into the black soil. The child whose family was so poor that the only coffin they could afford was made of particle board covered in felt. The dead boy’s godfather served as pallbearer. It was raining hard as we walked from the church to the grave and the box began to split apart in his hands. He gripped it tighter before eventually dropping to his knees and weeping in a way that has me praying for senility. Perhaps one day I will unhear him. Or unsee the vision cauterized into my brain of a little girl, pinched and skeletal in her coffin, and the crisscrossing indentations made by the mortician’s stitches holding her tiny mouth shut.

☢ ☢ ☢ the half-life we’re living

In these latter days, Chernobyl adds little to my existence beyond the 1.5-percent pension tax I pay every quarter—certainly an upgrade from toddler funerals and the attendant demons that HBO refused to keep locked in their cage. And there are demons. There are more lurking in this story than the series could ever begin to tell: the State-enforced abortions and the pregnant women crossing borders to avoid them, the ungodly spike in juvenile cancers, the crushing infertility rates, the 31 years it took to finally put a stable cover over the reactor, and, God help us, even profiteering bankers, those nuggets of human toxin that surpass all understanding.

Perhaps the defining phenomenon I draw from Chernobyl is the understanding that there is no limit to the evil we will do to one another. Though perhaps I stand as proof that there is a limit to how much evil the average person can stand. Something the HBO series captured well by centering its story around Valery Legasov, the actual Soviet physicist and inorganic chemist who drove the creation of a team of 700,000—700,000!—first responders by sheer force of will. (You have to admire Soviet maximalism.) The same ragged crew of the unwashed whose daily micro-acts of defiance understood tyranny as a way of life and not just vocabulary for a hyperbolic political tweet. Legasov, though he possessed all knowledge of nuclear fission, and though he spoke the truth to Soviet power in the tongues of angels, could not find sufficient love in his heart and hanged himself in his flat.

Here, close to the core, we have it better than most. Chernobyl hunkers nearby, a daily reminder of the lessons we ignore to our peril. That governments lie. That their noble-sounding intentions will involve, without fail, practical human cost. There is no truth, no nobility, no heart in them. The lie is their native language and murder their craft. Embitterment their true policy. They call some men free while enslaving others. And they take this turn—some sooner, some later—because they are made up of us. We swallow the lie as we feed it. John Le Carré put it well: “Communism. Capitalism. It’s the innocents who get slaughtered.”

Don’t misunderstand, the broader lessons of Chernobyl—if that’s not too quaint—are as unimpeachable as they are immutable: Rogue technology is off its leash; we shit where we eat and the earth groans, indicating it’s had its fill; our institutions, our best ideas, are obsolete the day they are minted. And yet I can’t help but think that these concerns, though disturbing, are altogether predictable phenomena on the spectrum of evil produced by a benthic species with a penchant for deep hostility and murder in its genes.

Svetlana Alexievich, 2015 Nobel Laureate for Literature, told me a couple of years back during an interview for The Millions: “I cannot cover a war anymore. Cannot add to that storehouse of bad dreams. Instead I’m trying to talk to them…about love. But this is hard for us…every story about love inevitably turns into a story of pain. Ours is not a happy culture.” And despite the prevailing timbre of this story, I am not sure I completely agree with her. Chernobyl has given Ukrainians an advantage: the ability to recognize what James Joyce called “the radiance in all things.” They have seen the world as it is. The lie in all its bold potential. They have seen a generation of their children reduced to so much insignificant and unidentifiable particulate, seen those children dismissed as statistically insignificant, and yet they have endured. Who needs happiness when you have hope? Finally, when nothing is as it seems what else is there but hope?

☢ ☢ ☢

I’m sitting on a bench outside my church. Too crowded in there. Too many random nuclei bumping and jostling. Too much heat being generated. This little congregation has an unexpectedly outstanding choir and this Sunday they are singing the Rachmaninoff liturgy—a rare treat. A tram rattles down the block, the same model of tram that’s been traveling along this road for at least the last 33 years. A young woman exits the sanctuary. She is big pregnant and her belly makes it difficult for her to bow and cross herself three times before the church doors. Out and down through the narrow windows float the words of St. John Chrysostom from deep antiquity—let us now lay aside all earthly care—to my ear the spiritual cantus firmus that fueled Rachmaninoff as he labored to compose this otherworldly music precisely as the Russian Empire was beginning its meltdown. The pretty woman smiles at me as she passes.

Image Credit: Oleksandr Khomenko.

The Lion and the Eagle: On Being Fluent in “American”

“How tame will his language sound, who would describe Niagara in language fitted for the falls at London bridge, or attempt the majesty of the Mississippi in that which was made for the Thames?” —North American Review (1815)

“In the four quarters of the globe, who reads an American book?” —Edinburgh Review (1820)

Turning from an eastern dusk and towards a western dawn, Benjamin Franklin miraculously saw a rising sun on the horizon after having done some sober demographic calculations in 1751. Not quite yet the aged, paunchy, gouty, balding raconteur of the American Revolution, but rather only the slightly paunchy, slightly gouty, slightly balding raconteur of middle age, Franklin examined the data concerning the births, immigration, and quality of life in the English colonies by contrast to Great Britain. In his Observations Concerning the Increase of Mankind, Franklin noted that a century hence “the greatest Number of Englishmen will be on this Side of the Water” (while also taking time to make a number of patently racist observations about a minority group in Pennsylvania—the Germans). For the scientist and statesman, such magnitude implied inevitable conclusions about empire.


Whereas London and Manchester were fetid, crowded, stinking, chaotic, and over-populated, Philadelphia and Boston were expansive, fresh, and had room to breathe. In Britain land was at a premium, but in America there was the seemingly limitless expanse stretching towards an unimaginable West (which was, of course, already populated by people). In the verdant fecundity of the New World, Franklin imagined (as many other colonists did) that a certain “Merry Old England” that had been supplanted in Europe could once again be resurrected, a land defined by leisure and plenty for the largest number of people. Such thoughts occurred, explains Henry Nash Smith in his classic study Virgin Land: The American West as Symbol and Myth, because “the American West was nevertheless there, a physical fact of great if unknown magnitude.”

As Britain expanded into that West, Franklin argued that the empire’s ambitions should shift from the nautical to the agricultural, from the oceanic to the continental, from sea to land. Smith describes Franklin as a “far-seeing theorist who understood what a portentous role North America” might play in a future British Empire. Not yet an American, but still an Englishman—and the gun smoke of Lexington and Concord still 24 years away—Franklin enthusiastically prophesies in his pamphlet: “What an Accession of Power to the British Empire by Sea as well as Land!”

A decade later, and he’d write in a 1760 missive to one Lord Kames that “I have long been of opinion, that the foundations of the future grandeur and stability of the British empire lie in America.” And so, with some patriotism as a trueborn Englishman, Benjamin Franklin could perhaps imagine the Court of St. James transplanted to the environs of the Boston Common, the Houses of Parliament south of Manhattan’s old Dutch Wall Street, the residence of George II of Great Britain moved from Westminster to Philadelphia’s Southwest Square. After all, it was the Anglo-Irish philosopher and poet George Berkeley, writing his lyric “Verses on the Prospect of Planting Arts and Learning in America” in his Newport, Rhode Island manse, who could gush that “Westward the course of empire takes its way.”

But even if aristocrats could perhaps share Franklin’s ambitions for a British Empire that stretched from the white cliffs of Dover to San Francisco Bay (after all, christened “New Albion” by Francis Drake), the idea of moving the capital to Boston or Philadelphia seemed anathema. Smith explains that it was “asking too much of an Englishman to look forward with pleasure to the time when London might become a provincial capital taking orders from an imperial metropolis somewhere in the interior of North America.” Besides, it was a moot point, since history would intervene. Maybe Franklin’s pamphlet was written a quarter century before, but the gun smoke of Lexington and Concord was wafting, and soon the idea of a British capital on American shores seemed an alternative history and a historical absurdity.

There is something both evocative and informative, however, in this counterfactual: imagining a retinue of the Queen’s Guard processing down the neon skyscraper canyon of Broadway, or the dusk splintering off of the gold dome of the House of Lords at the crest of Beacon Hill overlooking Boston Common. Don’t mistake my enthusiasm for this line of speculation as evidence of a reactionary monarchism; I’m very happy that such a divorce happened, even if less than amicably. What does fascinate me is the way in which the cleaving of America from Britain affected how we understand each other, the ways in which we became “two nations divided by a common language,” as variously Mark Twain or George Bernard Shaw have been reported to have waggishly uttered.

Even more than the relatively uninteresting issue that the British spell their words with too many u’s, or say weird things like “lorry” when they mean truck, or “biscuit” when they mean cookie, is the more theoretical but crucial issue of definitions of national literature. Why are American and British literature two different things if they’re mostly written in English, and how exactly do we delineate those differences? It can seem arbitrary that the supreme Anglophiles Henry James and T.S. Eliot are (technically) Americans, and their British counterparts W.H. Auden and Dylan Thomas can seem so fervently Yankee. Then there is what we do with the early folks; is the “tenth muse,” colonial poet Anne Bradstreet, British because she was born in Northampton, England, or was she American because she died in Ipswich, Mass.? Was Thomas More “American” because Utopia is in the Western Hemisphere, was Shakespeare a native because he dreamt of The Tempest? Such divisions make us question how language relates to literature, and how literature interacts with nationality, and what that says vis-à-vis the differences between Britain and America, the lion and the eagle.

A difficulty emerges in separating two national literatures that share a common tongue, and that’s because traditionally literary historians equated “nation with a tongue,” as critic William C. Spengemann writes in A New World of Words: Redefining Early American Literature, explaining how what gave French literature or German literature a semblance of “identity, coherence, and historical continuity” was that they were defined by language and not by nationality. By such logic, if Dante was an Italian writer, Jean-Jacques Rousseau a French one, and Franz Kafka a German author, then it was because those were the languages in which they wrote, despite their being, respectively, Florentine, Swiss, and Czech.

Americans, however, largely speak in the tongue of the country that once held dominion over the thin sliver of land that stretched from Maine to Georgia, and as far west as the Alleghenies. Thus, an unsettling conclusion has to be drawn; perhaps the nationality of Nathaniel Hawthorne, Herman Melville, and Emily Dickinson was American, but the literature they produced would be English, so that with horror we realize that Walt Whitman’s transcendent “barbaric yawp” would be growled in the Queen’s tongue. Before he brilliantly complicates that logic, Spengemann sheepishly concludes, “writings in English by Americans belong, by definition, to English literature.”

Sometimes misattributed to linguist Noam Chomsky, it was actually the scholar of Yiddish Max Weinreich who quipped that a “language is a dialect with an army and a navy.” If that’s the case, then a national literature is the bit of territory that that army and navy police. But what happens when that language is shared by two separate armies and navies? To what nation does the resultant literature then belong? No doubt there are other countries where English is the lingua franca; all of this anxiety over the difference between British and American literature doesn’t seem quite so involved as regards British literature and its relationship to poetry and prose from Canada, Australia, or New Zealand, for that matter.

Sometimes those national literatures, and especially English writing from former colonies like India and South Africa, are still folded into “British Literature.” So Alice Munro, Les Murray, Clive James, J.M. Coetzee, and Salman Rushdie could conceivably be included in anthologies or survey courses focused on “British Literature”—though they’re Canadian, Australian, South African, and Indian—where it would seem absurd to similarly include William Faulkner and Toni Morrison in the same course. Some anthologizers who are seemingly unaware that Ireland isn’t Great Britain will even include James Joyce and W.B. Yeats alongside Gerard Manley Hopkins and D.H. Lawrence as somehow transcendentally “English,” but the similar (and honestly less offensive) audacity of including Robert Lowell or Sylvia Plath as “English” writers is unthinkable.

Perhaps it’s simply a matter of shared pronunciation, the superficial similarity of accent that makes us lump Commonwealth countries (and Ireland) together as “English” literature, but something about that strict division between American and British literature seems more deliberate to me, especially since they ostensibly do share a language. An irony in that though, for as a sad and newly independent American said in 1787, “a national language answers the purpose of distinction: but we have the misfortune of speaking the same language with a nation, who, of all people in Europe, have given, and continue to give us the fewest proofs of love.”

Before embracing English, or pretending that “American” was something different than English, both the Federal Congress and several state legislatures considered making variously French, German, Greek, and Hebrew our national tongue, before wisely rejecting the idea of one preferred language. So unprecedented and so violent was the breaking of America with Britain, it did after all signal the end of the first British Empire, and so conscious was the construction of a new national identity in the following century, that it seems inevitable that these new Federalists would also declare an independence for American literature.

One way this was done, shortly after the independence of the Republic, is exemplified by the fantastically named New York Philological Society, whose 1788 charter exclaimed their founding “for the purpose of ascertaining and improving the American Tongue.” If national literatures were defined by language, and America’s language was already the inheritance of the English, well then, these brave philologists would just have to transform English into American. Historian Jill Lepore writes in A Is for American: Letters and Other Characters in the Newly United States that the society was created in the “aftermath of the bloody War for Independence” in the hope that “peacetime America would embrace language and literature and adopt…a federal, national language.”

Lepore explains the arbitrariness of national borders; their contingency in the time period that the United States was born; and the manner in which, though language is tied to nationality, such an axiom is thrust through with ambiguity, for countries are diverse of speech and the majority of a major language’s speakers historically don’t reside in the birthplace of that tongue. For the New York Philological Society, it was imperative to come up with the solution of an American tongue, to “spell and speak the same as one another, but differently from people in England.” So variable were (and are) the dialects of the United States, from the non-rhotic dropped “r” of Boston and Charleston, which in the 18th century was just developing in imitation of English merchants, to the guttural, piratical swagger of Western accents in the Appalachians, that one lexicographer hoped to systematize such diversity into a unique American language. As one of those assembled scholars wrote, “Language, as well as government should be national…America should have her own distinct from all the world.” His name was Noah Webster and he intended his American Dictionary to birth an entirely new language.

It didn’t of course, even if “u” fell out of “favour.” Such was what Lepore describes as an orthographic declaration of independence, for “there iz no alternativ” as Webster would write in his reformed spelling. But if Webster’s goal was the creation of a genuine new language, he inevitably fell short, for the fiction that English and “American” are two separate tongues is as political as is the claim that Bosnian and Serbian, or Swedish and Norwegian, are different languages (or that English and Scots are the same, for that matter). British linguist David Crystal writes in The Stories of English that “The English language bird was not freed by the American manoeuvre,” his Anglophilic spelling a direct rebuke to Webster, concluding that the American speech “hopped out of one cage into another.”  

Rather, the intellectual class turned towards literature itself to distinguish America from Britain, seeing in the establishment of writers who surpassed Geoffrey Chaucer or Shakespeare a national salvation. Melville, in his consideration of Hawthorne and American literature more generally, predicted in 1850 that “men not very much inferior to Shakespeare are this day born on the banks of the Ohio.” Three decades before that, and the critic Sydney Smith had a different evaluation of the former colonies and their literary merit, snarking interrogatively in the Edinburgh Review that “In the four quarters of the globe, who reads an American book?” In the estimation of the Englishmen (and Scotsmen) who defined the parameters of the tongue, Joel Barlow, Philip Freneau, Hugh Henry Brackenridge, Washington Irving, Charles Brockden Brown, and James Fenimore Cooper couldn’t compete with the sublimity of British literature.

Yet, by 2018, British critics weren’t quite as smug as Smith had been, as more than 30 authors protested the decision four years earlier to allow Americans to be considered for the prestigious Man Booker Prize. In response to the winning of the prize by George Saunders and Paul Beatty, English author Tessa Hadley told The New York Times “it’s as though we’re perceived…as only a subset of U.S. fiction, lost in its margins and eventually, this dilution of the community of writers plays out in the writing.” Who in the four corners of the globe reads an American novel? Apparently, the Man Booker committee. But Hadley wasn’t, in a manner, wrong in her appraisal. By the 21st century, British literature is just a branch of American literature. The question is how did we get here?

Only a few decades after Smith wrote his dismissal, and writers would start to prove Melville’s contention, including of course the author of Benito Cereno and Billy Budd himself. In the decade before the Civil War there was a Cambrian Explosion of American letters, as suddenly Romanticism had its last and most glorious flowering in the form of Ralph Waldo Emerson, Henry David Thoreau, Hawthorne, Melville, Emily Dickinson, and of course Walt Whitman. Such was the movement whose parameters were defined by the seminal scholar F.O. Matthiessen in his 1941 classic American Renaissance: Art and Expression in the Age of Emerson and Whitman, who included studies of all of those figures saving (unfortunately) Dickinson.

The Pasadena-born-but-Harvard-trained Matthiessen remains one of the greatest professors to ever ask his students to close-read a passage of Emerson. American Renaissance was crucial in discovering American literature as something distinct from British literature and great in its own right. Strange to think now, but for most of the 20th century the teaching of American literature was left to either history classes or the nascent (State Department-funded) discipline of American Studies. English departments, true to the very name of the field, tended to see American poetry and novels as beneath consideration in the study of “serious” literature. The general attitude of American literary scholars about their own national literature can be summed up by Victorian critic Charles F. Richardson, who, in his 1886 American Literature, opined that “If we think of Shakespeare, Bunyan, Milton, the seventeenth-century choir of lyrists, Sir Thomas Browne, Jeremy Taylor, Addison, Swift, Dryden, Gray, Goldsmith, and the eighteenth-century novelists…what shall we say of the intrinsic worth of most of the books written on American soil?” And that was a book by an American about American literature!

On the eve of World War II, Matthiessen approached his subject in a markedly different way, and while scholars had granted the field a respectability, American Renaissance was chief in radically altering the manner in which we thought about writing in English (or “American”). Matthiessen surveyed the diversity of the 1850s from the repressed Puritanism of Hawthorne’s The Scarlet Letter to the Pagan-Calvinist cosmopolitan nihilism of Moby-Dick and the pantheistic manual that is Thoreau’s Walden in light of the exuberant homoerotic mysticism of Whitman’s Leaves of Grass. Despite such diversity, Matthiessen concluded that the “one common denominator of my five writers…was their devotion to the possibilities of democracy.” Matthiessen was thanked for his discovery of American literature by being hounded by the House Un-American Activities Committee over his left-wing politics, until the scholar jumped from the 12th floor of Boston’s Hotel Manger in 1950. The critic’s commitment to democracy was all the more poignant in light of his death, for Matthiessen understood that it’s in negotiation, diversity, and collaboration that American literature truly distinguished itself. Here was an important truth, what Spengemann explains when he argues that American literature is defined not by the language in which it is written (as with British literature), but rather “American literature had to be defined politically.”

When considering the American Renaissance, an observer might be tempted to whisper “Westward the course of empire,” indeed. Franklin’s literal dream of an American capital of the British Empire never happened. Yet by the decade when the United States would wrench itself apart in an apocalyptic civil war, America would, ironically, become the figurative capital of the English language. From London the energy, vitality, and creativity of the English language would move westward to Massachusetts in the twilight of Romanticism, and after a short sojourn in Harvard and Copley Squares, Concord and Amherst, it would migrate towards New York City with Whitman, a city which, if never the capital of England, became the capital of the English language. From the 1850s onward, British literature became a regional variation of American literature, a branch on the latter’s tree, a mere phylum in its kingdom.

English literary critics of the middle part of the 19th century didn’t note this transition; it’s arguable whether British reviewers in the middle part of the 20th century did either, but if they didn’t, it was an act of critical malpractice. For who would trade the epiphanies in ballad-meter that are the lyrics of Dickinson for the arid flatulence of Alfred, Lord Tennyson? Who would reject the maximalist experimentations of Melville for the reactionary nostalgia of chivalry in Walter Scott? Something ineffable crossed the Atlantic during the American Renaissance, and the old problem of how we could call American literature distinct from British since both are written in English was solved—the latter is just a subset of the former.

Thus, Hadley’s fear is a reality, and has been for a while. Decades before Smith would mock the pretensions of American genius, and the English gothic novelist (and son of the first prime minister) Horace Walpole would write, in a 1774 letter to Horace Mann, that the “next Augustan age will dawn on the other side of the Atlantic. There will, perhaps, be a Thucydides at Boston, a Xenophon at New York, and, in time, a Virgil at Mexico, and an Isaac Newton at Peru. At last, some curious traveler from Lima will visit England and give a description of the ruins of St. Paul’s.” Or maybe Walpole’s curious traveler was from California, as our language’s literature has ever moved west, with Spengemann observing that if by the American Renaissance the “English-speaking world had removed from London to the eastern seaboard of the United States, carrying the stylistic capital of the language along with it…[then] Today, that center of linguistic fashion appears to reside in the vicinity of Los Angeles.” Franklin would seem to have gotten his capital, staring out onto a burnt ochre dusk over the Pacific Palisades, as westward the course of empire has deposited history in that final location of Hollywood.

John Leland writes in Hip: The History that “Three generations after Whitman and Thoreau had called for a unique national language, that language communicated through jazz, the Lost Generation and the Harlem Renaissance…American cool was being reproduced, identically, in living rooms from Paducah to Paris.” Leland concludes it was “With this voice, [that] America began to produce the popular culture that would stamp the 20th century as profoundly as the great wars.” What’s been offered by the American vernacular, by American literature as broadly constituted and including not just our letters but popular music and film as well, is a rapacious, energetic, endlessly regenerative tongue whose power is drawn not by circumscription but in its porous and absorbent ability to draw from a variety of languages that have been spoken in this land. Not just English, but languages from Algonquin to Zuni.

No one less than the great grey poet Whitman himself wrote, “We Americans have yet to really learn our own antecedents, and sort them, to unify them. They will be found ampler than has been supposed, and in widely different sources.” In a letter addressed to an assembly gathered to celebrate the 333rd anniversary of Santa Fe, Whitman surmised that “we tacitly abandon ourselves to the notion that our United States have been fashion’d from the British Islands only, and essentially form a second England only—which is a very great mistake.” He of the multitudinous crowd, the expansive one, the incarnation of America itself, understood better than most that the English tongue alone can never define American literature. Our literature is Huck and Jim heading out into the territories, and James Baldwin drinking coffee by the Seine; Dickinson scribbling on the backs of envelopes and Miguel Piñero at the Nuyorican Poets Café. Do I contradict myself? Very well then. Large?—of course. Contain multitudes?—absolutely.

Spengemann glories in the fact that “if American literature is literature written by Americans, then it can presumably appear in whatever language an American writer happens to use,” be it English or Choctaw, Ibo or Spanish. Rather than debating what the capital of the English language is, I advocate that there shall be no capitals. Rather than making borders between national literatures we must rip down those arbitrary and unnecessary walls. There is a national speech unique to everyone who puts pen to paper; millions of literatures for millions of speakers. How do you speak American? By speaking English. And Dutch. And French. And Yiddish. And Italian. And Hebrew. And Arabic. And Ibo. And Wolof. And Iroquois. And Navajo. And Mandarin. And Japanese. And of course, Spanish. Everything can be American literature because nothing is American literature. Its promise is not just a language, but a covenant.

Image credit: Freestock/Joanna Malinowska.

How to Write the Perfect Five-Paragraph Essay

It was my job for a time to corral college freshmen, feed them books and then coax from them fits of insight five paragraphs at a time. This I did imperfectly. I asked them to think about their lives to concoct a narrative or descriptive piece from those thoughts. Or I’d place a book or a film in front of them and guide them down a twisting path of analysis. We’d end the semester by debating some thorny societal issue; they’d develop an opinion, search for credible sources to back up their views and then attempt to forcefully argue the point. This is all standard. Pages and pages of reading for me. Many late nights with students’ words. Hand cramped from marking grammar issues. Mind numbed from the repeated phrases, the In today’s society; from the malapropisms, the now and days. As I am teaching and grading these essays, I am writing my own fiction, the stories that would become my collections Insurrections and The World Doesn’t Require You. I wondered about the form of the five-paragraph essay and its place in my own work. One never sees a five-paragraph essay in the wild. Could it hold its own in a mature piece of writing? Constraints can lead to creativity, right? Isn’t that what poetic forms are about? This is how I arrived at the novella, “Special Topics in Loneliness Studies,” which ends my second collection. The constraints imposed by the five-paragraph essay could, I figured, be a space to dramatize all that is contradictory or absurd about academia.

In my own fiction, I usually dream freely, writing without form, mostly, hoping the words, like water, find their own shape. In my students’ work, though, I, like many academics engaged in teaching Composition, imposed a rigid structure: five paragraphs consisting of an introduction with a thesis statement at the end, three body paragraphs, and a summary conclusion. It’s a heavily maligned form, I know. Writer John Warner, whose book Why They Can’t Write is subtitled: Killing the Five-Paragraph Essay and Other Necessities, insists it warps students’ idea of a well-formed essay and we should “kill it dead.” Even while teaching it and arguing that the form allows weaker writers unfamiliar with structure and grammar to find their way, I had my reservations. To think this through further, I figured, I’d introduce the form to my fiction. First, I’d write a story consisting of nothing but five-paragraph essays, taking us through a semester in a student’s and a professor’s life. It would be a longer story, though not anything resembling a novella; the essays would begin in a relatively milquetoast way though full of colorful malaprops and misspellings that winked at double meanings. As the student grew in confidence she would also grow more verbose and the narrative would become surreal and loopy until she was in open revolt against the structure of the essay, the professor, and society itself.

This provided some challenges that made themselves obvious before I could even write the first word. How does the professor’s voice get into the story? What is this class about and why does it make this student want to rebel? Most importantly, what the heck is this student writing about? Essays, student essays in Comp. 101 at least, don’t come from nowhere. They come from prompts carefully drawn up by professors to achieve some educational purpose. Those educational purposes are laid out in the course syllabus. Before I could write this student’s essays, before I could write a prompt, I needed to write a syllabus. This would be a Freshman Composition class, I knew. There I could exploit any number of contradictions and conflicts—between the high intellectual nature of the academic and the utilitarian nature of the course; between bored student and starry-eyed professor. It would be tragic. It would be hilarious.


When I started drawing up the syllabus, I thought of the way the realities of the classroom impose a loneliness onto the academic, particularly low-paid, overworked contingent faculty members. The words Professor or Dr. attached to the front of a name have the ring of prestige, but the minuscule rewards—low pay, lack of stability and time because of the mountains of papers to grade—often make getting through day-to-day life difficult and frustrating, even dangerous if you have no health benefits and your overwork is eroding your health. The phrase Special Topics in Loneliness Studies flitted through my head as a title, and as I made the syllabus the professor character took shape. He would be a lonely man obsessed by loneliness and, like any good academic, he would use his intellect to solve his solitude. A doomed project if I’ve ever heard one. When I heard the professor’s voice, I needed to know why he was so lonely. Some of it was in his syllabus, but to really understand it I needed context that called for traditional narration. I watched the story grow, not just into sections like a traditional short story, but into chapters. The email is both the bane and the lifeblood of the college campus these days. As the professor reached out via electronic mail, I wrote those messages for him, his peers, and his charges. But what did the professor tell his students in class? Back in the real world, I spent many a class hour standing in front of PowerPoint presentations, talking and watching students flash pictures of the screen like paparazzi. I snapped ridiculous pictures of my childhood toys and used the photos to create the professor’s lecture slides. To you and me they’d be absurd; to him the pictures are as serious as commitment, as serious as sitting in solitude to think through your life’s path.
Through the process of getting to know the characters and watching the piece expand from a long short story into a novella, I realized I had to abandon my original conceit, the constraint of a series of five-paragraph essays that were to bring me boundless creativity. While the narrative had prompts for free writes and free writes from the student, both further revealing the professor and the student, the five-paragraph essay had not yet made an appearance. The narrative no longer needed three of those essays to mark time; it now took place over an academic year, not a semester. I must say, I wasn’t eager to write the five-paragraph essay I knew the book needed. I had written enough from the student’s perspective that I knew her. Her final class essay had to represent a maturity of her voice. But the five-paragraph essay is often thought of as facile and stale, the opposite of mature, fully realized work. As Warner writes in an Inside Higher Ed essay: “The 5-paragraph essay is indeed a genre, but one that is entirely uncoupled from anything resembling meaningful work when it comes to developing a fully mature writing process.” I consulted books, doing the research the student would have done, inhabiting her, and then I wrote, following the rules and structures of the five-paragraph essay with the precision I had always hoped my students would bring, creating for her and for the narrative a piece of insight in five paragraphs.
This piece was produced in partnership with Publishers Weekly and also appeared on publishersweekly.com.

Image credit: Unsplash/Lachlan Cormie.

The Most Difficult Story I Ever Wrote

The story “Foxes,” from my collection Black Light, took me more than a dozen years to write. It was the last story I turned in to my editor at Vintage in 2018, though it was the very first one I started, back in my first semester of Columbia’s MFA fiction program in 2005. In those days, the story was called “The Teeth,” and later, “The Tent,” and later still, “Foxes Know How Near the Hunters.” Some components have remained the same across every iteration: a troubled child brings their troubled mother into a living room tent they’ve fashioned from barstools, sheets, and blankets. The child holds the mother captive in the flashlit space, sometimes with stories told aloud, sometimes with shadow puppets cast on the walls of the tent. The child has always had bad teeth. The mother has always spoken in a formal tone, has always been a little afraid of the child. But why was the mother afraid? And how exactly were these characters troubled? I wasn’t always sure.
There were other big, crucial elements I had yet to settle. In early drafts the child was first a boy, then two boys, then one mute boy, then two mute girls, and finally, years later, the right girl: smart, compelled by gore, “an excitable kid prone to rushed speech…happiest describing her dark-spattered worlds.” The child solidified and I shifted to her parents. Though the mother was the narrator, there was a lot I still didn’t understand about her. She’d always been unreliable and secretive, but it wasn’t until a 2009 draft that her alcoholism became apparent. And where was the girl’s father? Did it matter? I didn’t know until 2012, when I worked on him for two years—his infidelity, his idiotic sense of humor, his affinity for hunting.
Finding my way into voice is always the part that takes me the longest, but usually once I’m there, the story comes in a steady reveal. “Foxes” was different—it stayed murky at every turn. Usually, that’s an indication that the story should be put down, or at least put in a drawer, but for whatever reason I was okay with the not-knowing, even as it went on for years and years. I couldn’t seem to figure the story out, but I couldn’t forget about it either. It was a troublemaker, something that kept me up nights, and I couldn’t bring myself to kill it.
In 2017, I “finished” it. That’s in quotation marks because it’s a lie—I knew it wasn’t really finished, that there was something very wrong with it, though I couldn’t say what. But by that time, I’d gotten impatient. I had a whole collection I felt ready to show people, so I buried “Foxes” in the middle of the manuscript and ignored my doubts. I ended up signing with a terrific agent who helped me understand the shape of my collection. “Foxes” escaped the cuts and deep revisions that other stories did not. Maybe it was okay! After we sold the book, I worked with my brilliant editor on revisions. We went through a few rounds, and she didn’t seem to have any major issues with “Foxes” either. So why did it make me feel sick every time I thought about it?


In 2018, I was staying in a guest house in Austin so I could focus on my last round of revisions. Though I’d “finished” days before, I couldn’t bring myself to send the final email. I had 48 hours until I had to come home, and I decided to go through that monster one more time, just in case. I printed out the story, tore each section apart, and spread the little scraps all over the floor. In that version, the version that was almost final, the narrative is mostly locked in the mother’s mouth. We have little glimpses of the daughter’s bloody story, but it is heavily filtered through the mother’s voice, taken out of context with no real parallel to the actual traumatic events occurring in the daughter’s life (abandonment by her father, the mother’s subsequent drinking problem, etc.). Suddenly, seeing the disparity between the pieces of the mother’s story versus the daughter’s story, I knew what was missing. I lined up the parts written in the daughter’s voice next to each other on the floor. There were a few random images of violence, but no consequential narrative. From the earliest drafts, the idea had been that the mother is tuning in and out of the daughter’s story, so what the reader gets is fragmented. But that was a cop out! The daughter needed a full, rich narrative to run alongside her mother’s story. One couldn’t take precedence over the other, simply because it came from the mind of a child. “Foxes” was a story about storytelling after all—and the way narratives grant control. I opened a new document on my computer and wrote the daughter’s account straight through—a warped, gruesome fairytale about a knight who wants desperately to return to his daughter, the vicious enemies pursuing him, the dangerous animals hunting him in the deep, dark woods.
Another critical thing had happened to me over the years since I started this story: Between the time of my first draft in 2005 and the 2018 one, I’d had two sons. Suddenly the language of children, full of creepy comments and odd connections, was accessible to me. Kids were in my house, in my lap, booming into my eardrums, asking a thousand questions every day, making a million bizarre, unbidden observations. Perhaps I hadn’t written anything in the daughter’s voice all those years ago because I hadn’t known how. But now, two days before my final deadline, I easily lifted the cadence and rhythms and strange word choices from my sons. Their speech seeped into the girl’s mouth. I knew I needed to show the daughter wrenching the narrative back from her mother, and I knew she had to do it in her own words. Once I figured out her story, my story finally—finally—felt whole.
This piece was produced in partnership with Publishers Weekly and also appeared on publishersweekly.com.

Five Books About Bad-Ass Moms in the Apocalypse

American parents are infamously obsessed with our kids’ well-being. If you’ve had the experience of raising a child, you’ve probably also had the experience, at least once, of lying awake at night contemplating the particular obligations and terrors of parenting in a world that—while not literally threatened by monsters or aliens or imminent collapse—often feels, well, shall we say challenging. Unsafe. Threatened.

John Krasinski once described his monster-apocalypse film, A Quiet Place, as a “love letter” to his kids. It’s not every horror movie that’s described by its director as a love letter, but we live in interesting times. All good stories about the future touch upon the world we inhabit now, and A Quiet Place isn’t really a story about aliens or the end of civilization. It’s a story about trying to be a good person—a good parent—under impossible circumstances. Being tested and discovering what you’re made of.

A Quiet Place is one example of great storytelling about parenting during the end of the world as we know it—but there are others. As a reader, you could say that one of the few real perks of parenting in scary times is that storytellers have risen to the occasion, creating science fiction and dystopian future narratives that show how humanity resists and prevails—and sometimes even triumphs.

These stories often have one thing in common: A bad-ass mother.

Dystopian fiction is full of strong mothers, and for good reason. These characters show the way forward and reframe global conflicts in deeply human terms—making these stories less about How can we go on? and more about How can we survive? To be strong enough to survive, and to keep your kids alive and safe in a world that’s collapsing: That’s a true heroine.

Some bad-ass apocalypse moms are so iconic you’d have to sleep through the actual apocalypse not to know their names: Sarah Connor in the Terminator series, Ellen Ripley in the Alien films, June/Offred in The Handmaid’s Tale. The following list offers a look at some other heroines of science fiction and dystopian worlds who are strong enough to pull others to safety, even as the world around them implodes.

1. Essun from The Fifth Season by N.K. Jemisin

In the first pages of the first novel in Jemisin’s incredible, award-winning Broken Earth trilogy (now in development for a series by TNT), Essun discovers her son’s broken, dead body—just as a geological cataclysm strikes, threatening civilization as she knows it. Drawing on the hidden powers that have sustained her since her own fractured childhood, Essun picks herself up from unimaginable tragedy to search for her daughter in a world teetering on the brink of collapse. As she travels, she serves as a kind of surrogate mother for a child who may hold the secret not only to finding Essun’s missing daughter, but to understanding the destruction that threatens to overwhelm them.

2. Lauren Oya Olamina from Parable of the Talents by Octavia Butler

The 1998 sequel to Butler’s classic sci-fi novel Parable of the Sower finds its bad-ass mother character violently torn from her infant daughter, in a future California in which natural resources are scarce, armed communities survive in walled-in neighborhoods, and religious zealots overpower and enslave anyone who doesn’t follow their philosophy. Nevertheless, this collapsed world is in the process of remaking itself thanks to Lauren Olamina, a charismatic visionary whose prophetic writings suggest a radical future for humankind. Olamina is eventually reunited with her daughter after years of enslavement and struggle—only to discover that she and her daughter can’t make peace. While it’s a heartbreaking story about a complicated mother-daughter dynamic, Parable of the Talents is also eerily prescient: Its fanatical, unhinged, autocratic president character runs on the promise to “make America great again.”

3. Julian from The Children of Men by P.D. James

In this dark (but often darkly funny) 1992 bestseller, which inspired the 2006 Alfonso Cuaron film of the same name, a global fertility crisis has vaulted an authoritarian government into power. (Funny how that keeps happening in sci-fi stories written by women.) A young dissident named Julian, miraculously pregnant, helps devise and execute a plan to destabilize the violent regime, with assistance from a group of freedom fighters. Julian’s faith and idealism are tested by a dangerous race for safety as well as by conflicts within her group of revolutionaries, but her clarity of vision and unfaltering bravery keep the group from splintering, and keep the ending surprisingly hopeful.

4. Dr. Louise Banks from Arrival, based on the short story “Story of Your Life” by Ted Chiang

Aliens arrive in giant, inscrutable structures, and no one knows how to communicate with them or whether they intend to destroy Earth or save it. Obviously, your first move is to call in a linguist, but major bonus points if she happens to be a bad-ass mother—even if she’s not yet aware that motherhood is in her future. In the film and in the short story that inspired it, Dr. Banks decodes the atemporal, emotional language of the strange visitors from another planet, changing her own life and saving her world and her own sense of self in the process. If there’s a better metaphor for parenting a toddler, I don’t even want to know what it is.

5. Cedar Songmaker from Future Home of the Living God by Louise Erdrich

Species around the world—including humans—have begun to devolve, and the world itself is rapidly changing as a result. Fertile and pregnant women are imprisoned and endure forced labors and pregnancies—yes, this is another science fiction novel by a female author in which a fertility crisis prompts a radical and violent regime into power. Meanwhile, Mother, a sort of Siri gone evil, slips through the wires of personal computers and technology to monitor everyone and everything. Cedar Hawk Songmaker, a resourceful young woman whose own origins are a mystery she feels driven to solve, struggles to conceal her pregnancy from the authorities, in the process discovering family secrets that force her to reevaluate her own sense of self. Cedar’s bravery and odd, poignant sense of humor make her a heroine worth following through the dangerous and strange future depicted in these pages.

The Horror, The Horror: Rereading Yourself

Here is a secret about writers, or at least most writers I know, including me: we don’t like to read our published work. Conversations I’ve had over the years with author friends have tended to confirm my own feelings on the matter—namely, that it is very nice and fortunate and rewarding to get something published, but you don’t ever especially want to read it again. One reason for this is the fact that, by the time you’ve finished revising, copyediting, and proofreading a book, you have already reread it effectively a hundred million times before publication. Another reason: there is something deeply nerve-wracking about the fact of the thing you’ve written, mutable for so long, now being fixed and frozen forever, with no possibility of rewriting whatever mistakes you might—and will—find upon rereading. Among the countless, loving tributes in the wake of her recent death, one memorable Toni Morrison anecdote recounted her habit of taking a red pencil to her published works as she read from them; Toni Morrison, of course, was one of the few authors who might have reasonably expected reprintings and possible future opportunities for correcting errata. Toni Morrison was Toni Morrison.

The rest of us have to live with the reality of what we’ve written and managed to get out in the world, and so there’s a self-protective instinct to look away from it. But when the case of author’s copies of my new novel, The Hotel Neversink, arrived—and with it, the usual reticence to crack one open—I found myself wondering about this impulse: is it good? Maybe not. A writer could go through their whole life accumulating work and publications without ever, in earnest, going back and looking at what they’ve done with a reader’s eye. And if you never revisit your old work, you may never fully understand how, or if, you’ve changed as an artist and person. With a sense of great reluctance and apprehension, I therefore decided to sit down and fully reread my first novel.

The novel in question is The Grand Tour, which I wrote from 2013 to 2014, while I was in graduate school at Cornell. I revised it over the following year, during which time I was lucky enough for it to sell to Doubleday, which published it almost exactly three years ago, in August 2016. For what it’s worth, it received good reviews from outlets like Kirkus and Publishers Weekly, and was generally well-received by readers, although not readers who were expecting an uplifting read. One of the highlights of publication came in the form of a hostile email from a very old woman telling me how much she hated my main character, Richard. And she wasn’t wrong to hate him. Richard is quite hateable. In an era of nice characters, and an ever-increasing expectation of “relatability” from readers, The Grand Tour features the kind of anti-heroic ’60s/’70s protagonist sometimes played in films by Jack Nicholson, the kind who no longer curries any cultural favor whatsoever.

But that’s the point of Richard Lazar: he doesn’t curry any favor in his fictional world, either. He is washed-up and alcoholic and socially radioactive, having lived in the desert for ten years following the collapse of his second marriage. When he receives word that his Vietnam memoir has grazed the bestseller list, he is uniquely unprepared to capitalize on this fleeting moment of success. Nonetheless, his publisher trundles him out on a promotional tour, where he meets a young fan named Vance, providing the basis for what is essentially a coming-of-age road novel (a bildungsroadman?).

The book’s first chapter sets the tone. Richard comes to after blacking out on a flight, is rude to a flight attendant and hostile to Vance, who meets him at the airport with a homemade sign. He proceeds to get drunk before the reading, do a predictably bad job, get drunker at the faculty dinner, and hit on the Dean’s wife. As he’s driven back to his hotel room by the disillusioned Vance, he tells the kid not to bother with writing.

I knew Richard sucked, but this time around, I found myself disliking him a lot more than I did while writing or revising the novel. Richard is a version of the Unhappy White Guy protagonists I grew up reading, and that I attempted to classify in this essay. His specific lineage is the hard-drinking hard-luck variety found in books by the Southern writers I loved in my twenties: Messrs Hannah, Brown, and Crews—Barry, Larry, and Harry. Ultra-masculine, drunk and doomed, these writers and their antiheroes exerted an outsized influence on The Grand Tour. I suppose it speaks well of my personal growth that I’m less (read: not) enamored of this type of character than I was seven years ago. In a way, this disenchantment feels healthy and generally axiomatic: books take a long time to write and publish, and perhaps they should always feel a little dusty and outmoded by the time they come out.

To the novel’s credit, I think, it is aware of Richard’s awfulness, and, to an extent, Vance’s. It is not the kind of book that misjudges its protagonist, although it might misjudge the extent to which, for a reader, the comedy of complete assholishness can justify having to spend 300 pages with a complete asshole. That said, this time around, I still found it to be a pretty funny book with lots of amusing passages, for instance this one, describing Richard’s purchase of a house in foreclosure after the 2008 real estate bubble burst:

[His advance] was enough, as it turned out, to buy a nearby house in foreclosure, in a superexurban neighborhood called The Bluffs. There was no bluff visible in the landscape, so perhaps the name referred to the cavalier attitude local banks and homeowners had taken toward the adjustable-rate mortgages that subsequently emptied the neighborhood. It turned out they were giving the things away—all you had to do was show up at auction with a roll of quarters and a ballpoint pen.

Horrible or not, Richard is a successfully drawn character, if one with many problematic antecedents. Vance is drawn convincingly, as well, although his chapters feel slightly less successful to me, simply for the fact that he’s a less funny character. He’s nineteen, stuck at home, depressed and painfully earnest—in other words, the perfect foil for Richard’s cynical lout. This odd-couple setup does feel formulaic, although it also works, as formulas tend to do. Formulaic, as well, though less successfully so, are a clutch of chapters representing Richard’s memoir in the book. On a reread, the appearance of these Vietnam chapters felt somewhat unwelcome to me, and not just because they’re typeset in italics. They lean a little too heavily on skin-deep source material like Full Metal Jacket and The Things They Carried. Here comes the gregarious, rebellious grunt. There goes the gruff sergeant and patrician lieutenant. I did do some actual research about troop movements and munitions and local geography, but the whole thing feels a bit stale, though maybe any Vietnam story published in 2016 would.

What does not feel stale at all this time around is the introduction of a third perspective, Richard’s estranged daughter, Cindy, a casino croupier and malcontent, who takes over the middle third of the book. On this read, Cindy strikes me as the most successful aspect of The Grand Tour. She’s the most original character, and she structurally provides respite from the bad male behavior that otherwise dominates the proceedings. Richard and Vance are, after all, two sides of the same coin, i.e. the coin of male ego and its attendant problems. That Vance’s ego is subdued and wounded makes it no less irritating.

The novel’s awareness of these men as problematic and tiresome, and the section in which Cindy more or less destroys them—Vance sexually, Richard emotionally—elevate (I hope) the book above similar novels that simply find their male hero’s antics fascinating. Cindy makes her gleeful exit, disappearing into a street fair crowd, in one of the novel’s most memorable passages:

The Friday-night crowd swelled around her as she neared downtown—in front of the façade of a large art deco building, a stage was set up on which a bunch of paunchy white dudes mangled “Whipping Post.” People pushed past and she rejoiced in it, becoming part of the throng, the multitude … she wondered what someone watching her from above, looking at her face, might think. Would they know anything, be able to divine something about her, her life, or her mistakes? No. No one knows anything about anyone, she thought, and for the second time that day was struck by the feeling that she could start over—that nothing, in fact, would be easier. Denver, why not? She dragged her suitcase into the hot jostle, for the moment immensely pleased.    

The remainder of the novel (SPOILER) finds Vance getting sent to Rikers Island (!) and all of Richard’s proverbial chickens coming home to roost. A lot happens in these last chapters, probably too much. This is, I think, an unavoidable function or symptom of the book itself trying to do too much. One thing that struck me on this reading is just how much there is going on. It’s a common feature of the debut novel, in which an author takes decades of reading and thinking about novels and compresses it all into their first try, as though it will also be their last. Some of it works here, and some of it doesn’t. If I were to give critical advice to the person who wrote this, I might tell him to think about losing at least one of the component parts—maybe the Vietnam sections—and to also not feel like he has to conclude all of the proceedings with quite such a bang. The tour, I might suggest, could end on a more muted note. The final pages do manage this quietness, and it’s a relief after the sound and fury that precede them.

All of which is to say, I think it’s… pretty good? It’s entertaining and funny, and the writing is solid and sometimes excellent. That sounds like an unreliable claim, given my obvious bias, but would it convince a reader of my objectivity if I also said that I found the book to be weirdly old-fashioned and in certain places very contrived? Ultimately, it’s an uneven novel held together by its humor and prose. I believe if I hadn’t written The Grand Tour and had just happened across it, I would have finished and enjoyed it, with reservations. And if I’d written a review, I might have been generous with this first-time author, adding to everything I’ve said here that I looked forward to whatever he came up with next.

Image credit: Unsplash/Rey Seven.

Marks of Significance: On Punctuation’s Occult Power

“Prosody, and orthography, are not parts of grammar, but diffused like the blood and spirits throughout the whole.” —Ben Jonson, English Grammar (1617)

Erasmus, author of The Praise of Folly and the most erudite, learned, and scholarly humanist of the Renaissance, was enraptured by the experience, by the memory, by the very idea of Venice. For 10 months from 1507 to 1508, Erasmus would be housed in a room of the Aldine Press, not far from the piazzas of St. Mark’s Square with their red tiles burnt copper by the Adriatic sun, the glory and the stench of the Grand Canal wafting into the cell where the scholar would expand his collection of 3,260 proverbs entitled Thousands of Adages, his first major work. For Venice was home to a “library which has no other limits than the world itself,” a watery metropolis and an empire of dreams that was “building up a sacred and immortal thing.”

Erasmus composed to the astringent smell of black ink rendered from the resin of gall nuts, the rhythmic click-click-click of alloyed lead-tin type being set, and the whoosh of paper feeding through the press. From that workshop would come more than 100 titles in Greek and Latin, all published with the indomitable Aldus Manutius’s watermark, an image filched from an ancient Roman coin depicting a strangely skinny Mediterranean dolphin inching down an anchor. Reflecting on that watermark (which has since been filched again, by the modern publisher Doubleday), Erasmus wrote that it symbolized “all kinds of books in both languages, recognized, owned and praised by all to whom liberal studies are holy.” Adept in humanistic philology, Erasmus made an entire career by understanding the importance of a paragraph, a phrase, a word. Of a single mark. As did his publisher.

Erasmus’s printer was visionary. The Aldine Press was the first in Europe to produce books made by folding each sheet of paper not once (as in a folio), or twice (as in a quarto), but three times, yielding eight small leaves per sheet, to produce volumes that could be as small as four to six inches, the so-called octavo. Such volumes could be put in a pocket, the forerunner of the paperback, which Manutius advertised as “portable small books.” Now volumes no longer had to be cumbersome tomes chained to the desk of a library; they could be squirreled away in a satchel, the classics made democratic. When laying the typeface for a 1501 edition of Virgil in the new octavo form, Manutius charged a Bolognese punchcutter named Francesco Griffo with designing a font that appeared calligraphic. Borrowing the poet Petrarch’s handwriting, Griffo invented a slanted typeface that printers quickly learned could denote emphasis, which came to be named after the place of its invention: italic.

However, it was an invention seven years earlier that restructured not just how language appears, but indeed the very rhythm of sentences; for, in 1494, Manutius introduced a novel bit of punctuation, a jaunty little man with leg splayed to the left as if he were pausing to hold open a door for the reader before they entered the next room, the odd mark at the caesura of this byzantine sentence that is known to posterity as the semicolon. Punctuation exists not in the wild; it is not a function of how we hear the word, but rather of how we write the Word. These are what the theorist Walter Ong described in his classic Orality and Literacy as marks that are “even farther from the oral world than letters of the alphabet are: though part of a text they are unpronounceable, nonphonemic.” None of our notations are implied by mere speech; they are creatures of the page: comma, and semicolon; (as well as parenthesis and what Ben Jonson appropriately referred to as an “admiration,” but what we call an exclamation mark!)—the pregnant pause of a dash and the grim finality of a period. Has anything been left out? Oh, the ellipses…

No doubt the prescriptivist critic of my flights of grammatical fancy in the previous few sentences would note my unorthodox usage, but I indulge them only to emphasize how contingent and mercurial our system of marking written language was until around four or five centuries ago. Manutius may have been the greatest of European printers, but from Johannes Gutenberg to William Caxton, the era’s publishers oversaw the transition from manuscript to print and, with it, an equivalent metamorphosis of language from oral to written, from the ear to the eye. Paleographer Malcolm Parkes writes in his invaluable Pause and Effect: An Introduction to the History of Punctuation in the West that such a system is a “phenomenon of written language, and its history is bound up with that of the written medium.” Since the invention of script, there has been a war of attrition between the spoken and the written; battle lines drawn between rhetoricians and grammarians, between sound and meaning. Such is the distinction explained by linguist David Crystal in Making a Point: The Persnickety Story of English Punctuation: “writing and speech are seen as distinct mediums of expression, with different communicative aims and using different processes of composition.”

Obviously, the process of making this distinction has been going on for quite a long time. The moment the first wedged stylus pressed into wet Mesopotamian clay was the beginning of it, through ancient Greek diacritical and Hebrew pointing systems, up through when medieval scribes first began to separate words from endless scriptio continua, whichbroachednogapsbetweenwordsuntiltheendofthemiddleages. Reading, you see, was normally accomplished out loud, and the written word was less a thing-unto-itself and more a representation of a particular event—that is, the event of speaking. When this is the guiding metaphysic of writing, punctuation serves a simple purpose—to indicate how something is to be read aloud. Like the luftpause of musical notation, the nascent end stops and commas of antiquity didn’t exist to clarify syntactical meaning, but only to let you know when to take a breath. Providing an overview of punctuation’s genealogy, Alberto Manguel writes in A History of Reading how by the seventh century, a “combination of points and dashes indicated a full stop, a raised or high point was equivalent to our comma,” an innovation of Irish monks who “began isolating not only parts of speech but also the grammatical constituents within a sentence, and introduced many of the punctuation marks we use today.”

No doubt many of you, uncertain on the technical rules of comma usage (as many of us are), were told in elementary school that a comma designates when a breath should be taken, only to discover by college that that axiom was incorrect. Certain difficulties, with, that, way of writing, a sentence—for what if the author is Christopher Walken or William Shatner? Enthusiast of the baroque that I am, I’m sympathetic to writers who use commas as Hungarians use paprika. I’ll adhere to the claim of David Steel, who in his 1785 Elements of Punctuation wrote that “punctuation is not, in my opinion, attainable by rules…but it may be procured by a kind of internal conviction.” Steven Roger Fischer correctly notes in his A History of Reading (distinct from the Manguel book of the same title) that “Today, punctuation is linked mainly to meaning, not to sound.” But as late as 1589 the rhetorician George Puttenham could, in his Art of English Poesie, as Crystal explains, define a comma as the “shortest pause,” a colon as “twice as much time,” and an end stop as a “full pause.” Our grade school teachers weren’t wrong in a historical sense, for that was the purpose of commas, colons, and semicolons: to indicate pauses of certain lengths when scripture was being read aloud. All of the written word would have been quietly murmured under the breath of monks in the buzz of a monastic scriptorium.

For grammarians, punctuation has long been claimed as a captured soldier in the war of attrition between sound and meaning, these weird little marks enlisted in the cause of language as a primarily written thing. Fischer explains that “universal, standardized punctuation, such as may be used throughout a text in consistent fashion, only became fashionable…after the introduction of printing.” Examine medieval manuscripts and you’ll find that the orthography, that is the spelling and punctuation (insofar as it exists), is completely variable from author to author—in keeping with an understanding that writing exists mainly as a means to perform speaking. By the Renaissance, print necessitated a degree of standardization, though far from uniform. This can be attested to by the conspiratorially minded who are flummoxed by Shakespeare’s name being spelled several different ways while he was alive, or by the anarchistic rules of 18th-century punctuation, the veritable golden age of the comma and semicolon. When punctuation becomes not just a way of telling a reader when to breathe, but a syntactical unit that conveys particular meanings that could be altered by the choice or placement of these funny little dots, then a degree of rigor becomes crucial. As Fischer writes, punctuation came to convey “almost exclusively meaning, not sound,” and so the system had to become fixed in some sense.

If I may offer an additional conjecture, it would seem to me that there was a fortuitous confluence of the technology of printing and the emergence of certain intellectual movements within the Renaissance that may have contributed to the elevation of punctuation. Johanna Drucker writes in The Alphabetic Labyrinth: The Letters in History and Imagination how Renaissance thought was gestated by “strains of Hermetic, Neo-Pythagorean, Neo-Platonic and kabbalistic traditions blended in their own peculiar hybrids of thought.” Figures like the 15th-century Florentine philosophers Marsilio Ficino and Giovanni Pico della Mirandola reintroduced Plato into an intellectual environment that had sustained itself on Aristotle for centuries. Aristotle rejected the otherworldliness of his teacher Plato, preferring rather to muck about in the material world of appearances, and when medieval Christendom embraced the former, it modeled his empirical perspective. Arguably the transcendent nature of words is less important in such a context; what does the placement of a semicolon matter if it’s not conveying something of the eternal realm of the Forms? But the Florentine Platonists like Ficino were concerned with such things, for as he writes in Five Questions Concerning the Mind (printed in 1495—one year after the first semicolon), the “rational soul…possesses the excellence of infinity and eternity…[for we] characteristically incline toward the infinite.” In Renaissance Platonism, the correct ordering of words, and their corralling with punctuation, is a reflection not of speech, but of something larger, greater, higher. Something infinite and eternal; something transcendent. And so we have the emergence of a dogma of correct punctuation, of standardized spelling—of a certain “orthographic Platonism.”

Drucker explains that Renaissance scholars long searched “for a set of visual signs which would serve to embody the system of human knowledge (conceived of as the apprehension of a divine order).” In its most exotic form this involved the construction of divine languages, the parsing of Kabbalistic symbols, and the embrace of alchemical reasoning. I’d argue, in a more prosaic manner, that such orthographic Platonism is the wellspring of all prescriptivist approaches to language, where the manipulation of the odd symbols that we call letters and punctuation can lend themselves to the discovery of greater truths, an invention that allows us “to converse even with the absent,” as Parkes writes. In the workshops of the Renaissance, at the Aldine Press, immortal things were made of letters and eternity existed between them, with punctuation acting as the guideposts to a type of paradise. And so it can remain for us.

Linguistic prescriptivists will bemoan the loss of certain standards, claiming that text speak signals an irreducible entropy of communication, or that the abandonment of arbitrary grammatical rules is a sign out of Revelation. Yet such reactionaries are not the true guardians of orthographic Platonism, for we must take wisdom where we find it, in the appearance, texture, and flavor of punctuation. Rules may be arbitrary, but the choice of particular punctuation—be it the pregnant pause of the dash or the rapturous shouting of the exclamation mark—matters. Literary agent Noah Lukeman writes in A Dash of Style: The Art and Mastery of Punctuation that punctuation is normally understood as simply “a convenience, a way of facilitating what you want to say.” Such a limited view, implicit whether one treats punctuation as a matter of sound or of meaning, ignores the occult power of the question mark, the theurgy in a comma. The orthographic Platonists at the Aldine Press understood that so much depended on a semicolon; that it signified more than a longer-than-average pause or the demarcation of an independent clause. Lukeman argues that punctuation is rarely “pondered as a medium for artistic expression, as a means of impacting content,” yet in the most “profound way…it achieves symbiosis with the narration, style, viewpoint, and even the plot itself.”

Keith Houston in Shady Characters: The Secret Life of Punctuation, Symbols, and Other Typographical Marks claims that “Every character we type or write is a link to the past;” every period takes us back to the dots that Irish monks used to signal the end of a line; every semicolon back to Manutius’s Venetian workshop. Punctuation, like the letters it serves, has a deep genealogy; its use places us in a chain of connotation and influence that goes back centuries. More than that, each individual mark has a unique terroir; each does things that give the sentence a waft, a wisdom, a rhythm particular to it. Considering the periods of Ernest Hemingway, the semicolons of Edgar Allan Poe and Herman Melville, and Emily Dickinson’s sublime dash, Lukeman writes that “Sentences crash and fall like the waves of the sea, and work unconsciously on the reader. Punctuation is the music of language.”

To get overly hung up on punctuation as either an issue of putting marks in the right place, or of letting the reader know when they can gulp some air, is to miss the point—a comma is a poem unto itself; an exclamation point is an epic! Cecelia Watson writes in her new book, Semicolon: The Past, Present, and Future of a Misunderstood Mark, that Manutius’s invention “is a place where our anxieties and our aspirations about language, class, and education are concentrated.” And she is, of course, correct, as evidenced by all of those partisans of aesthetic minimalism from Kurt Vonnegut to Cormac McCarthy who’ve impugned the Aldine mark’s honor. But what a semicolon can do that other marks can’t! How it can connect two complete ideas into a whole; a semicolon is capable of unifications that a comma is too weak to manage alone. As Adam O’Fallon Price writes in The Millions, “semicolons…increase the range of tone and inflection at a writer’s disposal.” Or take the exclamation mark, a symbol that I’ve used roughly four times in my published writing, but which I deploy no fewer than 15 times per average email. It is a mark maligned for its emotive enthusiasms, yet Nick Ripatrazone observes in The Millions that “exclamation marks call attention toward themselves in poems: they stand straight up.” Punctuation, in its own way, is conscious; it’s an algorithm, as much thought itself as a schematic showing the process of thought.

To take two poetic examples, what would Walt Whitman be without his exclamation mark; what would Dickinson be without her dash? They didn’t simply use punctuation for the pause of breath nor to logically differentiate things with some grammatical-mathematical precision. Rather they did do those things, but also so much more, for the union of exhalation and thought gestures to that higher realm the Renaissance originators of punctuation imagined. What would Whitman’s “Pioneers! O pioneers!” from the 1865 Leaves of Grass be without the exclamation point? What argument could be made if that ecstatic mark were abandoned? What of the solemn mysteries in the portal that is Dickinson’s dash when she writes that “’Hope’ is the thing with feathers –”? Orthographic Platonism instills in us a wisdom behind the arguments of rhetoricians and grammarians; it reminds us that more than simple notation, each mark of punctuation is a personality, a character, a divinity in itself.

My favorite illustration of that principle is in dramatist Margaret Edson’s sublime play W;t, the only theatrical work that I can think of that has New Critical close reading as one of its plot points. In painful detail, W;t depicts the final months of Dr. Vivian Bearing, a professor of 17th-century poetry at an unnamed, elite, eastern university, after she has been diagnosed with Stage IV cancer. While undergoing chemotherapy, Bearing often reminisces on her life of scholarship, frequently returning to memories of her beloved dissertation adviser, E.M. Ashford. In one flashback, Bearing remembers being castigated by Ashford for sloppy work that the former did, providing interpretation of John Donne’s Holy Sonnet VI based on an incorrectly punctuated edition of the cycle. Ashford asks her student “Do you think the punctuation of the last line of this sonnet is merely an insignificant detail?” In the version used by Bearing, Donne’s immortal line “Death be not proud” is end stopped with a semicolon, but as Ashford explains, the proper means of punctuation, as based on the earliest manuscripts of Donne, is simply a comma. “And death shall be no more, comma, Death thou shalt die.”

Ashford imparts to Bearing that so much can depend on a comma. The professor tells her student that “Nothing but a breath—a comma—separates life from everlasting…With the original punctuation restored, death is no longer something to act out on a stage, with exclamation points…Not insuperable barriers, not semicolons, just a comma.” Ashford declares that “This way, the uncompromising way, one learns something from this poem, wouldn’t you say?” Such is the mark of significance, an understanding that punctuation is as intimate as breath, as exalted as thought, and as powerful as the union between them—infinite, eternal, divine.

Image credit: Wikimedia Commons/Sam Town.

Nine Things You Didn’t Know About the Semicolon

“My husband and I fell in love, in part, over discussions of the semicolon,” a woman told me last year. “I’m afraid of it,” students tell me every year. Love, fear, or outright hate—the semicolon can elicit them all. People have always had strong feelings about the semicolon, and its history testifies to its ability to touch hearts—or nerves. Here are a few things about its past that you might not know.
1. It’s young. Well, not young compared to you and me—but relative to the rest of our punctuation mainstays, the 525-year-old semicolon is a spring chicken. The period dates all the way back to the 3rd century B.C., although it began as a dot placed at the tippy-top of the end of a sentence and didn’t drift down to its current position until the 9th century. Commas and colons—in concept, at least—trace their origins back as far as periods, but their original forms were also simple dots, suspended at different elevations. They didn’t unfurl into their present shapes until much later: the comma was reinvented in the 12th century as a slash that slowly slid down below the baseline of the text into its modern form, while the colon began to turn up in its current incarnation in the late 13th century.
The semicolon would take a couple more centuries to join the party. It debuted in 1494, in an Italian book called De Aetna. The publisher of the book, Aldus Manutius, believed readers and writers would find a use for a break midway between the quick skip of a comma and the patient pause of a colon; and so, out of these two marks, he created the chimera we know as the semicolon, with its colon head and comma tail.
2. It might be poisonous. A Dutch writer known as Maarten Maartens (the pen name of Jozua Marius Willem van der Poorten Schwartz), now not exactly a household name, was tremendously popular as an English-language writer during the late 19th and early 20th centuries. One of Maartens’s books, The Healers, features a scientist who develops an “especial variety of the Comma” called Semicolon Bacillus, with which he manages to kill several lab rabbits.
3. It has a long history as a courtroom troublemaker. A semicolon that slipped into the definition of war crimes in the Charter of the International Military Tribunal threatened to derail the prosecution of captured Nazis, until a special protocol was created to swap it for a comma. This wasn’t the semicolon’s first or last brush with the law. It was notorious among early-20th century Americans for interrupting liquor service in Boston for six years, after a semicolon snuck into a regulatory statute during retranscription. More sinisterly, semicolons (and indeed all manner of punctuation marks) have been implicated in many appeals cases in which a defendant has been sentenced to death.


4. In spite of its fraught history, legal scholars still succumb to its charms. In the 1950s, Baltimore judge James Clark found an innovative way to ensure his trial transcripts were correct and to add some drama to his court’s proceedings by reading punctuation aloud during sentencing: “Ten years in the penitentiary,” he might say, pausing to let the convicted person blanch in terror at receiving the maximum allowable sentence. “Semicolon,” he’d then continue, and after another pause, finally: “sentence suspended.” The judge hoped the shock of a tough sentence followed by a generous reprieve would help reduce recidivism. Suspending sentences in this fashion earned Clark the nickname “The Semicolon Judge.”
5. It hasn’t always been bound by rules. For most of the history of the English language, punctuation was a matter of taste. Writers relied on their ears and their instincts to judge where best to mark a pause. But then, with the spread of public schooling in the 1800s, savvy teachers saw a market for a new class of books that would make grammar a teachable science. Perversely, instead of making people more confident in choosing a punctuation mark, rules seem to have had the opposite effect, conjuring up confusion and consternation. Gradually, proper punctuating came to be seen as the province of the elite, although the best writers still followed their own star: “With educated people, I suppose, punctuation is a matter of rule,” Abraham Lincoln mused; “with me it is a matter of feeling. But I must say that I have a great respect for the semi-colon; it’s a very useful little chap.”
6. In the 19th century, the semicolon was all the rage. Back when Lincoln was sprinkling semicolons into his speeches, rules for the semicolon allowed more possibilities for its use than we have today and, perhaps as a result, it was extremely popular. It was so popular, in fact, that colons (and parentheses) became puncti non grati; semicolons were gobbling them up. Some grammar books simply stopped giving rules for those unpopular punctuation marks, while a journal aimed at school teachers and administrators proclaimed that when it came to the colon, “we should not let children use them.” One grammarian, troubled by the notion that colons were now contraband, urged writers to defend them against the encroaching semicolon, forlornly noting that colons were “once very fashionable.”

7. You could bet on a semicolon. “Semicolon is the best,” proclaimed the Chicago Daily Tribune in 1902. They didn’t mean the punctuation mark, however: a horse named Semicolon had a long and successful racing career in the 1890s and the early 1900s. On the strength of his wins, his younger brother, Colonist, sold for $3,500 (around $100k adjusted for inflation). Mirroring the relative popularity of semicolons and colons at the time, Colonist seems not to have matched Semicolon’s winning record.
8. It’s…a woman? Or at least not a heterosexual man? Criticisms of the semicolon—and there have been many—are often couched in peculiarly gendered terms. Ernest Hemingway, Cormac McCarthy, and Kurt Vonnegut avoided them, with Vonnegut describing them as “transvestite hermaphrodites representing absolutely nothing.” In an attempt to explain why so many macho writers avoid it, Vanity Fair editor James Wolcott mused that perhaps it was fear of looking “poncy.” “‘Real’ writing is butch and cinematic,” he explained, “so emphatic and declarative that it has no need of these rest stops or hinges between phrases.” Grammar pundit James J. Kilpatrick had a full-on misogynist meltdown over the semicolon, calling it “shy,” “bashful,” “gutless,” “girly”—and therefore “useless.” Of course, the semicolon can indeed be used to show traits stereotyped as feminine or effete, like hesitancy and delicacy (which are no bad things, incidentally); but it can also come down like a hammer, curt and decisive. How lucky for us writers that the semicolon doesn’t yield to pressure to behave in only one way just because guys like Hemingway expected it to.
9. It’s probably not going to go extinct. Newspaper columnists and pundits have been giving it six months to live since at least the 1970s. But no matter how much its function has shifted over time, no matter how many rules are piled on top of it, and no matter how many people rail against it, as long as there are those of us who find it beautiful and useful, it will survive.
This piece was produced in partnership with Publishers Weekly and also appeared on publishersweekly.com.