Utopia (Penguin Classics)

Mentioned in:

The Trump Administration Has Finally Brought Us to Dystopia

1.
Donald Trump was hardly into his first full calendar year as president before a chorus of critics and pundits began to use the word “dystopian” to describe his administration and the social milieu that it seemed to precipitate. In March 2017, novelist John Feffer wrote, “Unpredictability, incompetence, and demolition are the dystopian watchwords of the current moment, as the world threatens to fragment before our very eyes.” Months later, Entertainment Weekly ran an article with the hypertext title, “How the Trump era made dystopia cool again.” The A.V. Club and Vulture both proposed that we had reached “peak dystopia.” Writing for The New Yorker, Jill Lepore described our era as “A Golden Age for Dystopian Fiction.” (Not of, but “for.”) In early 2018, when the internet was briefly galvanized by talk of Oprah Winfrey running against Donald Trump in 2020, Family Guy creator Seth MacFarlane described that potential contest as “troublingly dystopian.”

What a curious, discomfiting situation we find ourselves in when the buzzword à la mode is 130 years old, and the literary genre we once relied on to explicate life behind the Iron Curtain is now apparently reflective of contemporary America. But what exactly is it about the Trump administration that makes us reach for such specific literary terminology? Is it the sudden resurgence of white supremacy and fascist sympathies in the American heartland, providing a speculative path toward American authoritarianism? Perhaps, but neither racism nor fascism is a requirement of the genre. Are we terrified that this administration will instigate a world-ending nuclear conflict with North Korea, and/or Russia—and/or a devastating economic war with China, and/or Europe? If so, the relevant literary genre would be apocalyptic, not necessarily dystopian. Or do we say “dystopian” hyperbolically—reflecting our anxieties about a nightmarish social sphere of distress, confusion, and disorientation? That might be better described as surreal, or absurd. Are we alarmed by the hard pivot away from professionalism, decency, and decorum? Issues like these are more at home in the novel of manners, such as Pride and Prejudice. Or are we simply dismayed and alarmed by the convergence of an outrageous, semi-competent administration and a general mood of anti-intellectualism? That would be a job for satire. Trump himself—bumbling, bombastic, egoic, unaware, unpredictable, unread—would be more at home as the quixotic protagonist of a picaresque, or as a delusional child king in a fairy tale.

It is my suspicion that we call some things “dystopian” for the same reason we sometimes misuse “gothic,” “ironic,” or “Kafkaesque”: We like the sound of it, and we enjoy invoking its vaguer associations. But if we’re going by conventional definitions, it is arguable that there was nothing specifically or egregiously dystopian about the Trump administration until last April, when the administration announced a new “zero tolerance” policy on illegal border crossing, becoming the first White House in memory to implement a standing procedure for separating migrant children from their parents, even as they attempted to surrender themselves legally in a plea for sanctuary.

Dystopia is a rich, heterogeneous, and dynamic category of film and literature. However, when we look at the most successful, enduring works of this genre, we find the same institution caught in the crosshairs of various fictional totalitarian regimes, again and again: the independent and autonomous nuclear family.

2.
Dystopian fiction was preceded by utopian fiction, beginning in 1516 with Thomas More’s novel Utopia. (The synthetic Greek toponym “Utopia” was simply More’s joking name for his setting—an invented South American island—as the word literally means “no place,” or “nowhere.”) Utopian novels were immensely popular in 19th-century England, as humanist philosophies and medical and industrial technologies at the tail end of the Enlightenment combined to suggest a better and brighter tomorrow. Theoretically, a fruitful Eden was almost within reach. Yes, dystopia is commonly described as the opposite of utopia, but this obscures a common trope in which dystopic future societies are presented as the aftermath (or consequence) of failed attempts to bring about an actual utopia.

Perhaps the precursor to dystopian fiction is Fyodor Dostoevsky’s anti-utopian novel Notes from the Underground, published in 1864. Dostoevsky’s skeptical narrator monologues at length on the preposterousness of the idea that science and Western philosophy were ushering in a radical new era of human progress: “Only look about you: blood is being spilt in streams, and in the merriest way, as though it were champagne.” Dostoevsky’s intention, partly, was to deride and pick apart Nikolai Chernyshevsky’s utilitarian, materialist novel, What Is to Be Done?, in which characters make grand, romantic statements about the joyful founding of an eternal, collectivist utopia. Dostoevsky’s Underground Man sees two major flaws in this thinking. First, if given the opportunity to submit to rational prescriptions for a better life, people would rather be free to suffer. Second, idealism—when taken too seriously—tends to breed dissociation, distortion, and interpersonal alienation.

Today we associate a handful of qualities with the concept of dystopia: governmental overreach, unnatural social configurations, paranoia, state-driven propaganda, digitally panoptic surveillance, and other alienating technologies. However, none of these characteristics are intrinsic to the genre, just as dystopian fiction isn’t necessarily satirical or allegorical, regardless of the popularity of Black Mirror. Dystopia is such a diverse and mutable canon overall that there are no essential commonalities—with one possible exception: a significant distortion of family relations.

Nearly all landmark works of dystopian fiction feature an oppressive governmental order that interferes with what we might term the “natural” process of family-making: choosing a partner and raising a family freely and relatively unencumbered by external power structures. This is observed from the outset in the seminal dystopian novel We, by Yevgeny Zamyatin, written in Russia in 1921. Set in the walled-off, hyper-rational future society One State, in which sexual liaisons are overseen by the government, the conflict in We is precipitated by a moment of illicit flirtation, and the principal transgression upon which the plot later hangs is an unlicensed pregnancy.

In Aldous Huxley’s Brave New World, fetuses gestate in artificial wombs and are raised by the state. Here, too, an illegal pregnancy is a major plot point, and the word “father” is an epithet. In 1984, George Orwell’s Oceania allows marriage but prohibits divorce, as well as non-procreative sex. Winston Smith’s central offense is his illegal affair with Julia, and it is she whom he must betray to restore his safety and good standing. In Ray Bradbury’s Fahrenheit 451, Guy Montag’s unhappy, alienating marriage is the consequence of an illiterate, spiritually unwell society. In Lois Lowry’s The Giver, infants are not raised by their biological mothers but are assigned to families—if they are not summarily euthanized. Even in the bubblegum dystopia The Hunger Games, the action commences with Katniss’s motherly intervention on her little sister’s behalf, sparing her from certain death, allowing her to continue to have a childhood.

The most influential dystopian novel of this moment is Margaret Atwood’s The Handmaid’s Tale, thanks to the Hulu series adaptation starring Elisabeth Moss, previously a different sort of feminist icon in AMC’s Mad Men. In Atwood’s novel, a near-future United States is replaced by an Old-Testament Christian theonomy in which healthy young women are forced to bear children for high-status men and their infertile wives. This feature of Atwood’s world-building can’t exactly be chalked up to pure fantasy crafted in the welter of creative genius. To borrow a phrase, we’ve seen this before. In an essay for Glamour last year, Jenae Holloway writes that she is “frustrated and jealous that [her] white feminist allies are able to digest The Handmaid’s Tale through the lens of a fictitious foreboding”—in other words, that the show does not strike them as it strikes her: with a sense of “déjà vu.” Holloway’s essay reminds us that even a cursory look into slavery in the Americas reveals separations of children from parents, forced adoptions, and rape as standard features of the experience. Breaking up families is not simply a systematic and normalized aspect of state control; it is a requirement to maintain the system itself.

Historically, human slavery may have been a relatively limited phenomenon in Atwood’s Canada; however, indigenous families were routinely shattered by administrative bodies between 1944 and 1984, including 20,000 children in the “Sixties Scoop” alone. Conventionally, the non-academic reader or viewer only associates these phenomena with science fiction when the writer works in this palette explicitly—Octavia Butler’s novel Kindred and her short story “Bloodchild” come to mind—but once one considers the potential for reverberations of chattel slavery in literary dystopias, one begins to see them everywhere: in Kurt Vonnegut’s story “Harrison Bergeron,” wherein a teenage übermensch is taken from his parents, who later witness his televised execution; in Shirley Jackson’s “The Lottery,” where citizens of an agrarian community cannot protect their spouses or children from ritualized public execution; and most obviously in Ursula K. Le Guin’s “The Ones Who Walk Away from Omelas,” which depicts a perfect society enabled by the unending agony of a single imprisoned, tortured child.

None of this is to say that participation in a family is categorically “natural,” or what legitimizes one’s existence. The world has more than enough space for people who abstain from family-making. Nor does this observation require us to attempt to define what a family is. What is important is to note that our most successful, compelling, and enduring literary dystopias consistently present antagonists to the nuclear family dynamic. They create rigid legal frameworks around everything from sexual union to the rearing of children. This is the dreaded commonality at the root of mainline dystopian fiction: the simple formula, “government authority > family independence.”

Whether you were raised by biological or adoptive parents, older siblings, or more distant relatives—or by a foster parent, or some other surrogate or legal guardian—what you share with the vast majority of humans is that you were once the object of a small, imperfect social unit responsible for your protection and care. This is the primary social contract, based not on law or philosophy, but on love and trust. For better and worse, our bonds to our families pre-exist and preponderate over the accident of our nationality. Accepting this truth may be the first test of a legitimate state. It is the illegitimate, insecure regime that seeks to disrupt and broadly supersede the imperfect moral authority of reasonable, well-intended parents—in all of their many forms and situations.

3.
In separating migrant families seeking asylum, President Trump brought us into dystopia at last. It is a small comfort that he clearly knew from the outset that this action was morally untenable. He told reporters that he “hated” the policy of family separation, claiming that it was “the Democrats’ fault,” the repercussion of a do-nothing Congress. In reality, neither Barack Obama nor George W. Bush separated migrant children from their parents as a standard practice. There is no law or settlement that requires detained families to be broken up, and the general legal consensus was that if Trump were being honest—if family separation had actually been an unwanted, pre-existing policy—he could have ended it, overnight, “with a phone call.”

As usual, executive dissimulation instigated bizarre performances lower down the chain of command: On June 18, DHS Secretary Kirstjen Nielsen held an extraordinary press conference in which she denied the existence of an official family-separation policy while simultaneously arguing for its legitimacy. Nielsen’s denials were particularly astonishing because, two months before her press conference, Attorney General Jeff Sessions had announced—publicly and on camera—the adoption of family separation as a deterrent to improper border crossings. In fact, the DHS had already published guidelines explaining the system of family separation and admitted to detaining approximately 2,000 migrant children. The truth was that the institution of a heartless, zero-tolerance border policy was a calculated effort led by administration strategist Stephen Miller, who was also a key architect of the travel ban in 2017. Writing for The Atlantic, McKay Coppins characterizes Miller’s push for this policy as overtly xenophobic and intentionally inhumane, designed to appeal to Trump’s base while also sowing chaos among his opponents.

To our nation’s credit, outrage was abundant and came from all corners. Evangelist Franklin Graham said that family separation was “disgraceful.” Laura Bush wrote that the policy was “cruel” and that it broke her heart. Even former White House Communications Director Anthony Scaramucci described the policy as “inhumane” and “atrocious.” Governors from eight states announced they would withdraw or deny National Guard troops previously promised to help secure the Southwest border. Even Ivanka Trump, who has yet to be accused of hypersensitivity, allegedly asked her father to change course on family separations at the border. Condemnation also came from both houses of Congress, with Senate Republicans vowing to end family separations if Trump did not. On June 20, after repeatedly claiming that only Congress could end family separations at the border, Trump reversed course, signing an executive order that would ostensibly keep migrant families together during future detentions. Technically, this order allowed family separations to continue as a discretionary practice, until the ACLU brought a lawsuit before Judge Dana Sabraw of the Federal District Court in San Diego, who issued an injunction that temporarily halted family separations and required all separated migrant children be reunited with their parents within 30 days—a requirement that was not met.

As far as steps down a slippery slope toward totalitarianism go, Trump’s “zero-tolerance” border policy has been significant. Nearly 3,000 migrant children were traumatically separated from their parents, with some flown across the country. In Texas, children were routed to a detention facility in a converted Walmart Supercenter in Brownsville and a tent-city detention center near the border station in Tornillo—where summertime temperatures regularly approach 100°F. Some migrant children and babies were kept in cages—a term the administration resisted but could not deny, just as the smiling image of Donald Trump in the converted Walmart cannot be reasonably considered anything other than gloating propaganda.

For many migrants, significant emotional and psychological damage has already been done. Recently, dozens of female migrants in a Seattle-area detention facility were separated from their children, having to endure hearing them crying through the walls. One such detainee informed U.S. Rep. Pramila Jayapal that she told a Border Patrol agent she wanted to see her children, to which the agent replied, “You will never see your children again. Families don’t exist here.” That same week, a Honduran man named Marco Antonio Muñoz, who had been separated from his wife and toddler after crossing the Southwest border, hanged himself in his Texas holding cell.

4.
Not only has the executive branch of this government launched an assault on the dignity and sanctity of the family; it has simultaneously begun work to erode the permanence of citizenship, through a process of “denaturalization”—an action not attempted since the paranoid 1950s of Joseph McCarthy and the Red Scare. This would transfer the authority to strip citizenship from the court system to law enforcement agencies such as DHS or ICE, which would presumably go looking for naturalized Americans who may have misrepresented themselves in some way during their application for citizenship. This situation would subject naturalized citizens to the paranoia and potential exploitation of an East German-like police state, in which they are under warrantless surveillance, threatened by informants, and potentially expugnable for nothing more heinous than a paperwork error. Simultaneously, conservatives such as Tucker Carlson have argued for a referendum on birthright citizenship, the foundation of the equality Americans purport to enjoy. This fits with the administration’s pattern of using diverse methodologies to thwart and rescind legal and illegal residency alike, in what has increasingly come to look like a new front in the multi-pronged effort to alter the racial and cultural demographics of the electorate. This, too, conforms to the genre of dystopia: the existence of a large and oppressed underclass living adjacent to privileged elites, who are sometimes floored to learn that not everyone perceives the status quo as the next-best thing to a true utopia.

If given even tacit approval, policies like separating families at the border will lead to an open season on immigrants—legal residents and undocumented migrants alike—as well as millions of other natural-born and naturalized citizens who are not both white and perfectly fluent in English. We will see an emboldened expansion of unconstitutional checkpoints at places like airports and bus depots. We will see the normalization of racial profiling. Our children will see their friends taken out of school without warning. They will be disappeared.

But if we’ve read our dystopian literature, we are prepared. To a degree, we are insulated. We can understand this moment in history, and how comforting it must feel to curl up inside the illusory sense of security offered by an impenetrable border, or a leader who boldly intones our weaker ideas and more shameful suspicions, or some fatuous, utopian aphorism about making a nation great again. We will remind ourselves and each other what is at stake. We will remember that the only thing we need to know about utopia is that nobody actually lives there.

Image: Flickr/Karen Roe

I Greet You in the Middle of a Great Career: A Brief History of Blurbs

1.
Chabon. Obreht. Franzen. McCann. Egan. Brooks. Foer. Lethem. Eggers. Russo.

Possible hosts for Bravo’s America’s Next Top Novelist? Dream hires for the Iowa Writers’ Workshop?

Nope — just the “Murderers’ Row” of advance blurbers featured on the back of Nathan Englander’s new effort, What We Talk About When We Talk About Anne Frank. And what an effort it must be: “Utterly haunting. Like Faulkner [Russo] it tells the tangled truth of life [Chabon], and you can hear Englander’s heart thumping feverishly on every page [Eggers].”

As I marvel at the work of Knopf’s publicity department, I can’t help but feel a little ill. And put off. Who cares? Shouldn’t the back of a book just have a short summary? Isn’t this undignified? But answering these questions responsibly demands more than the reflexive rage of an offended aesthete (Nobody cares! Yes! Yes!). It demands, I think, the level-headed perspective of a blurb-historian…

2.
Let’s be clear: blurbs are not a distinguished genre. In 1936 George Orwell described them as “disgusting tripe,” quoting a particularly odious example from the Sunday Times: “If you can read this book and not shriek with delight, your soul is dead.” He admitted the impossibility of banning reviews, and proposed instead the adoption of a system for grading novels according to classes, “perhaps quite a rigid one,” to assist hapless readers in choosing among countless life-changing masterpieces. More recently Camille Paglia called for an end to the “corrupt practice of advance blurbs,” plagued by “shameless cronyism and grotesque hyperbole.” Even Stephen King, a staunch supporter of blurbs, winces at their “hyperbolic ecstasies” and calls for sincerity on the part of blurbers.

The excesses and scandals of contemporary blurbing, book and otherwise, are well-documented. William F. Buckley relates how publishers provided him with sample blurb templates: “(1) I was stunned by the power of [ ]. This book will change your life. Or, (2) [ ] expresses an emotional depth that moves me beyond anything I have experienced in a book.” Overwrought praise for David Grossman’s To the End of the Land inspired The Guardian to hold a satirical Dan Brown blurbing competition. My personal favorite? In 2000, Sony Pictures invented one David Manning of the Ridgefield Press to blurb some of its stinkers. When Newsweek exposed the fraud a year later, moviegoers brought a class action lawsuit on behalf of those duped into seeing Hollow Man, The Animal, The Patriot, or Vertical Limit (Manning on Hollow Man: “One hell of a ride!” — evidently moviegoers are easy marks).

When did this circus get started? It’s tempting to look back no further than the origins of the word “blurb,” coined in 1906 by children’s book author and civil disobedient Gelett Burgess. But blurbs, like bullshit, existed long before the term coined to describe them (“bullshit,” in case you were wondering, appeared in 1915). They were born of marketing, authorial camaraderie, and a genuine obligation to the reader, three staples of the publishing industry since its earliest days, to which we will turn momentarily.

But before hunting for blurbs in the bookshops of antiquity, it’s important to get clear on what we’re looking for. Laura Miller at Salon writes: “The term ‘blurb’ is sometimes mistakenly used for the publisher-generated description printed on a book’s dust jacket — that’s actually the flap copy. ‘Blurb’ really only applies to bylined endorsements by other authors or cultural figures.” Miller can’t be completely right. For the consultants at Book Marketing Limited — and their numerous big-name clients — blurb describes any copy printed on a book, publisher-generated or otherwise, as evidenced by the criteria for the annual Best Blurb Award (ed. note: this is the typical British usage). So much for authorship. The term is often used of bylined endorsements that appear in advertisements. So much for physical location. And if we try to accommodate author blurbs, even Wikipedia’s “short summary accompanying a creative work” isn’t broad enough.

What a mess. In the interest of time I’m going to adopt an arbitrary hybrid definition — blurb: a short endorsement, author unspecified, that appears on a creative work. So Orwell’s example and Manning’s reviews would be disqualified if they didn’t appear on a book or DVD case, respectively. I’ll leave that legwork to someone else, because we’ve got serious ground to cover.

3.
If you needed beach reading in ancient Rome, you’d probably head down to the Argiletum or Vicus Sandaliarium, streets filled with booksellers roughly equivalent to London’s Paternoster Row. But how to know which books would make your soul shriek with delight? There was no Sunday Times; newspaper advertising didn’t catch on for another 1,700 years, and neither did professional book reviewers. Aside from word of mouth, references in other books, and occasional public readings, browsers appear to have been on their own.

Almost. Evidence suggests that booksellers advertised on pillars near their shops, where one might see new titles by famous people like Martial, the inventor of the epigram (nice one, Martial). It’s safe to assume that even in the pre-codex days of papyrus scrolls, a good way to assess the potential merits of Martial’s book would have been to read the first page or two, an ideal place for authors to insert some prefatory puff. Martial begins his most well-known collection with a note to the reader: “I trust that, in these little books of mine, I have observed such self-control, that whoever forms a fair judgment from his own mind can make no complaint of them.” Similar proto-blurbs were common, often doubling as dedications to powerful patrons or friends. The Latin poet Catullus: “To whom should I send this charming new little book / freshly polished with dry pumice? To you, Cornelius!” For those who weren’t the object of the dedication, these devices likely served the same purpose that blurbs do today: to market books, influence their interpretation, and assure prospective readers they kept good company.

Nearly fourteen hundred years passed before Renaissance humanists hit on the idea of printing commendatory material written by someone other than the author or publisher. (Or maybe they copied Egyptian authors and booksellers, who were soliciting longer poems of praise (taqriz) from big-shot friends in the 1300s.) By 1516, the year Thomas More published Utopia, the practice was widespread, but More took it to another level. He drew up the blueprint for blurbing as we know it, imploring his good friend Erasmus to make sure the book “be handsomely set off with the highest of recommendations, if possible, from several people, both intellectuals and distinguished statesmen.” And so it was, with a number of letters, including one from Erasmus (“All the learned unanimously subscribe to my opinion, and esteem even more highly than I the divine wit of this man…”), and a poem by David Manning’s more eloquent predecessor, a poet laureate named “Anemolius” who praises Utopia as having made Plato’s “empty words… live anew.” What would he have written about The Patriot?

Hyperbole, fakery, shameless cronyism: though it will be another three hundred years before blurbs make their way onto the outside of a book, things are looking downright modern. In the 1600s practically everyone wrote commendatory verses, some of which were quite beautiful, like Ben Jonson’s for Shakespeare’s First Folio: “Shine forth, thou Star of Poets, and with rage / Or influence, chide or cheer the drooping stage, / Which, since thy flight from hence, hath mourned like night, / And despairs day, but for thy volume’s light.” (Interestingly, Shakespeare himself never wrote any — one can only imagine what a good blurb from the Bard would have done for sales.)

It was only a matter of time before things got out of control. The advent of periodicals in the early 18th century facilitated printing and distribution of book reviews, and authors and publishers wasted no time appropriating this new form of publicity. Perhaps the best example is Samuel Richardson’s wildly successful Pamela, an epistolary novel about a young girl who wins the day through guarding her virginity. Richardson made excellent use of prefatory puff, opening his book with two long reviews: the first by French translator Jean Baptiste de Freval, the second unsigned but likely written by Rev. William Webster, which first appeared as pre-publication praise in the Weekly Miscellany, one of Britain’s earliest periodicals.

Hyperbole? “This little Book will infallibly be looked upon as the hitherto much-wanted Standard or Pattern for this kind of writing”; “The Honour of Pamela’s Sex demands Pamela at your Hands, to shew the World an Heroine, almost beyond example…”

Fakery? The book also had a preface by the “editor,” really Richardson himself, which concluded a laundry list of extravagant praise with the following: “…An editor may reasonably be supposed to judge with an Impartiality which is rarely to be met with in an Author towards his own Works.”

Shameless cronyism? De Freval was in debt to Richardson when he wrote his review, as was Rev. Webster, whose Weekly Miscellany was funded partially by Richardson.

All of this sent Henry Fielding over the edge. Nauseated as much by the ridiculous blurbs as the content of the novel, Fielding wrote a satirical response entitled Shamela, which he prefaced with a note from the editor to “himself,” a commendatory letter from “John Puff, Esq.,” and an exasperated coda: “Note, Reader, several other COMMENDATORY LETTERS and COPIES of VERSES will be prepared against the NEXT EDITION.”

While Fielding may have been the first to parody blurbs, it was another literary giant who truly modernized them. A master of self-promotion, Walt Whitman knew exactly what to do when he received a letter of praise from Ralph Waldo Emerson. The second edition of Leaves of Grass is, as far as I know, the first example of a blurb printed on the outside of a book, in this case in gilt letters at the base of the spine: “I Greet You at the Beginning of a Great Career / R W Emerson.” (Emerson’s letter appeared in its entirety at the end of the book along with several other reviews — three of which were written by Whitman — in a section entitled “Leaves-Droppings.”)

Whitman’s move wasn’t completely unprecedented. The earliest dust jacket in existence (1830) boasts an anonymous poem of praise on the cover, and printers had long been in the habit of putting their device at the base of the spine. Nevertheless, the impulse to combine them with a bylined review was sheer genius, and Emerson’s blurb can be read as greeting not only Whitman, but also the great career of its own updated form.

4.
After Whitman there were further innovations. A century ago, fantasy author James Branch Cabell (unsung favorite of Mark Twain and Neil Gaiman) prefigured self-deprecators like Chris Ware by including negative blurbs at the back of his books: “The author fails of making his dull characters humanely pitiable. New York Post.” Or, as Ware put it on the cover of the first issue of Acme Novelty Library: “An Indefensible Attempt to Justify the Despair of Those Who Have Never Known Real Tragedy.” Unlike Cabell’s, Ware’s first negative blurb was self-authored, but those featured on Jimmy Corrigan were not. Marvel Comics followed suit when it issued its new “Defenders.” (A related strategy — Martin Amis’ The Information was stickered “Not Booker Prize Shortlisted.”)

These satirical strategies highlight the increasingly common suspicion, nascent in Fielding’s parody of Richardson, that blurbs just aren’t meaningful. Publishers, however, have evidently concluded that blurbs may not be meaningful, but they sure help move merchandise. Witness the advent of two recent innovations in paperback design: the blap and the blover (rhymes with cover).

The blap is a glossy page covered in blurbs that immediately follows the front cover. In deference to its importance, the width of the cover is usually reduced, tempting potential readers with a glimpse of the blap, and perhaps even accommodating a conveniently placed blurb that runs along the length of the book.

The blover is essentially a blap on steroids, literally a second book cover, made from the same cardstock, that serves solely as a billboard for blurbs. Blovers are not yet widespread, but given the ubiquity of blaps it is only a matter of time. (For an extreme case see The Immortal Life of Henrietta Lacks, where the blover’s edge sports a vertical banality from Entertainment Weekly — “I couldn’t put the book down.” — not to mention the 56 blurbs on the pages that follow.)

Blovers and blaps… what next? For my part, I can see where Orwell, Paglia, and Miller are coming from, and I certainly wouldn’t bemoan the disappearance of blurbs. But not everyone is like me. Some people enjoy glancing at reviews, or choosing a book based on the endorsements of their favorite authors. Blurbs sell books (maybe), and they allow established writers to help out the newbies. Those are good things. And since regulating them is as unfeasible as banning reviews, as long as blovers don’t replace covers I guess blurbs are a genre I can live with. And who knows — one day Murderers’ Row might be batting for me.

Previously: To Blurb or Not to Blurb

Image credit: wikimedia commons

On Repetition

“On Repetition” was delivered as a craft talk at the 2010 Tin House Writers Workshop.

1.
Not long ago, James Wood wrote a review of Geoff Dyer’s Jeff in Venice, Death in Varanasi that struck me as a bit myopic. It wasn’t what Wood said about Jeff in Venice, Death in Varanasi that seemed short-sighted to me – it was what he said about the rest of Dyer’s career. Wood just didn’t get it, he admitted. None of Dyer’s books seemed to fit together – they were all about different things! And they’d all been executed in different ways too, almost as though they weren’t even by the same writer! What’s a critic supposed to do when a writer keeps on trying new things? Read it all? Sheesh! Who’s got time for that? Don’t you people understand deadlines?

I’m picking on James Wood here – and I like James Wood, I think the literary world is vastly richer for James Wood’s voice and presence in it – because he sort of duffed this one. There is a kind of common denominator in Dyer’s work, and tapping into it, I think, is central to coming to an understanding of at least one way to approach the craft of creative nonfiction, and it says something too about the state of literature today.

2.
Also not long ago, Geoff Dyer wrote a review of Don DeLillo’s Point Omega that was also myopic. Dyer complained that what DeLillo had done in Point Omega had been done before and better, by DeLillo himself. This is interesting not just because it’s the exact opposite of Wood’s criticism of Dyer. It’s interesting because it’s a crime – if it’s a crime – of which Geoff Dyer is also guilty. That guy whose books are a problem because they aren’t anything like one another has also made the mistake of saying the same thing over and over. I’m quite sure this accounts for Geoff Dyer’s wide-ranging popularity.

As I see it, Dyer has two modes as a writer. First he has a kind of rakish mode in which he serves himself up as a leaner, wimpier version of James Bond, that post-Empire Brit superspy who shuttles around the world bedding as many women as he can. Truth be told, Dyer’s travel writing can seem a bit like this at times. But of course while James Bond saves the planet again and again – reminding the rest of the world that, while the Empire might be over, and England has surely seen her best days, the world still needs her (which suggests in turn that James Bond is a kind of Frodo Baggins with a tuxedo and a Beretta) – Dyer, by contrast, in his rakish mode, just seems to limp around and hang out and say funny, foolish things and get girls anyway. But the Dyer/Bond parallel is there. Don’t get me wrong. I like Geoff Dyer, and I even like the rakish mode of Dyer. I like it even though it creates arguments every time my girlfriend and I take turns reading Dyer passages back and forth in the bathtub. But it’s also this Dyer mode that is susceptible to repetition. I’m not going to list examples here (and I guess I’m not surprised that Wood didn’t note them), because that’s not what this essay is trying to do, but suffice it to say that Dyer’s guilt over having hiked back across terrain his work had already mapped enabled him to recognize when DeLillo was doing the same thing. One can imagine Dyer’s stream of thought: Ah-ha, DeLillo, I see you! I see what you’re doing. I do it myself from time to time, though maybe I don’t recognize it until later, and even though I can acknowledge that there might be good reasons why a writer would repeat himself, I’m not, in a spirit of writerly camaraderie, going to let it pass this time. No! Instead, I will make a big fucking deal about it in the New York Times because that’s what James Fucking Wood just did to me.

Perhaps now is a good time to mention that I think the literary world is vastly richer for Geoff Dyer’s voice and presence in it.

3.
All of which adds up to a kind of contradictory set of truths about books and publishing in the abstract: don’t repeat yourself, and don’t write books that are too different from one another. Other writers will pillory you for the first, and publishers will be more than happy to pigeonhole you from the moment you achieve anything like success. Blow out your advance? Great. Now write the same exact book again.

Thinking about books and publishing in the abstract was exactly what I was doing around about 1999, when I was a decade out from my degree at Iowa, had a dozen short stories but no collection published, and the pages of a failed novel sat scattered all over my crappy apartment as though to collect the droppings of a huge flock of homing pigeons that never came home. I was working then as a part-time casino dealer in Atlantic City, and though I’d once turned up my nose at nonfiction, I was now at least trying to turn up my nose at a career as a casino dealer in Atlantic City. After a not inconsiderable effort I had managed to sell an idea for a book of nonfiction. I say I’d been thinking in the abstract because it wasn’t really until I’d signed the contract – nonfiction tending to sell by way of book proposal (the writer is a kind of sub-contractor, perhaps like a plumber who shows up only occasionally and always late once he’s so underbid his competitors that he’s barely making enough to feed himself, let alone be on time) – not until then did it really occur to me that I’d actually have to write a book of nonfiction. This realization manifested itself physiologically as panic, a sudden peculiar sensation all across the body: it felt, instantaneously, as though every piece of myself was being worked on by some occult vibration, that every part of me had begun to jiggle with manic energy, and every cell, every nucleus, every mitochondrion, seemed on the brink of imploding like a cathode-ray tube or a dwarf star going supernova. In other words, I fucking freaked out.

In a way, it was good that I lived in Atlantic City at this time. I’d had a number of writer friends, of course, from previous stints in graduate school, but after I went to Atlantic City these relationships had tended to fade, as is perhaps only natural. I say this is good because it meant that I had only one writer friend I could call and fucking freak out to. And I did. This friend had written several books by then, and what I did – working on the theory that previous experience writing books gives one insight as to how the process can and should be embarked upon – was call him and ask, well, so, how do you write a book? My friend didn’t know. My friend had no idea how to write a book. It turned out that he had managed to write several books without ever either acquiring the first thing one should know or formulating any general principle about writing books. Our conversation quickly became a discussion of how on earth he was going to figure out how to write his next book. When I hung up, I was left alone with my book contract and my panic in my empty roost in Atlantic City.

4.
So here’s what I did: I invented the idea of the book.

The book was to be about chess – the game, chess. In Atlantic City, I’d gotten to know an African American chess master named Glenn Umstead, a kind of quirky guy with a difficult personality who was nevertheless one of just forty black men in the history of the world to have achieved chess’s master ranking. That sounds pretty straightforward, but saying you’re going to write a buddy story/subculture book – which is pretty much what I said in my book proposal – is a whole lot easier than coming up with a way of actually executing it. I’m exaggerating a bit when I say I invented the idea of the book, but that’s how it felt as I was doing it – it felt as though I was inventing literature wholesale. And that moment when I acquired my essential strategy was recorded in the book itself:

…I wanted to write something about the game. But I still didn’t know what it was.

My relationship with Glenn began to change. Now that I was a lay historian, our bond became a version of the classic conflict between player of the game and student of the game… We were an even odder couple now. He was black and I was white, and we were like chessmen opposed on a board that was the game itself.

From there, the book came not easily but possibly – it was possible now. What I’d learned was that the way to write a book was to let the subject matter tell you how it ought to be written about.

5.
And it turns out that’s the common denominator of Geoff Dyer’s other mode as a writer: the mode when he stops trying to lay girls and gets down to the hard work of reading, writing, and thinking. A couple examples. Dyer’s book-length fret over D.H. Lawrence, Out of Sheer Rage, emphasizes on a number of occasions that its method is lifted from its subject: “If this book aspires to the condition of notes that is because, for me, Lawrence’s prose is at its best when it comes closest to notes.” And in introducing the partially imagined narratives of But Beautiful: A Book About Jazz, Dyer again lets the subject inspire the form he’ll use to examine it:
These episodes are part of a common repertory of anecdote and information – “standards” in other words, and I do my own versions of them, stating the identifying facts more or less briefly and then improvising around them, departing from them completely in some cases. This may mean being less than faithful to the truth but, once again, it keeps faith with the improvisational prerogatives of the form.
There are many examples of this outside of Dyer. One is Andrei Codrescu’s recent The Posthuman Dada Guide, which executes a Dadaist encyclopedia of Dada. Another is Jay Kirk’s soon-to-be-released Kingdom Under Glass, which reassembles the facts of the biography of taxidermist Carl Akeley so as to create an Akeley-inspired diorama of his life. But what’s already apparent is that this divining of one’s method from one’s subject is not only a way to make a book seem possible as you approach it, it’s also a way to avoid repetition, to bring to every work the excitement of invention while retaining some essential version of the self: the common denominator of one’s books being not their subject matter, but their organizing intellect, their animating spirit – their author, after all.

6.
Not long after my book about chess appeared and chalked up a handful of prominent, promising reviews, my editor asked me to come to New York. She bought me lunch, chatted me up. We talked about the future. She wanted me to write another book about chess. “Maybe a chess mystery,” she said, jiggling her shoulders in what was either a fair imitation of a stripper twirling her pasties or a hopeful anticipation of the reaction readers might have to the book she proposed. I actually considered this offer for a moment. There is a true story about a famous chess player being called in to assist with a serial killer investigation. But that moment didn’t last long. I realized almost at once that I would simply be repeating myself.

And the truth is, I don’t want to be a writer like that: a writer so imprisoned by their subject matter – chess writer, food writer, religion writer, etc. – that if they ever depart from it, if their publishers ever let them depart from it, you can be pretty sure that their departures will have only that level of appeal, the appeal of something attempting, straining, struggling and probably failing to branch out. I don’t think that’s the ideal literary life. And yet, to reiterate, this is something writers are more or less forever doing – repeating themselves, writing figurative if not literal sequels, trying to please again and again the same readers they pleased once – and other writers who are guilty of the same thing admonish them for it, again and again.

7.
So I have tried to be a little different. I went on to write a Jamesian biography of William James, and I cringed anew when my (new) editor told me that he wished the book had been a bit more like my first. Whatever, dude. From there, I set out to write a history of utopian thought and literature that would stylistically emulate Thomas More’s original Utopia, which blended a kind of analytical discourse with what scholars called “speaking pictures” – narrative.

There were two basic problems with this. First, I had already written about utopian concepts. I had grown up on a street called Utopia Road in a master-planned community, “Utopia Road” was the title of both my MFA thesis and one of my early short stories, and, to be fully honest, there was palpable utopian fascination in both my chess and James books. In other words, I was repeating myself. No, no – worse than that! I was repeating the shit out of myself! The second problem was that Thomas More had been repeating, too. He was repeating Lucian and Plato and Erasmus and Machiavelli. And soon enough, others were repeating More, repeating Utopia. In fact, others repeated Utopia so often that it became its own genre of literature – a genre so powerful that “utopia” not only became a word, it completed the demigod leap from noun to adjective. You’ll probably better appreciate Thomas More’s Utopia if I tell you not that it’s the most influential novel in the history of mankind, but that it’s the only book whose author is known that has its own index entry in the Chicago Manual of Style. It’s pretty damn impressive – and it’s all a function of repetition. Sort of.

And there’s another problem too – a third problem – because thinking about these two modes of Utopia, discourse and narrative, makes it pretty clear that I’ve been unfair to Geoff Dyer, that his two modes, critic and rake, basically fall under this same description. Indeed, it seems to me now that Dyer’s entire career can be understood as a Utopia-like toggling back and forth – sometimes within a single book, sometimes from book to book – between narrative and analytic modes, and this is what James Wood couldn’t see, couldn’t appreciate, and which I came to appreciate only as a function of the panic that set in when I had to stop thinking about books in the abstract and actually write one.

8.
In 1936, James Agee, two years out from a book of poems and “on loan from the Federal Government,” was assigned to write a series of documentary articles about Alabama tenant farmers for Fortune magazine. One can be pretty sure that Agee’s editor had some ideas about what he wanted to print – his readers had certain expectations based on what they’d read in the magazine before, and Agee’s assignment was to repeat that formula. That’s not what he did. Instead, he produced hundreds of pages of wildly poetic, passionate description of a few families from which he had strived to maintain no objective distance at all. The series of articles was promptly canceled; Let Us Now Praise Famous Men was not published until 1941; sales remained dismal until the book was rediscovered in 1960.

What’s relevant about Let Us Now Praise Famous Men for us, in this essay, is that right in the middle of it Agee pauses in his narrative and delivers a lengthy discussion of what he’s trying to do. It is the bit of analytical discourse to which he has toggled from his narrative descriptions of tenant farmer life. As a kind of set piece, this section of Let Us Now Praise Famous Men, written long before Truman Capote and John McPhee and Geoff Dyer, serves as almost a post-facto manifesto of “creative nonfiction.” This manifesto insists on a stark distinction between creative prose and journalism, and in discussing an attempt to describe a hypothetical street it distinguishes Agee’s methodology from “naturalism:”
As nearly as possible in words (which, even by grace of genius, would not be very near) you try to give the street in its own terms: that is to say, either in the terms in which you…see it, or in a reduction and depersonalization into terms which will as nearly as possible be the “private,” singular terms of that asphalt, those neon letters, those and all other items combined, in that alternation, that simultaneity, of flat blank tremendously constructed chords and of immensely elaborate counterpoint which is the street itself.
I take Agee to mean that subjects ought to reveal themselves to you, that the writer’s job, the writer’s craft, is to be attentive to that which shall be rendered. A street will reveal to you the terms, the vocabulary, with which it ought to be described just as surely as an abstract concept like William James or taxidermy or chess will proffer its proper strategy after some lengthy period of measured, painful, and, above all, literary meditation. Agee goes on to argue that words necessarily fail, and in so doing he echoes – or rather, anticipates – Dyer’s hope for what a creative use of language and form can bring to a consideration of jazz:
Words cannot embody; they can only describe. But a certain kind of artist, whom we will distinguish from others as a poet rather than a prose writer, despises this fact about words or his medium, and continually brings words as near as he can to an illusion of embodiment. In doing so he accepts a falsehood but makes, of a sort in any case, better art.
9.
Ostensibly, this is an essay about the craft of creative nonfiction. But I think what I’m ultimately trying to say is that it’s dangerous to say too much too definitively about craft in the abstract. If you feel absolutely overwhelmed by a project – that’s good. If you have absolutely no idea how or where to begin – that’s good too. No matter where one is in one’s career, a writer, it seems to me, ought to feel more or less completely at sea as they begin to approach the question or the subject they hope to address. There are two kinds of repetition. There is the kind we find inside our work, the themes that burble up lava-like from our subconscious again and again, and which we cannot resist and should not, I think, criticize in others. And then there is the repetition that ought to be resisted, that which gives us a program, a strategy that can be applied to any subject. This we should criticize in others. Art should never be the result of habit, it should strive eternally for the fresh and the new even when we work in forms we did not invent. Craft, we should vigilantly remind ourselves, means to make something absolutely new where before there was nothing at all.

The Elusive Omniscient

As I was taking notes for a new novel recently, I took a moment to consider point of view. Fatigued from working on one manuscript with multiple first-person limited narrators, and then another with two different narrative elements, I thought how simple it would be, how straightforward, to write this next book with an omniscient point of view. I would write a narrator who had no constraints on knowledge, location, tone, even personality. A narrator who could do anything at any time anywhere. It wasn’t long before I realized I had no idea how to achieve this.

I looked for omniscience among recent books I had admired and enjoyed. No luck. I found three-handers, like The Help. I found crowd-told narratives, like Colum McCann’s elegant Let The Great World Spin. I found what we might call cocktail-party novels, in which the narrator hovers over one character’s shoulder and then another’s, never alighting for too long before moving on.

On the top layer of my nightstand alone, I found Lionel Shriver’s The Post-Birthday World and Jane Gardam’s Old Filth and The Man in the Wooden Hat. The first is a formal experiment in which alternating narratives tell the same story of a marriage—which is really two different stories, their course determined by just one action. The second two give up on shared perspective altogether, splitting the story into separate books. Old Filth tells his story and The Man in the Wooden Hat tells hers. If the contemporary novel had a philosophy, it would be Let’s Agree To Disagree.

It’s tempting to view this current polyphonic narrative spree as a reflection on our times. Ours is a diverse world, authority is fragmented and shared, communication is spread out among discourses. Given these circumstances, omniscience would seem to be not only impossible but also undesirable—about as appropriate for our culture as carrier pigeons. It’s also tempting to assume that if we’re looking for narrative unity, we have to go back before Modernism. We can tell ourselves it was all fine before Stephen Dedalus and his moo-cow, or before Wyndham Lewis came along to Blast it all up.

No, if omniscience was what I wanted for my next project, I would have to look back further, to a time when the novel hadn’t succumbed to the fragmentation of the modern world.

But try it. Go back to the Victorians or further back to Sterne, Richardson, and Fielding. There’s no omniscience to be found. I suppose I could have spared myself the trouble of a search by looking at James Wood’s How Fiction Works. “So-called omniscience,” he says, “is almost impossible.” It turns out that the narrative unity we’ve been looking for is actually a figment of our imagination. The novel maintains an uneasy relationship with authority—not just now, but from its very beginnings.

Defoe’s Robinson Crusoe is often credited with being the first novel in the English language, published in 1719. The anxieties attendant on that role are evident in the way the book is structured. Not comfortable claiming to be simply an invention, Crusoe masquerades as a true story, complete with an editor’s preface declaring the book to be “a just history of fact; neither is there any appearance of fiction in it.” Defoe originates the James Frey approach to novel-writing, using the pretense of truth as a source of narrative power.

He repeats almost the same phrasing four years later, in Roxana: “The foundation of this is laid in truth of fact, and so the work is not a story, but a history.” The words seem redundant now—truth, fact, foundation, history. It’s a protesting-too-much that speaks to the unsettled nature of what Defoe was doing: telling a made-up story of such length, scope, and maturity at a time when doing so was still a radical enterprise.

But the most interesting expression of the novel’s predicament comes one year before Roxana, in 1722, when Defoe opens Moll Flanders with an excuse: “The world is so taken up of late with novels and romances that it will be hard for a private history to be taken for genuine.” It’s a clever move. Defoe acknowledges the existence of enough novels that you’d think his position as novelist would be secure (the more the merrier), but he insists that he’s doing something different—and then, in the same breath, he assumes our lack of interest and preempts it by setting up the other novels as tough competition.

Defoe’s pretense of editors, prefaces, and memorandums is the first stage of what I’ll call the apparatus novel, followed a decade or two later by its close cousin, the epistolary novel. Like its predecessor, the epistolary novel can’t just come out and tell a made-up story—never mind tell one from an all-knowing point of view. In Richardson’s Clarissa especially, the limitations of the individual letter-writers’ points of view create an atmosphere of disturbing isolation. As we read through Clarissa’s and Lovelace’s conflicting accounts, we become the closest thing to an omniscient presence the novel has—except we can’t trust a word of what we’ve read.

So where is today’s omniscience-seeking reader to turn? Dickens, don’t fail me now? It turns out that the Inimitable Boz is no more trustworthy in his narration than Defoe or Richardson or the paragon of manipulative narrators, Tristram Shandy. In fact, Dickens’ narrators jump around all over the place, one minute surveying London from on high, the next deep inside the mind of Little Dorrit, or Nancy, or a jar of jam. Dickens seems to have recognized the paradox of the omniscient point of view: with the ability to be everywhere and know everything comes tremendous limitation. If you’re going to let the furniture do the thinking, you’re going to need the versatility of a mobile and often fragmented narrative stance.

And Dickens is not alone in the 19th century. The Brontës? Practically case studies for first-person narration. Hardy? Maybe, but he hews pretty closely to one protagonist at a time. (Though we do see what’s happening when Gabriel Oak is asleep in Far From the Madding Crowd.) Dickens’ good friend Wilkie Collins (who famously said the essence of a good book was to “make ‘em laugh, make ‘em cry, make ‘em wait”)? The Moonstone is a perfect example of the apparatus novel, anticipating books like David Mitchell’s Cloud Atlas, complete with multiple narrators, various types of discourse, and full of statements that successive narrators correct or undermine.

This isn’t to say that there are no omniscient novels anywhere. Look at Eliot or Tolstoy, to jump cultures, or Austen. Sure, the line on Austen is that she could only write about drawing-room life, but she still writes books in which the narrator knows everything that’s going on in the novel’s world. Pride and Prejudice begins with its famous statement about men, money, and wives, and then easily inhabits the minds of various members of the Bennet family and their acquaintances—not through first-person limited narration, but through the more detached stance of a true omniscient narrator. Doubtless, readers could come up with other works written from an all-knowing perspective. Friends have suggested books as different as The Grapes of Wrath and One Hundred Years of Solitude as omni-contenders.

All the same, what seems key about the novel is that what we think of as a historical evolution—or a descent from a unified to a fragmented perspective—isn’t an evolution at all. In fact, the novel has always been insecure. It’s just that the manifestation of its insecurity has changed over time. At the outset, it tried to look like a different sort of artifact, a different kind of physical manuscript almost: the novel masked as a diary or a journal—because, really, who knew what a novel was anyway? Later, seeking to convey more intimate thoughts, it took the form of letters, acting like a novel while pretending to be something else, just in case. This is a genre that constantly hedges against disapproval. It’s like a teenager trying not to look like she’s trying hard to be cool. (Novel, who me? Nah, I’m just a collection of letters. I can’t claim any special insight. Unless you find some, in which case, great.)

Omniscience is something that the novel always aspires to but never quite achieves. It would be nice to have the authority of the all-seeing, all-knowing narrator. But we are too tempted by other things, like personality, or form, or the parallax view that is inherent in our existence. This is why, I think, when you ask readers to name an omniscient novel, they name books that they think are omniscient but turn out not to be. Wishful thinking. The omniscient novel is more or less a utopia, in the literal meaning of the word: nowhere.

Appropriately, Thomas More structured Utopia as a kind of fiction, an apparatus novel about a paradise whose exact location he had missed hearing when someone coughed. This was in 1516, two full centuries before Robinson Crusoe, making Utopia a better candidate for First English Novel. But that’s a subject for another day.


Drifted toward Dragons: Utopia Today

It may seem that we have drifted toward dragons when a satirist sits at a senator’s desk (Al Franken) and a comedian’s criticisms land so dry that they are mistaken for affirmation (Stephen Colbert).  In fact, we are repeating a journey Sir Thomas More traveled exactly five hundred years ago.

In 1509, the Dutch scholar Desiderius Erasmus was struck by inspiration while on horseback, on his way to visit More.  The two friends had translated Lucian’s satires together.  Once installed in More’s home, Erasmus penned In Praise of Folly, an attack on the rampant stoicism of the age (think Dick Cheney) and a defense of More’s famous wit.  More was fond of bawdy jokes and puns, and reportedly proud of the fact that his humor was sometimes so arid that many didn’t even perceive it.

In 1516, More produced the short novel Utopia, a portrait of a happy island nation whose benevolent ruler advocates communal property, religious freedom, and marital separation.  Utopia spawned an entire genre of literature, and apart from the Bible it’s hard to imagine a book that has proven to be so influential.  Utopia borrows heavily from both Lucian and In Praise of Folly, which makes our current moment the quincentennial of the gestation period (1509-1516) of what is perhaps the most important novel in the history of mankind.

Oddly, the book succeeded only because most people misunderstood it.

More wrote Utopia as a young man.  Erasmus published it, and as Erasmus prepared it for press More hustled after blurbs like any budding author.  But even he would have admitted that the initial rollout didn’t go quite as planned.  He had hoped to appeal to an audience that would understand the book’s classical puns as an invitation to an ironic interpretation.  (Greek: “Utopia” = “no place.”)  In other words, he wanted to criticize everything the book seemed to stand for.  In actuality, More was a monarchist who defended private property, participated in Lutheran-burning, and later lost his head because he refused to sanction his king’s divorce.

His arid wit backfired this time.  Within More’s lifetime, Utopia was cited as justification for communal property in the Peasants’ War, and it was used as a blueprint for civic organization in towns in southern Mexico.

“This fellow is so grim that he will not hear of a joke,” More complained.  “That fellow is so insipid that he cannot endure wit.”  Once officially a member of the court of Henry VIII, he suggested that Utopia be burned.

It was too late.  And given the impact of utopian thought since then – the basic tenets of communism, capitalism, fascism, and socialism all trace back to utopian texts – it’s fair to characterize the last five hundred years of human civilization as a history of not-getting-the-joke of Utopia.  That history will repeat itself if the next five hundred years are best characterized by an affectless viewing of “The Colbert Report.”  The evidence that our world, too, suffers from a kind of “irony deficiency” doesn’t stop with satiric news.  The mantra of Oliver Stone’s Gordon Gekko (“Greed is good”) is a witless business plan for many, and the mocking dirty limericks of Andrew Dice Clay (a Jewish comedian) became a rallying cry for Italian misogynists who took them for rhyming mission statements.

Of course, the politics now are all reversed.  The funny guys are all on the left; somber cowboys brood stage right.  Were he alive today, Thomas More might feel most at home among neo-Stoics who, under the guise of a “real America,” plan to secede, plot overthrow, or hope to crown Sarah Palin.

Utopia – the un-ironic version of it that proved fruitful in shaping modern democracy – is the victim of all this.  It’s now largely a pejorative term.  Propagandists who currently target “hope” have already succeeded in making “utopia” synonymous with socialist idealism.  They forget that free markets, mutually assured destruction, and peace through superior firepower are each just as easy to link back to utopian tracts.  Utopia is the scope of the plan, not the nature of the product.

In America, it’s particularly tough to escape the influence of that un-got joke.  President Obama offers frequent reminders that the United States is an ongoing experiment.  Our goal, in our founding documents, is to become a “more perfect” union.  Only tin ears remain deaf to the utopian echo.  When our politicians deride one another’s plans as utopian, they forget that plans can be made and criticisms leveled only because we all live in a version of More’s joke.  The far right thinks its views are those of the Founding Fathers, and that the country’s enemies are crazy utopians who would undo democracy.  But the Founding Fathers were utopians to a man.  They railed not against taxes, but against taxes without representation.  Today’s conservative spirit, applied to the late eighteenth century, would have resisted even those changes.  George W. Bush once described the benevolent dictator as the best form of government, and Cheney’s quest to expand executive power betrayed nostalgia for monarchy.  Conservatives long for a despot like More’s ironically intended “King Utopus.”

Yet it’s not just irony deficiency that links us to the past.  We’re also becoming more bawdy.  And in this regard, it doesn’t matter whether you’re Dick Cheney on the floor of Congress or Joe Biden at a presidential press conference.

Perhaps the only thing that explains why viewers today prefer “The Daily Show” to CNN or Fox is that the same cultural mood that produced In Praise of Folly has come around again.  But now that the politics have reversed, we must ensure that the humor is not so subtle it becomes its opposite.  In this regard there is, I dare say, hope.

Not long ago, Jon Stewart conducted a (mostly) sober debate on the financial crisis with a CNBC analyst (and admitted clown).  It was a riveting interview – one in which the absence of artificial poise and stoicism seemed to allow a greater depth of insight.

But when the CNBC clown dodged a question with banter, Stewart called him out on it: “This isn’t a fucking joke.”

And no one laughed.
