Human flight began with laundry. In 1777, Joseph-Michel Montgolfier was watching clothes drying over a fire when he noticed a shirt swept up in a billow of air; six years later, he and his brother demonstrated their hot-air balloon, the first manned flying machine. In Paris, a grand monument was planned to honor the balloon flight, but the project stalled in its earliest stages; all that’s left of it is a five-foot clay model in New York’s Metropolitan Museum of Art. At first glance, it’s hard to see that the sculpted clay depicts a balloon at all — the flying machine is encrusted with layers of tiny, winged, chubby-cheeked cherubs, stoking the fire, wafting up in gusts of hot air, hanging on for the ride. Two grown-up angels ride on the scorching clouds toward the top, one blasting a herald’s trumpet, the other (cheeks puffed up like Louis Armstrong) blowing to make the whole contraption move, straining himself as if it would take all of his lung-power to overcome the combined weight of several dozen dangling babies. And here is my best guess at an explanation: when that clay was modeled, a flying, fire-powered, human-carrying sack of fabric was something entirely new, the highest of high-tech. But flying cherubs? They were old — little putti had been soaring across the ceilings of Europe’s churches and palaces for centuries. It’s only in terms of what’s old that the newest technologies make initial sense. Without the help of the old, they’re incomprehensible, which is as good as invisible. It’s not surprising that technology comes into the world wrapped in metaphor. With the help of a metaphor — a flight powered by angels rather than expanding air — something as alien as a flying machine was domesticated into a visual culture that seemed to make solid good sense. That’s how we’ve always communicated progress to one another, even when the results risk looking ludicrous with a few centuries’ hindsight. 
More than smoothing over progress after the fact, metaphors themselves often drive progress. The insight that turned a balloon into a piece of Baroque art was the same kind of jump that turned a billowing shirt into a flying machine. But if smart figurative thinking can spark and explain new technologies, defective metaphors can do just the opposite. When the words and images we use to familiarize the new become too familiar — when metaphors start to die, or when we forget that they’re only tools — they can become some of the most powerful forces against innovation. It’s not always technical barriers that stop change in its tracks. Sometimes, innovation is limited by language itself. When was the last time, for instance, that you used the word “desktop” to refer to the actual surface of a desk? Our desktops are imaginary now — but in the days of the earliest graphical user interfaces, comparing a computer to a piece of office furniture was odd enough that tech companies had to spell it out for us. “First of all,” read one of the earliest Macintosh print ads, “we made the screen layout resemble a desktop, displaying pictures of objects you’ll have no trouble recognizing. File folders. Clipboards. Even a trash can.” In 1984, when the image was still fresh, your computer interface resembled a desktop; now, it just is one. In 1984, the desktop justified the ways of Jobs to man; but soon enough (to mix metaphors just a little) it became a tyrant in its own right. An intuitive image for a screen resting on a desk made little sense for a screen resting in your hand. The mobile desktop didn’t fail for lack of trying: as Mike Kuniavsky explains in Smart Things, his book on computing design, one of the clunkiest early mobile operating systems failed because it took the desktop so literally. Start up the Magic Cap OS, which debuted on Sony and Motorola handhelds in 1994, and you were faced with an actual desk, complete with images of a touch-tone phone and Rolodex. 
To access other apps, you clicked out of the “office,” walked down the “hallway,” and poked into any number of “rooms.” The internet browser was a further trek: out of the office building, down the main street to the town square, and into a diner, where the web was finally accessible by clicking on a poster. Jump ahead a decade to the iPhone and iPad. To argue that they are so intuitive “because of touchscreens” is to ignore the first step that made their simplicity possible: abandoning a worn-out metaphor. We’d grown so used to desktops, folders, and all the rest that they’d ceased to remind us of objects outside our computers. And once Apple recognized that the metaphor was dying a natural death, it was clear that the desktop could be discreetly buried. (A bit more tentatively — because there are still quite a few old-school PDA fans — I’d suggest that the awkward handwriting-recognition systems of devices like the Newton and PalmPilot were themselves products of faulty metaphors. A PDA may resemble a pen-and-paper notepad, but it’s hardly meant to work like one.) The awareness that metaphors can inhibit innovation as much as they advance it leads any number of technological misfires to make an odd, new kind of sense. Early cars weren’t simply called “horseless carriages”; they were literally designed to resemble carriages with the horse removed; the Model T, in turn, was one of the first cars to successfully eliminate the carriage metaphor. If driverless cars are ever feasible, we might expect the pattern to repeat itself: early entries modeling themselves on familiar sedans and minivans long after their function is gone, and successful competitors breaking through the metaphor entirely, into shapes we haven’t yet imagined. Why, to take another example, were we so attached to manned spaceflight that we spent decades and billions on space shuttle busywork? 
One reason: from Captain Kirk to the word astronaut (literally “star sailor”), we’ve been taught to view space exploration through the metaphor of seafaring adventure. Yet the Curiosity rover team, without resembling swashbuckling sailors in the least, has brought back more knowledge of our solar system than any astronaut to date. Science and math may increasingly be the curriculum’s glory subjects — when’s the last time you heard a politician demanding that schools churn out more classics majors? — but innovation has always demanded just as much verbal creativity, a feeling for the possibilities and limits of words themselves. Innovators need an eye for what George Orwell called “dying metaphors”: not those newly vivid ones (like “desktop” in 1984), nor the dead ones that have stopped reminding us of images at all (like the “hands” of a clock), but the images that have outlived their usefulness. And we need an eye, too, for all the silent biases that creep into tech-talk unawares. As Kuniavsky observes, the metaphor of “cloud” computing suggests an amorphous vapor that “extends beyond our reach and does not have a defined shape or boundary. Events that happen in the cloud may be outside the control of any one person in it.” Does the image of data stored in a cloud lead us to settle for less privacy? Consider the desktop one more time: surely there are powerful economic reasons for the “digital divide,” but hasn’t the desktop metaphor contributed in its own way? From the moment it comes out of its box, your computer presumes that you’re the kind of person who spends most of your time at an office desk. We’re free to write language, images, and anything else with the mushy look of the humanities out of the history of progress. We’re even free, like the state of Florida, to consider charging more for a college education in the comparatively “useless” fields of English and history. 
But the result might be a generation of would-be innovators even more prone to be unaware of, and trapped in, the dominant metaphors of the day — like the sculptor too busy modeling little angels to give much attention to the miraculous flying machine underneath. Incidentally, Paris finally did get a balloon monument, though it took more than a century; it celebrates the aerial messengers of the Franco-Prussian War. Convincingly weightless even cast in bronze, the hot-air balloon sails up out of a circle of human figures. There’s not a cherub in sight.
I was trying to summarize The Brothers Karamazov for my wife when I realized that I was embarrassing myself.

“There’s this rich old man with lots of enemies, and it turns out that someone has bashed his head in. The police have to figure out which of his three sons is responsible for the murder, and so they arrest the oldest son (who’s at a drunken party with his mistress), because he clearly needed the old man’s money to pay off his debts. The oldest son tries to put together a convincing alibi, but at the trial his fiancée rats him out because she’s really in love with his younger brother, and—”

“Let me guess. The butler did it.”

“Oh. Well, yeah, actually, the butler did do it.”

(Spoiler alert?)

Now, the fault here clearly lies with me rather than Dostoevsky, because my 30-second plot summary manages to exclude everything that puts The Brothers Karamazov among the world’s great novels (such as the fact that the butler did it, in part, because of an argument about the moral implications of the non-existence of God). But sometimes summaries, even the most reductive and unfair ones, can be revealing. And what a plot synopsis reveals is how Dostoevsky managed to hang a book of profound questions on some of the most hackneyed conventions of fiction: the murder mystery, the love triangle, the courtroom drama. Conventions are what we make of them, and they are entirely different things in the paws of a hack, or the hands of a master. In one, they are rote, paint-by-numbers exercises that satisfy our hunger for the familiar; in the other, they are closer to archetypes that bear remarkable thematic weight. But not every convention can bear the weight of every theme. The conventional knight’s quest or saint’s life might have been dominant literary conceits in another era, but it’s hard to imagine serious fiction making use of them today. 
And just as conventions go in and out of fashion, they also move into and out of better neighborhoods: up and down the scale that, fairly or not, divides “literary fiction” from “genre fiction.” Today’s literary set-piece becomes tomorrow’s predictable genre exercise — and we can see that process playing out in the sad, but inevitable, decline of the courtroom drama. In the 19th century, Dostoevsky gave over the climax of his magnum opus to a full-blown murder trial with all of the trimmings, from surprise witnesses to chapter-length closing statements to a dramatic reading of the verdict. Nearly a century and a half later, though, the courtroom drama lives in a cultural ghetto. We still love the readymade tension and clash of a good trial, but we largely satisfy that urge in the less reputable precincts of cable TV: an excessive interest in Casey Anthony or Scott Peterson (or worse, Nancy Grace) is usually something to be apologetic about. In fiction, our love of trials is catered to by an entire subgenre of lawyer novels, and by lawyer shows whose verdicts seem to always reflect conveniently on the advocates’ sex lives. Today, it’s hard to imagine a major novel, like The Brothers Karamazov, overcoming that accumulated baggage to make a murder trial its dramatic linchpin. An important reason, I’d suggest, is this: as criminal trials have grown fairer, they’ve also grown less dramatically interesting. The difference lies in the changing possibilities of evidence. What would the trial of Dmitri Karamazov have looked like in a world of DNA testing, security cameras, or cellphone records? What would it have looked like even two decades later, as fingerprinting came into widespread use? Absent such hard evidence, a court would have to focus on “softer” variables: questions of character, psychology, relationships, and memory. 
Not coincidentally, these are exactly the kinds of questions that interest literary novelists — and a trial was once an ideal forum for exploring them. In a world before modern forensic evidence, a criminal trial was much more like a novel: it was more likely to be an exploration of personality, a contest between two different theories of a human being. The growing sophistication of forensic evidence hasn’t erased those questions from the courtroom, but it has relegated them to the background. Given the choice between a 21st-century court and a 19th-century court, we’d be more confident (though certainly not completely confident) that the former gave accurate verdicts. In the latter, however, we’d find much more scope for the ambiguities and dueling interpretations that are crucial to good fiction. That’s the kind of scope we see in the trial of Dmitri Karamazov for the murder of his father. The case is not so much a “whodunit?” as a “who is he?” And his murder trial is an appropriate climax to the novel because it is a struggle, in the absence of hard evidence that points either way, to construct and compare dueling versions of the rash defendant and the victim, his repulsive father. Both prosecution and defense agree that Dmitri repeatedly threatened his father’s life and, on the night of the crime, broke onto his father’s property with the intention of doing him harm; in the process, Dmitri assaulted a servant with a bronze pestle he had in his pocket. The prosecution claims that Dmitri then forced his way into the house, murdered his father with the pestle, and stole 3,000 rubles his father kept in an envelope. The defense claims that Dmitri approached the house but repented and ran away at the last moment; the murder must have been committed by Smerdyakov, the father’s butler. (Dostoevsky reveals that this version is the true one, though neither side has any inkling that the real murder weapon was a paperweight from the victim’s desk.) 
The prosecution wants to paint Dmitri as vicious and violent, consumed with hatred for his father and entirely capable of following through on his threats; the defense wants to paint him as a fiery and impulsive young man, but ultimately an honorable and sentimental one, whose threats were no more than drunken boasting. It’s remarkable how little of the evidence and argument brought forward by either side would be relevant to fact-finding in a modern courtroom. Modern trials, too, have a place for character testimony — but certainly not to the extent we see in the Karamazov case. A local doctor testifies that he gave the defendant a present of nuts when he was a neglected child, and recounts the tearful thanks Dmitri later gave him as a grown man. Dmitri’s fiancée testifies that he had once generously saved her family from financial ruin, “and, indeed, the figure of the young officer who, with a respectful bow to the innocent girl, handed her his last five thousand rubles — all he had in the world — was thrown into a very sympathetic and attractive light.” Shortly afterward, though, the fiancée suffers a change of heart — and reveals to the court that Dmitri had sent her a letter, scribbled in a bar, promising to murder his father. The prosecutor takes full advantage of that revelation in summing up Dmitri’s character to the jury:

He is a marvelous mingling of good and evil, he is a lover of culture and Schiller, yet he brawls in taverns and plucks out the beards of his boon companions. Oh, he, too, can be good and noble, but only when all goes well with him….But if he has not money, he will show what he is ready to do to get it when he is in great need of it. 
The defense counsel’s response is to build a counter-narrative of Dmitri: “Gentlemen of the jury, the psychological method is a two-edged weapon, and we, too, can use it.” The defense, in fact, wants to cast the prosecutor as an over-eager crime novelist, guilty of telling without showing:

We have, in the talented prosecutor’s speech, heard a stern analysis of the prisoner’s character and conduct….He went into psychological subtleties into which he could not have entered, if he had the least conscious and malicious prejudice against the prisoner. But there are things which are even worse….It is worse if we are carried away by the artistic instinct, by the desire to create, so to speak, a romance.

With that point made, the defense attorney calls the jury’s attention to the inconsistencies in this psychological portrait. The prosecutor, for instance, claims that Dmitri flung away the envelope containing the 3,000 rubles and then paused to check on the servant he had assaulted, to determine whether or not he had killed a potential witness. But how, the defense asks, could one man do both? The first step is the panic of an amateur—the second, the calculation of a hardened killer. “Mr. Prosecutor,” the defense attorney demands, “have you not invented a new personality?” In the personality built up by the defense, the incriminating letter was merely “drunken irritability.” The butler Smerdyakov, in fact, looks far more like a murderer:

In character, in spirit, he was by no means the weak man the prosecutor has made him out to be….There was no simplicity about him, either. I found in him, on the contrary, an extreme mistrustfulness concealed under a mask of naivete, and an intelligence of considerable range….I left him with the conviction that he was a distinctly spiteful creature, excessively ambitious, vindictive, and intensely envious. 
Summing up, Dmitri’s counsel claims that the prosecution is so eager to bend the truth because of its outrage at the alleged crime of father-killing. But was the victim—abusive, neglectful, and self-absorbed as he was—a father in anything more than name?

“Father”…a great word, a precious name. But one must use words honestly, gentlemen, and I venture to call things by their right names: such a father as old Karamazov cannot be called a father and does not deserve to be. Filial love for an unworthy father is an absurdity, an impossibility. Love cannot be created from nothing: only God can create something from nothing.

This line of argument matters in the trial because there simply isn’t anything more substantial on which either side can ground its hopes. But it matters in the novel itself because it restates, in blunt form, questions that Dostoevsky has been asking for 800 pages: what do fathers and sons owe to one another? How much are we bound by our inheritance from our parents? Is selfless, unconditional love ex nihilo humanly possible, or is it an attribute of God alone? All of the questions raised in Dmitri’s trial function in a similar way: the trial is the novel in miniature, the place in which its questions and conflicts are cast in the highest relief. In convicting Dmitri, the jury reaches a verdict that’s both understandable and wrong. But that’s of secondary importance: Dostoevsky is able to plausibly write that dramatically rich trial because the question at its heart—did Dmitri Karamazov murder his father?—can’t be answered by anything other than a series of murkier questions. But what if the characters could answer with certainty? What if it were simply a matter of solving the case by dusting for the right fingerprints? Could Dmitri’s trial, transplanted into our century, possibly bear the weight that Dostoevsky wants it to bear? Actually, we don’t have to speculate. 
Online, I discovered a new classroom activity [pdf] for high school students: “Integrating Forensics, Civics, and World Literature: The Brothers Karamazov.” The exercise, sponsored by the University of North Carolina, asks students to retry Dmitri in a modern courtroom. Here are some of the guidelines:

Whose DNA do we need to collect?
1. DNA on the pestle (from the hair fibers)
2. DNA on the paperweight (from the blood)
3. Fyodor’s [victim’s] DNA (We will have to exhume his body to do this.)
4. Grigory’s [assaulted servant’s] DNA

Whose fingerprints do we need to collect?
1. Dmitri’s fingerprints
2. Smerdyakov’s fingerprints (We will also have to exhume Smerdyakov’s body to be able to lift his fingerprints. If his body is too badly decomposed, we will need to look at his thumb prints on his birth certificate, if that can be found anywhere.)

That’s practically all the evidence we need to acquit Dmitri and correctly convict Smerdyakov in his place—evidence that renders irrelevant questions of Dmitri’s upbringing, his drunkenness, his volcanic relationship with his fiancée, his clash with his brothers, his father’s failings, Smerdyakov’s feigned simplicity, and everything else that turns the trial into such a troubling character study. Even if the evidence were somehow inconclusive, the retrial exercise translates the story onto an entirely different plane: not of character, but of brute facts. It reads like a script for CSI: Karamazov. And that’s exactly why the courtroom drama has almost died out as a serious literary form. The growth of what we can know, and the certainty with which we can know it, has cut a good bit of guesswork out of criminal justice. But literature—at least literature that aims higher than CSI — is built on inspired guesswork. Certainty is good for justice; it’s poison for fiction.
This is an old story already, more than three weeks old and no longer newsworthy; but as you’ll see, I’m fastening on an old story to make a point. On February 28th, Jan Berenstain, co-author of the Berenstain Bears books, died, and the news of her death was greeted a few hours later by this from Slate's Hanna Rosin: “As any right-thinking mother will agree, good riddance.” What followed was a Slateishly contrarian take on the books’ humorlessness and gender politics, but the outburst of anger that met Rosin's piece was, unsurprisingly, focused on those two words: “good riddance.” Commenters lined up by the hundreds -- 494 at last count -- to denounce the author’s callousness toward Jan Berenstain’s still-warm body. Rosin herself followed up with an apology the next day: “I admit I was not really thinking of her as a person with actual feelings and a family, just an abstraction who happened to write these books.” So if Jan Berenstain was an abstraction, what were she and her death doing there in the first place? The takedown of her books was funny, pointed, and resonant for any parent force-marched through the same paperbacks for an eternity of bedtimes. The mystery here is what function that “good riddance” -- so mean-spirited and out of keeping with the rest of Rosin's tone -- was playing in the article. Was it the product of a writer’s compulsion to display her cleverness at the expense of someone who can’t answer back? I put the blame on something more mundane: the news hook. For most of those in the opinion-writing business, the news hook is an ugly necessity. In my five years as a political speechwriter, I wrote and placed dozens of op-ed pieces for my bosses -- and each time, the hardest task was arranging a marriage between the piece’s policy agenda and a news hook, the paragraph or two of timeliness that made the policy medicine easier to swallow and was a requirement for publication anywhere. 
Each time, a little bow of deference to the news cycle, no matter how halfhearted -- it could be a good monthly jobs report or a bad one, an embarrassing slip of the tongue from the other party, or something as predictable as Tax Day -- helped answer the mandatory question: not “why this?” but “why this, now?” Timeliness can be a virtue in political writing -- but taken too far, and invoked too automatically, it can be poisonous. Rosin's gaffe is an extreme example. Given the vehemence of her opinions about the Berenstain Bears books, it’s likely that she'd been incubating them for some time. A week before Jan Berenstain’s death, or a week after, few outlets would have published those thoughts, because few of us would have read them; only in a short window of newsy opportunity were those thoughts considered worth our attention. “Good riddance” has all the callous clumsiness of a writer straining to get relevance out of the way, not because she wants to, but because she has to. It’s easy, and fair, to be angry at Rosin; it’s harder, and more important, to think about the ways our demands as readers make us complicit. Not every artificial news hook thuds as badly as “good riddance.” Many more go unnoticed. But when we insist on timeliness and newsiness, we constrict the range of our reading universe and cut ourselves off from the best, most discursive traditions in nonfiction. Why do we read opinion, or essays, or nonfiction of any kind? Sometimes, we’re in search of a discrete, practical piece of information. But the range of useful nonfiction is much smaller than the range of nonfiction that competes for our attention by pretending to be useful -- by creating a veneer of urgency. 
We’re encouraged to read a piece on Leonardo on the occasion of a new museum exhibit, even if tickets are sold out; a retrospective on U2’s Achtung Baby on the album’s 20th anniversary, but not its 19th or 21st; a piece taking apart the Berenstain Bears simply because the author is in the news for being dead. The timeliness of those pieces does nothing for us, except provide an illusion of usefulness. Yet an interesting person can be interested in Leonardo or U2 or the politics of children’s books at any time -- and can read about things that are inherently interesting, not accidentally interesting for 15 minutes. When we reward timeliness with the limited currency of our attention, we put ourselves in a tightly circumscribed place in which our intake of information is left up to the whims of the news cycle. And abdicating decisions about what we know to an abstraction like “the news cycle” is a lot like abdicating political decisions to an abstraction like “the market.” We say we value freedom of information, but freedom means, in part, a life that isn’t totally subject to the demands of usefulness. That’s why earlier generations made an important distinction between the “useful arts” and the “liberal arts” -- liberal as in liberty. The useful arts are what we need to make a living. The liberal arts are those we can learn for their own sake, to the extent that we are free from the pressures of making a living -- or, in a more democratic sense, the arts that free us, if only temporarily, from the demands of everyday life. To me, the most valuable writers are the ones who give us a sense of what it’s like to be free from that pressure -- especially the workaday pressure of getting to the point. Term papers, memos, and news wire reports have to get to the point -- but why should we allow that expectation to dominate the rest of the written universe? 
Last week, I was looking for some information on Giambattista Piranesi, an Italian artist famous for his etchings of Roman ruins. I found that Aldous Huxley had written a 1949 essay on him, so I pulled it up -- only to discover that it took Huxley 2,093 words to come around to his announced subject. Where some writing is a PowerPointed business meeting, following the twists and turns of Huxley’s argument is like being treated to a long, unhurried talk over drinks. First we are at the top of a staircase at University College London, where Huxley and Albert Schweitzer are visiting the remains of the philosopher Jeremy Bentham, stuffed and placed in a wooden box on permanent public display. Then there is Schweitzer’s observation that Bentham “was responsible for so much less harm” than his more ambitious contemporaries in philosophy. Then Huxley discusses with the reader the one harmful exception in that legacy, Bentham’s unnatural passion for logical and physical tidiness. We are a thousand words in, with no sign of the essay’s topic. Next is a conversation about tidiness as an inspiration for tyranny, and about how Bentham’s love of efficiency found its sinister apogee in his design for the Panopticon, a circular prison in which every inmate is locked in solitary confinement under constant surveillance from the circle’s center. Where prisons were once chaotic madhouses (described for several hundred more words), Bentham’s model helped turn them into efficiently dehumanizing machines, which have slowly metastasized through every other level of our world, until “every efficient office, every up-to-date factory is a panoptical prison, in which the worker suffers...from the consciousness of being inside a machine.” Then some thoughts on this terror as it was expressed and symbolized by the great prisons of literature, especially Kafka’s, and finally a comparison to the great prisons of art, of which Piranesi’s are the most striking example. 
This etcher of popular tourist scenes spent his free time designing massive, imaginary torture chambers and subterranean vaults of dripping stone. Perhaps, Huxley suggests, he suffered from a lifelong depression; perhaps his brightly lit, classically styled work for the tourists was his best attempt at therapy. And Huxley is just getting started on his topic; after more than three op-ed columns’ worth of material, he has at last gotten to the point. By the end, Jeremy Bentham in his wooden box seems far, far away. We might be able to retrace the steps that took us from point A to point Z, but we could hardly have predicted those steps at the outset. This is the fluidity and spontaneity of wide-ranging conversation, lightly controlled by a gifted conversationalist. And when we arrive at the point at last, it’s with relief, and something of the pleasure of a long passage of musical dissonance that finally resolves and comes to rest. Writing that refuses to state its intentions at the outset -- that gives the impression of not even knowing its own intentions at the outset -- might strike us as undisciplined. Sometimes it is. But artfully procrastinating on the point, carefully projecting carelessness, is one of the most rewarding and difficult challenges in writing. It takes enormous craft. We can still see that craft in the inventor and greatest master of the form, Michel de Montaigne. When we credit Montaigne as the originator of the essay, it’s not because he was the first to write in prose on factual topics -- it’s because he turned declamation into conversation. Montaigne spent years training himself not to get to the point, a progression we can follow by reading his early essays side-by-side with his later ones. At the beginning, he simply jumps into his topic with both feet. 
In one of his first essays, “On Custom,” he states his thesis immediately: “In truth, custom is a violent and treacherous schoolmistress.” The rest of the essay only elaborates and adduces evidence for that thesis. A half-decade of practice later, a new, unhurried calmness has come over Montaigne’s work. His essay “On Cannibals” begins with a quote from an ancient Greek king on the dignity of certain barbarians; the relevance of this quote won’t become apparent for pages. Next, Montaigne offhandedly mentions that he has just spoken with a sailor who has spent ten years in the New World. The New World -- this thought sets Montaigne off on a long and learned tangent, asking himself whether any of the classical writers, Solon or Plato or Aristotle, had knowledge of continents across the ocean, and mentioning what he himself has learned about geology from watching his native river change course over twenty years. This reminds him of the sailor again -- can such a man’s story (about what, we don’t know yet) be trusted? Montaigne seems to think so, because simple men are less observant but speak plainly, whereas some who are too smart for their own good puff up their tales to exaggerate their own importance, with any number of bad consequences for an accurate understanding of the world. What exactly was the sailor’s story? It seems to have something to do with the inhabitants of the Americas, because Montaigne reflects for a time on their supposed savagery, concluding that “everyone gives the title of barbarism to everything that is not in use in his own country.” The rest of the essay is an early attempt at anthropology-by-hearsay. It finally comes out that the natives sometimes indulge in cannibalism, as the essay’s title suggested, but otherwise seem to live in a golden age of tranquility, contentment, and communal property. “On Cannibals” is a founding document of the noble savage myth; but it’s so persuasive because it doesn’t seem to have an agenda at all. 
Again and again, as Montaigne’s writing reached maturity, he would wander through his internal library in just this way. The cumulative impression is that no one essay really has a beginning or an end — every end is simply an opportunity for another digression. As these digressions through literature, science, history, anecdote, and memory pile up, we sense that we are dealing not with a narrative, but with a network: no fact is an island; every point is linked to every other point in Montaigne’s mind by an endless array of invisible threads. Of course, that’s how we all think. But it is not how we all write. Montaigne was unique in finding a written expression for the way conversations evolve organically, the way thought has a shadowy logic of its own; it’s why his essays are such a different animal from the essays we’re assigned in school, so different that they shouldn’t even bear the same name, and why we often feel more at home in them. He did it by being studiedly haphazard. And his achievement matches that of his contemporary, Shakespeare, who took years of experimentation to make his soliloquies sound less like declaimed speeches and more like overheard thought. A Montaigne essay, like a Shakespeare soliloquy, gives us the impression that we are in the presence not of a disembodied, opinion-spouting voice, but of a real person. Long after those essays lost their relevance, long after the second-hand reports from the Americas and meditations on 16th-century French politics ceased to be news, they have maintained their appeal because they are a personality embodied. And the foremost trait of that personality is freedom: freedom to take up and turn over absolutely any subject in human experience, on any prompting or none; to follow any tangent simply because it catches his eye; to begin and end a continent apart, or simply to trail off; to know for the simple sake of knowing. In Montaigne’s day, that freedom was the privilege of an aristocrat.
Today, unless we trade it away for a mess of relevance, it’s the birthright of anyone with a high school education and an Internet connection.
1. It’s a commonplace that beautiful art can, and often does, come from ugly souls: Caravaggio knifing a man in a Roman alley; Wagner writing Jew-hating tracts alongside his operas; Schopenhauer pushing an old lady down a flight of stairs. But what about the reverse? What about the hypocrisy of living well? That’s the charge leveled earlier this spring, on a conservative National Review blog, against the Nobel-winning British playwright Harold Pinter—and I want to dwell on that criticism because, as ridiculous as it seems on its face, I think it can lead us in a roundabout way to a better understanding of what it means to take art seriously. In her National Review piece, Carol Iannone cites an article about the late writer’s remarkably loving relationship with his wife—evidently three decades of not going to bed angry—and contrasts it with the much bleaker world Pinter painted in his plays during those years: “While Pinter was enjoying his high-level marriage of refined intellectual equals in the British upper class, he was inflicting on his servile public a dark vision of obscure miseries, casual cruelties, inarticulate vulgarity, strangled miscommunications, and menacing silences in sordid rooming houses.” I’m no expert on Pinter, so for the moment I just want to take that harsh characterization of his work, fair or not, as a given. What interests me is the way Iannone goes on to justify it: “We shouldn’t imbibe the bleak visions of many modernist works (especially by left-wing writers), visions based not on life but on willed projections of darkness and despair.” There are a lot of assumptions here that go completely unjustified: that we should be “especially” wary of left-wing writers; that writing has to be “based on life” in some unspecified way in order to succeed. 
But the basic claim is this: we wouldn’t want to spend any time at all in the world of Pinter’s plays, we wouldn’t want to willingly take on that amount of darkness, when we could spend time somewhere brighter. And the fact that somewhere brighter exists is proven by the writer’s own life. We might call that view conservative PC. And my first reaction was to dismiss it out of hand: “What’s wrong with a play about despair? Even if a play is nothing but cruelty and vulgarity, a play isn’t a world. I can spend an hour or two with something depressing and despairing, because I also know that there’s plenty of uplifting art for when I feel like being uplifted. The fact that the writer lived a good life—the fact that I can live a good life—is actually a point in favor of bleak, dark plays. I can watch them secure with the fact that that’s not all there is.” What struck me about my reaction, however, was just how much it had in common with the defense against artistic PC from the other side. How often have we seen a movie or a TV show criticized for, say, a negative or stereotypical portrayal of women? And how often have we heard an instant response like this? “This movie (or show or book) isn’t portraying women—it’s portraying individual characters. You may not like them, but it’s unfair to make them carry the weight of an entire worldview about women, or about anything else. A movie isn’t a world.” The argument here isn’t simply one over politics, over liberal elites or gender roles; the argument is between two different ways of reading. One is a sort of deliberate tunnel-vision: it asks us to fully inhabit a work, to treat it for the time we’re there as a self-contained world.
The other view places a much lighter burden on artists: it tells us in the audience that it’s fine to watch with one eye, and to keep the other eye on the “real world”; and when we can remind ourselves that there’s always a world outside of what we’re watching, the artist’s choices carry a good deal less weight. What the second view is really promising us is art without responsibility—or at least with much less responsibility. That’s exactly why it’s so instinctively appealing. But, by stopping us from becoming fully involved with what we’re reading, watching, or hearing, it also carries a high cost—one I’m not convinced is worth paying. 2. The term world-building, when we use it at all, is usually reserved for thick, Tolkienesque fantasy books: world-building means inventing imaginary continents with their own geographies and landmarks and kingdoms. I’d argue, though, that all art is engaged in world-building—and that it can be accomplished as successfully in 14 lines as in 500 pages. Here, for instance, is a world without spring:

Those hours, that with gentle work did frame
The lovely gaze where every eye doth dwell,
Will play the tyrants to the very same
And that unfair which fairly doth excel:
For never-resting time leads summer on
To hideous winter and confounds him there;
Sap check’d with frost and lusty leaves quite gone,
Beauty o’ersnow’d and bareness every where:
Then, were not summer’s distillation left,
A liquid prisoner pent in walls of glass,
Beauty’s effect with beauty were bereft,
Nor it nor no remembrance what it was:
But flowers distill’d though they with winter meet,
Leese but their show; their substance still lives sweet.

That’s Shakespeare’s Fifth Sonnet. The claim is that time will destroy the beauty of the poem’s subject, just as winter strips the leaves from trees, and the only defense is to bottle up and save “summer’s distillation”—in this case, as it turns out, by conceiving an heir.
The sonnet’s urgency comes from the fact that it ends in winter: it is a world where spring, regeneration, and rebirth are all impossible. In The Art of Shakespeare’s Sonnets, the great critic Helen Vendler explains the poem’s power, and why it’s dependent on this cold ending: In both quatrains, no possibility is envisaged other than a destructive slope ending in confounding catastrophe. Since Nature is being used as a figure for human life (which is not reborn), the poem exhibits no upward slope in seasonal change. It cannot be too strongly emphasized that nothing can be said to happen in a poem which is not there suggested. If summer is confounded in hideous winter, one is not permitted to add, irrelevantly, “But can spring be far behind?” If the poet had wanted to provoke such an extrapolation, he would by some means have suggested it. Even though Vendler talks about what we are and are not “permitted” to see in a poem, this kind of reading is much more than an artificial convention or an English professor’s trick. It’s the kind of reading that is compelled by great world-building—by art that is so convincing or so powerful that we barely stop to think that it’s artificial. Just as one can make it through The Lord of the Rings without seriously reflecting that there are no such things as elves, one can make it through this sonnet without seriously reflecting that there is no such thing as a world that ends in winter. Just as importantly, the kind of blinkered reading that Vendler argues for is our contribution to making a poem or a book or a film “work,” a contribution that is easier the more compelling the work we’re dealing with. If we read through the Fifth Sonnet constantly reminding ourselves of the artificiality of its world—repeating to ourselves at the end of every line, “of course spring comes after winter”—the experience of reading it starts to fade. 
Without immersion in its world, we can still admire the rhymes and meter and metaphors from a distance, but we are also shut out from them. The poem loses whatever power it had over our emotions; it ceases to “work” in the same way. This immersion, or tunnel-vision, is really just a kind of suspension of disbelief, maybe the most fundamental kind. Just as it’s hard to fully experience Hamlet without temporarily believing in ghosts, it’s hard to fully experience this sonnet without disbelieving in spring. It’s hard to fully experience any work without, at least temporarily, treating it as a world. 3. From that perspective, we can’t mentally protect ourselves from a uniformly bleak play by recalling that there are other, happier plays or other, happier possibilities for our own lives; the point of the play, if it works as theater, is to ask: “What if the world were like this?” Or take a TV series like The Wire, which paints the failure and breakdown of public institutions from police to schools to unions. To treat the series as a world is to understand that it’s passing a judgment not just on Baltimore, the city in which it’s set, but on cities and institutions in general, along with the men and women who run them. We can’t shield ourselves from those conclusions by remembering that there is, say, a well-run town somewhere in Scandinavia. Or rather, we can—but only at the price of trivializing what we’re watching, reducing it to a forgettable entertainment. In fact, it’s those of us who put the greatest responsibility on art who are most willing to take seriously its power over us: to shape the way we see the world, and the way we act in it. It’s not surprising that the godfather of this view—Plato, who famously called poetry morally corrupting—was one of the most gifted writers who ever lived, as well as (by some accounts) a former poet himself: in other words, a man who knew the power of literature so directly that he came to fear it, arguably too much.
Taking a strong view of artistic responsibility doesn’t tell us what that responsibility has to look like. It doesn’t compel us, like Plato, to expel poets from the city. It doesn’t mandate that all of our art be uplifting. It doesn’t tell us where to draw the line between the kind of bleakness that’s bracing and the kind that’s just degrading. It doesn’t commit us to a view of the gender roles we want our movies and TV shows to embody. It doesn’t commit us to a particular ideology at all. It is the beginning of those arguments, not the end of them. It simply tells us that we can’t sidestep those arguments by protesting that it’s just a play, just a movie, just a book, just one entertainment among many. Or rather, we can—but in the process, we also admit that those plays, movies, and books can’t really move us, at least not enough to care about the way in which they’re moving us. And to admit that is to flatten the distinction between those entertainments that really are forgettable, and the art that, with our cooperation, successfully creates worlds. The more compelling the world, the greater the obligation that it be one worth living in.
1. Maybe you’re young enough to remember Blue’s Clues, or old enough to have a little one hanging on the mystery-solving adventures of Steve and Blue as you read this. If, by any chance, Blue’s Clues happens to be on in the background, try this experiment: watch and see how long the camera holds on a single shot. You will, by design, be waiting a long time. The child psychologists who helped create Blue discovered that young viewers don’t know what to do with cuts and edits; they understand them as a new scene, not the same scene shot from a different angle, and they’re soon too confused to keep up. So the Blue’s Clues camera almost always holds steady, in a series of long and deliberate takes. On the grown-up channels, the camera can do more—but only because we’ve already learned the complicated visual grammar that makes the camera make sense. Think of the long list of visual cues we take for granted. How do we know, without struggling to process the fact, that a scene shot from three angles by three cameras is the same scene? How can we tell the difference in emotional register between a series of rapid-fire cuts and a single, slow, agonizing take? Who says that a series of short shots often indicates the passage of time? As much as we may take these conventions for granted, as natural as their emotional associations might seem to us, they make sense largely because we’ve had “practice.” Who invented this visual grammar? A film historian might look to pioneering pictures like Battleship Potemkin or Birth of a Nation; but before there was such a thing as a movie camera, it was a writer’s job to juxtapose and jump between images—from a battlefield to Mount Olympus, from the middle of the action to the far past, with resources limited only by imagination and the price of ink.
In college, I was lucky enough to take an English class with the novelist Reynolds Price, before he died in January—and one of his most striking arguments was that John Milton, with his instant transitions from Hell to Earth to Heaven, was one of the inventors of the cinematic jump-cut. It was a throwaway comment, but it led me to think that we ought to pay more attention to writers’ tricks of “editing”: not in the usual sense of revision, but in the cinematic sense of transitions from image to image and from scene to scene. I’ve come to believe that writers, as much as filmmakers, are responsible for our visual grammar—that their imaginary jumps, and the thematic use they’ve made of those jumps, have laid the groundwork we take for granted today whenever we watch anything more demanding than Blue’s Clues. If the camera goes somewhere special, the chances are good that a writer’s imagined camera has gone there before—and shaped not just filmmakers’ sense of what’s possible, but the expectations we bring to the screen. We can consider the influence of the writer’s “camera” by looking at one of the most dramatic edits available: zooming out. What can a writer accomplish by playing tricks with distance and scale, sometimes pulling away from the action, leaving the characters neglected in place as the viewpoint pulls back to take in the landscape, or even the whole planet? We’ve all seen dramatic zooms used for effect—but what exactly is the effect, and have writers helped shape it? I want to start to answer those questions by examining three important—and moving—instances of literary zooming out. I don’t claim that these three authors are responsible for cinematic zooming out, but I do think they helped create a lasting set of conventions that give it its power and its emotional meanings.
Zooming out relies for that power on the tension between human smallness and human dignity—on the possibility that putting us in cold, “God’s-eye-view” perspective can, against expectations, make us more important. 2. Let’s start, naturally enough, with Milton: the blind poet who, perhaps because he was cut off from the visual world for so long, came up with some of the most inventive and unexpected edits in poetry. Among these, the most stunning—centuries before we had cameras to take the picture or satellites to send it back—is one of the earliest images of Earth seen from space. The place is Book II of Paradise Lost, and the scene is Chaos: not exactly outer space in our sense, but certainly the great trackless void between worlds, “a dark / Illimitable Ocean without bound, without dimension, where length, breadth, & highth / And time and place are lost.” Satan has escaped the gates of Hell and traversed this blind wilderness on his mission to infect our world; and as he reaches the border between Chaos and the created world, he pauses to take stock by the first beams of visible light. The “camera” turns and scans the distance, leaving Satan behind. “Farr off” is Heaven with its jeweled towers—but still so enormous that we can’t tell, from this distance, whether its border is a straight line or an arc. A little further on, the light by which we and Satan see passes on to Earth: “hanging in a golden Chain / This pendant world, in bigness as a Starr / Of smallest magnitude close by the Moon.” Later, Milton will catalogue this world’s creation in microscopic detail—but the first time he shows it to us in Paradise Lost, it is small enough to be blocked out by a finger. The sense of insignificance—next to the massive Heaven, next to Chaos—is overpowering. So is the sense of danger: the “pendant world” is literally hanging in the balance. It and all its life, which are set to be corrupted, look like a fragile toy from this distance. And what about Satan?
Though the camera seems to have pulled back from him, he’s still the closest object to our viewpoint. Next to Heaven, he is tiny, a nuisance, a perpetual underdog, but he towers over Earth—the theology of the whole poem summed up in an image. But we’ve also just seen Satan at his most courageous, a voyage through Chaos that sees Milton explicitly compare him to the Greek epic heroes. The image of him brooding over Earth from afar is one of our first introductions in the poem to Satanic glamour—a glamour that Milton will whittle down over the course of his epic, but one that reaches its seductive high point here. It’s no surprise that the image of a hovering hero watching over Earth would resurface much later in an entirely positive light—as the iconic image of Superman. Between Earth and Satan, distance and closeness, where does Milton mean for our sympathies to lie? On one hand, “we” are “down there”: our home and (by the poem’s theology) our ancestors are on that shadowed speck, and surely we can be expected to feel some of its danger. On the other hand, “we” are also “here”: our viewpoint is not there on Earth, but alongside Satan’s, and we wouldn’t be human if we didn’t share some of his exhilaration at this moment. That, too, is part of Milton’s point. 3. Either way, it’s a moment of high drama—but what happens when a writer uses a zoom to pull away from drama, at its climactic point? What’s the point of deliberately trading conflict for calm? Toward the end of his huge novel Bleak House, Charles Dickens gives us a long and languid zoom out by night, over London, over the English countryside, and all the way to the sea. But it’s not an exercise in scene-setting, or in picturesqueness for its own sake. It’s a calculated, and almost infuriating, distraction from one of the novel’s turning-points: the murder of Mr. Tulkinghorn, an attorney who has spent hundreds of pages building an elaborate scheme of blackmail, which he has almost seen through to success. 
Tulkinghorn, coldly self-satisfied as usual, has just returned home after issuing a decisive ultimatum to his blackmail target. On the way in, he’s distracted by the sight of the moon—and so is the story itself, which leaves Earth and zooms into a lyrical passage tracing the progress of the moon across the sky and leaving Tulkinghorn almost forgotten below: He looks up casually, thinking what a fine night, what a bright large moon, what multitudes of stars! A quiet night, too. A very quiet night. When the moon shines very brilliantly, a solitude and stillness seem to proceed from her, that influence even crowded places full of life. Not only is it a still night on dusty high roads and on hill-summits, whence a wide expanse of country may be seen in repose, quieter and quieter as it spreads away into a fringe of trees against the sky, with the grey ghost of a bloom upon them; not only is it a still night in gardens and in woods, and on the river where the water-meadows are fresh and green, and the stream sparkles on among pleasant islands, murmuring weirs, and whispering rushes; not only does the stillness attend it as it flows where houses cluster thick, where many bridges are reflected in it, where wharves and shipping make it black and awful, where it winds from these disfigurements through marshes whose grim beacons stand like skeletons washed ashore, where it expands through the bolder region of rising grounds rich in corn-field, windmill and steeple, and where it mingles with the ever-heaving sea; not only is it a still night on the deep, and on the shore where the watcher stands to see the ship with her spread wings cross the path of light that appears to be presented to only him; but even on this stranger’s wilderness of London there is some rest…. What’s that? Who fired a gun or pistol? Where was it? When the gun goes off in that staccato burst—“What’s that?”—we aren’t there with Tulkinghorn to take the bullet. 
We’re still in the folds of a lazily sweeping 206-word sentence that takes us from London to the coast and back, everything frozen and watching. There’s far too much effort in those 206 words for them to be a plot contrivance. Yes, the identity of the murderer is supposed to be a mystery; but if that were the only consideration, Dickens only had to narrate the scene from Tulkinghorn’s perspective or keep the killer conveniently in the shadows. Dickens’s transition to the landscape is doing much more work here. For one, it builds the shock of the murder. The long sentence takes us so far away from the action of the story, and is so full of motionless calm, that it almost lulls us into putting Tulkinghorn out of mind—until the shot, heard but not seen, snaps us instantly back. It’s a fitting end for a man like this impeccably controlled and cunning lawyer, who considers himself untouchable. Instead, he is wrenched out of his reverie in the most violent way possible—and so, in a way, are we. At the same time, is our surprise really as total as his? The long zoom out over the landscape is an investment in surprise, but it also seems designed to build suspense, even dread—based on a nagging sense that the landscape doesn’t belong here, is out of place for a reason we can’t identify until we hear the shots. It is, in other words, an early instance of “It’s quiet—too quiet.” In film, in fact, a long shot at a climactic moment is a cue to worry, not to relax; think of the fishing-boat murder of Fredo in The Godfather Part II, which is interspersed with lake scenery and shots of his brother watching the killing he ordered from a distance. Mr. Tulkinghorn’s sudden death seems like a distant ancestor of that scene. Should any of this change our thoughts for the victim? In one sense, no: Tulkinghorn was a manipulative and double-dealing man in life—and while no one deserves a pistol-shot between the eyes, few readers have shed a tear for him.
But Dickens could also deal out far more grisly and humiliating deaths: one minor character in Bleak House spontaneously combusts. Here, instead, zooming out turns the end elegiac, and if we can’t be moved to feel any injustice over a bad man’s death, maybe we can feel the injustice of a beautiful scene cut off too soon. The stillness “attends”; woods and steeples and ship seem to be waiting for something, and though they cannot possibly know what is about to happen in a London courtyard, Dickens makes us feel that they can—that the local death of a single lawyer, placed in such a wide setting, has much more than a local significance. Finally, remember that we begin the scene by following Tulkinghorn’s gaze up to the sky; his eyes don’t sweep as far as the camera, but at the moment he dies, he is looking at the same moon as we are. For us, the wide world of that 206-word sentence is cut off by a line break; for him, it is cut off permanently. Entirely hateable characters rarely die with that kind of pathos. As much as a death with dignity is possible, Dickens gives one to Tulkinghorn—and he dignifies him by zooming out. 4. It’s this tension between dignity and dwarfing scale that is tackled most directly by the last example I want to look at: the novel Star Maker, by the British writer Olaf Stapledon. Written in 1937, it’s not as well-known as the two other works I’ve looked at, but its influence has arguably been just as strong. It was a landmark work of serious science fiction, held up as an inspiration by writers like H.G. Wells, Jorge Luis Borges, and Arthur C. Clarke, and even physicists like Freeman Dyson; it is an ancestor of science fiction movies and literature that play out across star systems and galaxies. It is, in effect, one book-length, cosmic-scale zooming out: it is the story of a Londoner who finds himself leaving his body, and then floating above the Earth, and then in interstellar space.
Throughout this strange novel, our narrator does nothing but observe, searching out traces of intelligence wherever he can find it; slowly he comes across and joins forces with alien minds that have become disembodied in the same way, and as this snowball of consciousness accumulates and rolls through galaxies, the book comes to be narrated by “we,” not “I.” Immaterial and unfixed in time, they watch the histories of entire planets unfold: some are Earth-like, some utterly alien; some pass whole through the stage of “world crisis,” while some destroy themselves. Ultimately planets and galaxies build collective consciousnesses and absorb our narrator; as the end of history approaches, the universe itself becomes self-conscious and takes over the narration—“I” again. Finally, the universe comes face-to-face with the Creator—only to find that its maker is not a loving God, but something of an uncompromising artist, who discards the universe as imperfect and begins again. Across the universe, intelligence winks out, cold and entropy set in, and our original narrator wakes up on Earth again, lying on a hill. And this is, to my mind, the most interesting part of the book. How can you go on after a vision like that—not a vision of warm, mystical comfort, but a vision of unimaginable smallness and rejection? What could the point possibly be, when you have literally seen Earth die? The narrator gathers himself up and zooms out again—but only in imagination this time, and only as far as the circuit of his own planet. 
He can look at Earth now with the otherworldly objectivity of a man who has lived many lives on many alien worlds, and yet at each stop he is jarred by human suffering, by events that ought to seem trivial, but cannot: “In the stars’ view, no doubt, these creatures were mere vermin; but each to itself, and sometimes one to another, was more real than all the stars.” His view sweeps past England to Europe, where “the Spanish night was ablaze with the murder of cities,” to Germany and its “young men ranked together in thousands, exalted, possessed, saluting the flood-lit Führer,” on to Siberia, where “the iron-hard Arctic oppressed the exiles in their camps,” east to Japan, which “spilled over Asia a flood of armies and trade,” south to Africa, “where Dutch and English profit by the Negro millions…and then the Americas, where the descendants of Europe long ago mastered the descendants of Asia, through priority in the use of guns, and the arrogance that guns breed….” Even though he has learned to think of his home with an alien’s detachment, the features that capture his attention are more than those that can be seen from space. They are the tiny events that pass across the landscape: war, trade, politics. And as the story ends, he believes, or chooses to believe, that he is watching the same crisis through which every world has to struggle, the universal story in miniature—and that everything he sees on Earth is dignified in that light. He looks down the hill to the light from his home, and up to the light from the stars, and concludes: Two lights for guidance. The first, our little glowing atom of community, with all that it signifies. The second, the cold light of the stars…with its crystal ecstasy. Strange that in this light, in which even the dearest love is frostily assessed, and even the possible defeat of our half-waking world is contemplated without remission of praise, the human crisis does not lose but gains significance. 
Strange, that it seems more, not less urgent to play some part in this struggle, this brief effort of animalcules striving to win for their race some increase of lucidity before the ultimate darkness. C.S. Lewis—who would go on to write his own series of science fiction novels as a rebuttal, in part, to Stapledon—was shocked enough by Star Maker’s unorthodoxies to call it “sheer devil worship.” But its conclusion, as an attempt to hold in one thought our smallness and our importance, reminds me of nothing so much as some lines Lewis would have immediately recognized, which cut between the human and the galactic scale as effortlessly as any of the passages I’ve considered here:

When I consider Your heavens, the work of Your fingers,
The moon and the stars, which You have ordained,
What is man that You are mindful of him,
And the son of man that You visit him?
1. What if the best thing art has to offer is freedom from choice? There’s a reason it’s high praise, not criticism, to say that a film or a piece of music or a good novel “sweeps you along.” There’s a selflessness in it: not just the pleasure in pausing the parts of the brain that plan and calculate and select, but in the temporary surrender of investing in someone else’s choices. Good art can be where we go for humility: when we’re encouraged to treat each of our thoughts as worthy of being made public, it can be almost counter-cultural to admit, in the act of being swept along, that someone else is simply better at arranging the keys of a song or the twists of a book and making them look like fate. Freedom from choice is a seductive way of thinking about art—and it’s at the heart of the debate over the cultural value of video games. Video games, for their cultural boosters, promise an art based on choice: an interactive art, possibly the first ever. For their detractors, “interactive art” is a contradiction in terms. Critics can point to video games’ narrative clichés or sloppy dialogue or a faith in violence as the answer to everything; but at base, they seem to be bothered by the idea of an art form that can be “played.” Choice is their bright line. Last spring, Roger Ebert nominated himself to hold that bright line on his blog. And though the 4,799 comments (to date) on his original post weighed in overwhelmingly against his claim that “Video games can never be art,” and encouraged him to back off of that blanket assertion, he summed up as eloquently as anyone the danger posed to narrative by video games’ possibility of limitless choice: If you can go through “every emotional journey available,” doesn’t that devalue each and every one of them? Art seeks to lead you to an inevitable conclusion, not a smorgasbord of choices. If next time I have Romeo and Juliet go through the story naked and standing on their hands, would that be way cool, or what? 
It’s possible, as Owen Good did, to write off the whole argument as empty, just a chest-thumping proxy war between generations or subcultures: “Art is fundamentally built on the subjective: inspiration, interpretation and appraisal. To me that underlines the pointlessness of the current debate for or against video games as art….Is there some validation the games community seeks but isn’t getting right now?” But then, every argument about art is about validation, about the assignment of prestige—yet those arguments are still worth having, because they can also be about something else. The argument over video games is about finding a place for choice in art, about respectability, and about empathy. And the best way into that argument may come from considering another cultural practice’s struggle for respect in its early days. I think that there’s already been a powerful interactive art, one that has some lessons for video games: psychoanalysis.

2.

Too Jewish, too sex-obsessed, too much a cult of personality centered on Freud: before it was a more-or-less respected science, psychoanalysis was widely thought to be all of those things. George Makari’s Revolution in Mind begins the history of the movement that gathered around Sigmund Freud with a clique small enough to fit into a Vienna living room. Freud dreamed of creating a new scientific discipline, one legitimized by university departments, teaching hospitals, international conferences, and state funding; but he couldn’t even secure agreement among the rival scientists, doctors, artists, writers, liberal activists, and sexual libertines all drawn, all for reasons of their own, to his ideas of the unconscious. What exactly was this movement? Was it a school of medicine, a philosophical circle, a budding political party? With its objects of study so hard to pin down objectively, it was hard to say with certainty. Maybe it was art.
That, at times, was the opinion of James Strachey, one of the more important figures in legitimizing this strange discipline and winning it an international reputation. James was the brother of the Bloomsbury author Lytton Strachey and a scholar of Mozart, Haydn, and Wagner; he was also one of the first English-speaking psychoanalysts and Freud’s most important translator. It’s because of Strachey that we still use Latin terms like id, ego, and superego, rather than Freud’s more down-to-earth “the It,” “the I,” and “the Over-I” (as a literal translation of his German would have had it). As an aspiring “talk therapist” in 1920, Strachey came from London to Vienna to undergo an extensive training analysis with Freud. He wrote to Lytton that he divided his time equally between the doctor’s office and the opera house—and, intriguingly, he described his sessions as a brand-new kind of aesthetic experience, a kind of private opera in which the therapist was the director and he was the star. Here is what he wrote after 34 hours on the couch:

[Freud] himself is most affable and as an artistic performer dazzling…Almost every hour is made into an organic aesthetic whole. Sometimes the dramatic effect is absolutely shattering. During the early part of the hour, all is vague—a dark hint here, a mystery there—; then it gradually seems to get thicker; you feel dreadful things going on inside you, and can’t make out what they could possibly be; then he begins to give you a slight lead; you suddenly get a clear glimpse of one thing; then you see another; at last a whole series of lights break in on you; he asks you one more question; you give a last reply—and as the whole truth dawns on you the Professor rises, crosses the room to the electric bell, and shows you out the door.

Why was the experience so aesthetically powerful, so unlike anything Strachey had seen before?
In large part, it seems, the difference was that he was a participant in the drama, not merely a spectator—or maybe a participant and a spectator at the same time. There was a story being played out in one-hour increments; but the story was his story. His choices—how to react to each question, what to reveal and what to conceal, how to make sense of and advance the unfolding “plot”—gave the story its shape. And the scene of the real action was internal: “dreadful things going on inside you.” At the same time, though, Strachey realized that this was not a simple exercise in introspection or an outpouring of emotions—it was a guided drama. Freud shaped it as much as his patient did, not by telling a story, but by skillfully arranging a limited number of choices, on the fly, that reliably delivered his patient to the sensation of dawning truth and artistic completion. What’s remarkable is that he did it with such consistency. Strachey wrote about a series of conversations, and yet they all seemed to shape themselves into a classical dramatic arc: premonitions, a mounting pursuit, a crashing climax, and a cliffhanger ending. It’s true that James Strachey, aesthete as he was, brought his own unique preconceptions to Freud’s couch. But it’s also true that the dynamic he identified—the tension between free response and prearranged structure in each session—would, according to Makari’s history, be the cause of some of the growing movement’s greatest schisms. For instance, Freud’s Hungarian disciple Sándor Ferenczi came to advocate “active therapy”: confronting patients directly, turning their answers back on them, setting deadlines for progress, enforcing sexual abstinence, and even, in one case, regulating the patient’s posture during sessions. 
At the other pole, the Frankfurt analyst Karl Landauer argued for a passive technique that teased out the patient’s resistances to treatment, insisting that the therapist must not “impose actively upon [the patient] one’s own wishes, one’s own associations, one’s own self.” Long after Freud’s movement had achieved the legitimacy he always sought, this tension between hands-on and hands-off treatment would end friendships, partnerships, and careers.

3.

Of course, video games are not a kind of psychoanalysis, and psychoanalysis would not make much of a video game (“press ‘A’ to confess sexual feelings for Mother”). Nor has psychoanalysis shaken off its detractors, then or now, by painting itself as an art; as compelling as Strachey’s vision was, there was always more prestige in claiming the objectivity of science. Besides, if it was so pleasurable for Strachey to reflect on his treatment, how serious could his problems have been? Strachey may have idealized his treatment—but he also found the emotional potential in a practice that many of his contemporaries were writing off as a fad. And in doing so, he gave evocative evidence for the possibilities of interactive art. He also, unwittingly, described the reasons why video games seem so promising to so many. They can be an art that makes us both spectators and performers, one that can turn us from passive audience members to partners, flattening the relationship between artist and audience without erasing it altogether. In the private drama he co-created, Strachey was decidedly the junior partner; Freud was such an effective senior partner because he balanced choice with structure, keeping his patient personally invested even as the session stayed fastened to some important dramatic rules. Is it too far a stretch to see this as a type for the relationship between a skilled video game designer and a savvy player? In video games, our story can be anyone’s but our own.
But their greatest promise, and greatest advantage over traditional art, is in their power to create empathy—to make someone else’s story feel like our own because we are temporarily living it. We’ve been able to lose ourselves in video game characters ever since the first player sent his Italian plumber off of a ledge and exclaimed “I died!”—not “Mario died!” Today, though, that power of empathy is being put to much more complex uses. In Slate’s annual Gaming Club discussion last month, Chris Suellentrop reflected on “the scene—it feels unfair to call it a level” in the game Heavy Rain that forced him to cut off his own character’s finger: “It felt agonizing, like I was cutting off my own finger. It was one of the most remarkable things I’ve ever experienced, in video games or any other medium.” Similarly, in Extra Lives—both a defense and a criticism of video games’ cultural potential—Tom Bissell spends pages meditating on a drive through the night city in Grand Theft Auto IV, with two bodies hidden in the trunk of his car. It was a dangerous risk undertaken to advance his character’s underworld career—and it was also an unsettling choice in which he felt criminally complicit. Interactivity, Bissell writes, “turns narrative into an active experience, which film is simply unable to do in the same way.” Film could not have left him with the same sense of guilt. It might be that empathetic power that ultimately brings the argument over video games’ cultural value to an end. Even in the midst of his criticism, Ebert concludes that the works of art that have moved him most deeply “had one thing in common: Through them I was able to learn more about the experiences, thoughts and feelings of other people. My empathy was engaged.” Perhaps there aren’t any games that engage his empathy, let alone engage it more powerfully than a film. 
Maybe there never will be: it’s telling that the moments identified by Bissell and Suellentrop as emblematic of video games’ potential tend to revolve around extreme violence. Even Bissell complains that game designers tend to give short shrift to narrative and characterization and line-by-line prose—and maybe we shouldn’t expect that to change. Given the amount of capital most sophisticated games demand—let alone the capital it would take to finance one complex enough to count as truly interactive—maybe we should expect them to settle permanently into the niche of the summer blockbuster. But even among the big-budget films, there are those that challenge us and move us. Video games, in principle, don’t have to be any different—and the interactive art they hold out could move us with the intensity of Strachey’s lights breaking in. When and if that happens, we shouldn’t expect art to be washed away in a flood of endless choices, with characters going through the story “naked and standing on their hands.” The most interesting argument won’t be about video games’ status as art, but about how to build an art that most effectively balances choice and structure, openness and narrative. Every increment toward freedom could mean deeper involvement and deeper empathy, at the cost of shapelessness; every increment toward structure could mean more captivating stories and characters—even as we find it harder to imagine ourselves into their shoes. The debate between the virtues of freedom and control, articulated almost a century ago by Freud’s disciples, will play out again, inconclusively, in a realm they could have barely imagined.