Like we did last year, we thought it might be fun to compare the U.S. and U.K. book cover designs of this year’s Morning News Tournament of Books contenders. Book cover design never seems to garner much discussion in the literary world, but, as readers, we are undoubtedly swayed by the little billboard that is the cover of every book we read. Even in the age of the Kindle, we are clicking through the images as we impulsively download this book or that one. I’ve always found it especially interesting that the U.K. and U.S. covers often differ from one another, suggesting that certain layouts and imagery appeal more to readers on one side of the Atlantic than on the other. These differences are especially striking when we look at the covers side by side. The American covers are on the left, and clicking through takes you to a page where you can get a larger image. Your equally inexpert analysis is encouraged in the comments.
I happened to notice recently, in my daily online wanderings, that the nominees have been announced for "The Seventh Annual Weblog Awards." As usual, the organizers have listed a couple dozen categories, and as usual the same handful of blogs, more or less, are in the running. Many of the usual suspects are there: Boing Boing, PostSecret, Dooce, Gizmodo, Instapundit, Daily Kos, Lifehacker, and the rest - blogs that are now big business, some of which are owned by big businesses.

The omission of "literary bloggers" from this long list of nominees naturally seemed glaring to me, having had a front-row seat for the last four or so years as an amorphous and very loosely affiliated movement of bloggers has greatly expanded the realm of literary discourse in the U.S. and elsewhere. And though there has sometimes been an unhealthy "us against them" mentality between bloggers and professional critics, in many ways this friction has melted away as critics have become bloggers themselves and as a number of talented bloggers have begun to invade the book pages, providing a pool of talent and a new voice to book review sections that were shrinking and stultified.

This is a big deal. Bloggers have helped create a new literary discourse that benefits readers, writers, and critics - a place where reading and discussing books for pleasure can augment the sometimes joyless drudgery that newspaper criticism has become. (Note how Jerome Weeks, now of book/daddy, jumped from his regular newspaper gig: "So it'll be a relief to read for pleasure again. One reason it's particularly appealing these days is that it's so counter-culture -- so counter to our prevailing techno-bully rapid-response profit-margin mindset.")

Yet we need those sometimes bullying newspapers. As Kassia wrote in a post in the early days of the LBC, "Books don't have endless windows opening for them." 
This sentiment was echoed in an Orlando Sentinel essay by movie critic Roger Moore late last year: "Reviewers, in general, are canaries in the print journalism coal mine, the first to go. Classical music, books, visual arts and dance are dispensed with, or free-lanced off the bottom-line. That's happened everywhere I've ever worked." But as the big windows close, and criticism sections shrink or disappear, hundreds of smaller windows have opened.

In Kassia's LBC essay, she went on to write, "It's interesting to me that readers are leading the charge to discover and promote new, often overlooked fiction. Traditional avenues of literary coverage are necessarily limited in scope, even with the Internet." I have come to believe, and I hope people agree with me, that book blogging is more than just a hobby. I say this not in a self-promotional or self-aggrandizing way (so many others are better book bloggers than I), but looking at how the public discourse about books has changed over the last few years. So, the truth is, having thought about it, I'm not disappointed that not a single book blog - not even some of the best (TEV, Ed, Bookslut, Conversational Reading... I could go on and on) - was singled out for recognition by the Weblog Awards. Litblogs have somehow gone too far down the path of assimilation to be considered for such distinctions, I think. Book blogs and traditional book criticism have intermingled sufficiently that they are now, except in a few remaining dusty corners, one.

My declaring it doesn't make it so, but perhaps now, the us-versus-them mentality between the bloggers and the professional critics is mostly behind us. Which is good, because there are so many more books still to write about.
I. In the 1990s, a scourge swept across the world of entertainment. It threatened the livelihoods of those in the creative industry and presented a world where the average person, dwelling in obscurity, could be plucked from the masses and made a star. It was equal parts thrilling and horrifying. No, I'm not talking about the internet; I'm talking about its cultural predecessor, reality television. Reality TV was supposed to devour television. It was going to make writers and actors irrelevant, and single-handedly lower the national reading level by two full grades. Reality television became shorthand for stupidity and quickly found a place as a scapegoat for one side or another of the culture war. These shows, with their cameras hidden and seen, were Orwellian nightmares come to life, Jean Baudrillard essays in pixelated form. They were the beginning of the end of the world. Except that they weren't. They didn't really do any of the things they were feared to do. And yet, though their overall presence on the airwaves is a fraction of what it was at their peak, their influence remains enormous. We can say this now, from our perch in the shiny new decade. We've largely moved on to other fascinations, other distractions. We're scapegoating Twilight now, and we're all terrified of the internet. Or we're terrified of Twilight and scapegoating the internet. Paris Hilton has moved on to Twitter. We've all moved on to Twitter. But it wasn't too long ago when none of this seemed possible. It was a time before Lost, before The Wire, before the end. It was the glory days of reality television, and it all started on a cable network that had hours to fill, and little money with which to fill them.

II. MTV wanted to make a soap opera. Like all the new cable networks, they had to fill the hours. America, it turned out, had an insatiable appetite for television, and the new cable networks were struggling to keep up. 
Some of them turned to re-runs of programs that had been modest hits in their original network incarnations -- the My Two Dads and Eight Is Enoughs of the world -- while others made cut-rate game shows and aired Just One of the Guys four times a day. MTV had tried a few different things to kill time -- most notably, a twenty-year experiment in which they showed music videos in their entirety -- but had finally settled on a strategy of appealing to youth culture: the eternal fountain of disposable income. MTV's dilemma, however, was that, while they recognized that a soap opera would likely be popular and would round out their lineup of oversexed game shows and quasi-journalistic news programs, they lacked the funds to produce such a show. Their solution was brilliant -- they'd simply make a show without actors or writers, two of the most expensive parts of any decent soap opera. The result was The Real World, whose premise was neatly summed up in its introductory statement: "This is the true story of seven strangers picked to live in a house and have their lives taped to find out what happens when people stop being polite and start being real." That I can remember this sentence, awkward though it may be, with greater ease than I can the Pledge of Allegiance is testament to the incredible success of The Real World. Not only is it the longest-running program in MTV's history (the network recently renewed the program for a 26th season), it created an entire category of programming and influenced some of the most successful shows on television today.

III. The first two seasons of The Real World contain the seeds of all reality television, as well as some elements that would find their way into today's most successful scripted programming. At first glance, the first season of The Real World appears to be a collection of random, diverse twenty-somethings thrown together in Manhattan. 
A closer look reveals that all of the cast members, from model/actor wannabe Eric Nies to writer/journalist Kevin Powell, aspired to a career in entertainment or the arts. The casting logic of the show was fairly simple: find some young people willing to try this experiment in exchange for some exposure. In this way, the cast members' situation wasn't unlike that of today's bloggers and vloggers -- they worked for free in exchange for an audience, presumably with the hope that the experience would translate into a career. For some it did; for others, not so much. The first season of The Real World relied heavily on the pressures of the cast's various careers for dramatic tension. We saw the characters balancing the time commitments of practice, rehearsal, and performance with their newfound quasi-family unit back at the loft, a situation the young audience for the show could begin to appreciate. This balancing act -- with help from some racial tension -- blew up infamously when Kevin missed a group dinner meeting and was threatened with expulsion from the loft and the show. In the end, Kevin remained, but one could see that this episode, easily the most dramatic of the season, would not be an isolated incident in future iterations of the show. Season two of The Real World is, arguably, the single most important season of any TV show of the last twenty years. It is one of those watershed moments that happen once or twice a generation. The first season of The Sopranos was such a moment. The third season of Mad Men, one could argue, was another. The second season of The Real World is so important because it revealed the flaws in the show's premise and, more importantly, several ways to work around those flaws. It provided, in a way, the template for all of the major reality TV shows to follow, though one could be forgiven for not realizing it at the time. 
The second season took roughly the same premise as the first and moved it to Los Angeles, where it played up the aspirational angle a little bit more. Again we saw characters who desired fame and success -- singer Tami, comedian David, country singer Jon -- and again there was a healthy dollop of racial and sexual tension. This volatile mix exploded mid-season when David "assaulted" Tami, pulling a blanket off her after she repeatedly asked him not to, revealing her in her underwear. For this crime -- something kids at camp do every summer -- David was forced out of the house and off the show entirely. Several aspects of the controversy are worth noting. Firstly, the incident initially appeared to be a joke. While the house was somewhat divided over how serious it was (from where I stand, it's pretty clear that David was trying to be funny and, maybe, a little bit flirty), the general consensus, at first blush, was that it wasn't a big deal. It was only after the issue was rehashed several times in the confessional that each person seemed to recognize it as a moment of great import. One could almost see each cast member realizing that this made great drama as the issue built and built. In the end, the producers cited Tami's request for safety and removed David. Secondly, it's no coincidence that the two characters at the heart of the major strife in seasons one and two were both black men. The Real World aimed to be a microcosm of American society, and at least in this respect, it succeeded. Black men would find themselves vilified and ostracized for much of the show's run. While the house may have been split on David's departure, the audience ate it up. Removing him from the show turned out to be the single most interesting thing to happen that season. This speaks both to how dramatic the confrontation and its aftermath were and to how boring the rest of the show was. 
No character signified the stagnation of season two more than country singer Jon, who spent nearly every minute of his screen time watching television and drinking Kool-Aid. The producers' disgust with Jon must've been intense. How does one build an aspirational story arc around someone who refuses to do much of anything? If season two hinted at the role that overt conflict might play on the program, season three confirmed it. When the noxious Puck refused to play nice with his fellow cast members, particularly the saintly AIDS patient Pedro Zamora, he found himself voted out of the house by popular decree. Here, long before the phrase "voted off the island" became a popular idiom, we see the template that reality shows would use for years to come. If people tune in to find out if someone might get booted off the show, what if you kicked someone off every episode? Additionally, season three marks one of the last seasons the cast members would be left to their own devices (season four's setting in London was interesting enough to generate drama on its own). In subsequent seasons, Real Worlders would be asked to do a variety of tasks, from working with children (a disastrous idea, considering that alcohol was fast becoming a vital component of every RW season) to running a tanning salon (okay, a spray-tanning salon, but still). The shows may not have lacked for drama, but they needed a scaffolding to hang that drama on, and it would have to come from outside the house.

IV. It is difficult to remember how revolutionary that first season of The Real World felt. Here were people, attractive people, yes, but regular folks (something that would become less and less the case as the seasons wore on) living their lives. The emotion on the show seemed real. When characters fought, the scenes became simultaneously difficult to watch and irresistible. There was an untamed, unpredictable quality to these scenes that made them compelling. 
Something might happen; this was the "real world" after all. (The producers should be given some credit for simply getting out of the way. One has to imagine the network wasn't pleased when the season one cast decided to de facto endorse presidential candidate Jerry Brown by painting the number for his donation hotline on the wall of their loft, and yet they allowed it.) In addition to its unpredictability, the show was a voyeur's dream. These people were fascinating! Watching them do the most basic things -- eat a bowl of cereal or prepare for bed -- felt illicit, like we were privy to something special and unique. Nobody, it turns out, ate a bowl of cereal exactly like you did. And when they revealed something unique about themselves -- such as Heather B.'s infatuation with NBA All-Star Larry Johnson ("Larry Johnson is so fine!") -- it was revelatory. Reality TV almost certainly created the now-ubiquitous straw-man argument "Why do I care what you ate for breakfast today?" That this question is raised about so much that happens online is no coincidence. It's certainly possible that our 90s diet of reality TV validated our own solipsism, which bore fruit during the latter half of the 2000s, when web 2.0 made it possible for us to share our own lives with the world. Whatever the case, the initial infatuation with "reality" didn't last. A few things broke the spell. For one thing, The Real World started to seem less and less real. Cast members knew the experiences of previous Real Worlders, lending the entire show a meta quality that it previously lacked. The first episode of every Real World season now consists mostly of people waiting to discover exactly how awesome the house will be. They also know that each season involves a trip to some fun, exotic locale, and they anticipate these trips, discussing where they might go. This acknowledgment of the conceit is present in any long-running reality show. 
It can't be that the women of The Bachelor all came up with the phrase "here for the right reasons" on their own, can it? Rather they learned that phrase through watching previous seasons of the show, just as the girls of America's Next Top Model learned to scream "Tyra Mail!" every time the show's producers drop off one of their cryptic missives. In fact, the dialogue of the shows is often so codified as to seem scripted. They may not have employed a writer to produce such gems as "Nobody wants to go home," and "I'm not here to make friends," but the result is the same. For these programs, built around elaborate elimination rituals and repetition of formulas, this self-awareness is both inevitable and even desirable -- if someone follows the show enough to know its every twist and turn, to be able to trace the patterns of the show, then the show must have truly reached a place of importance. It's affirming for the product to be emulated in this manner. And when that emulation includes asserting, repeatedly "This is real, okay?", all the better. For other shows, the effect is less desirable. Certainly The Hills struggled to maintain its veneer of "reality." It was difficult to convince the audience that Lauren Conrad was living anything resembling a normal life, even by the bizarre standards of an affluent LA party girl, when she was simultaneously the Teen Vogue covergirl and an intern at the magazine. It's no wonder that the show's "characters" seem to burn out after a few seasons. It can be difficult to keep up the illusion. At some point, even the people on The Real World began to seem less real. Gone were the mildly overweight, the slightly odd looking. Each cast began more and more to resemble an Abercrombie & Fitch catalog. The show lost its ties to the artistic world (always tenuous at best) and became primarily about clubbing and hot-tubbing. It ceased to be a mirror into the everyday lives of its characters and became more the document of a long vacation. 
The shift in focus from reality to fantasy isn't unique to The Real World. Reality TV is no longer about reality, not the world that any of us live in, anyway (if it ever was). Most reality TV shows are just game shows containing reality TV elements. Survivor, Big Brother, The Biggest Loser, America's Next Top Model, and The Bachelor are all long game shows in which the contestants play for a prize much larger than anything they might have won on The Price Is Right (indeed, on The Bachelor and The Bachelorette, they compete for a spouse). No game show has made more of The Real World's great revelation than American Idol has: that being real is all well and good, but what people really want is blood (metaphorically speaking). Idol was among the first shows to take the next step of involving the audience in the fate of its cast members, upping the ante just that much in the process. In fact, the show makes entire episodes out of the elimination ceremonies. The only non-game show reality shows left are about people who are most decidedly unreal. Somewhere along the line, somebody decided that we only wanted to watch people do nothing if we'd already watched them do something. Today, the only reality shows that simply follow people around in their daily lives are celebrity-based shows like Keeping Up with the Kardashians (featuring Kim Kardashian, a celebrity famous for appearing in the 2000s version of a reality show, the internet sex tape). The lone exceptions to this rule are what might be called "anthropological shows," programs that aim to show us a life we will never lead. Jersey Shore, The Real Housewives of Wherever, The Hills, and the myriad shows about bizarre families are exemplars of this. Equal parts curiosity and incredulity attract viewers to these shows. Reality TV has ceased to try to show us normalcy, perhaps because it no longer needs to. 
Around the time The Real World drifted into the land of fantasy, the internet emerged from its awkward adolescence to become a platform for personal expression that made anyone who so desired into a kind of quasi-reality TV character. People could write online journals (they called them blogs) or video themselves doing... well, anything. With that kind of capability, reality TV was free to explore the less commonplace aspects of modern existence. Occasionally, the mundane still has the power to amuse -- think about the craze created around The Situation's summertime Jersey Shore regimen of G.T.L. (Gym, Tan, Laundry) -- but it's not like it was. For a few years there, watching people's lives was all we really wanted to do.

V. Reality TV still has a massive footprint on television, but all but the biggest hits have moved back to cable, where they help fill the endless hours. That isn't to say that reality TV's influence isn't felt in a variety of programs. The confessional, perhaps The Real World's most important innovation, plays a key role in a new breed of sitcom. The casts of The Office, Parks and Recreation, and several other shows often sit alone in a room and confess their thoughts to the camera in a direct address. These shows revel in the mundane, appropriating the reality of The Real World and adding to it the polish of scripted drama; in the process, they bring back some of the imperfections of the early days of reality TV. It's difficult to say exactly why we retreated from reality television. My own theory is that the watershed moment was the 9/11 terror attacks, a media event that was just a little too real. After we'd seen that, reality was dead, so to speak. We needed something other than ourselves, bigger than ourselves. HBO had already begun the counterrevolution, airing The Sopranos in 1999, and continuing with Six Feet Under before finally reaching its apex with The Wire. These were long-form narratives the likes of which a television audience had never seen. 
Where television had seemed hopelessly shallow a few years earlier, suddenly it was entering a golden age. Soon the networks were following suit, bringing out a series of expensive, indulgently fantastic dramas, most notably Lost, Heroes and 24. It might seem like a stretch to call the late surge of "quality" scripted dramas a direct reaction to the glut of reality TV that permeated the networks in the late 90s, but it appears to be the case. Television moves in a somewhat cyclical manner, with each new generation proclaiming the death of the sitcom. Perhaps each subsequent generation will proclaim the death of reality TV. If they do, they will be wrong, as the reality shows are proving as durable and adaptable as the sitcom, and it's no surprise that MTV leads the pack in innovation. Just when it looks like The Real World is running on fumes, The Hills emerges from the ashes of Laguna Beach to become a phenomenon. As The Hills wanes and Lauren Conrad decamps for the more lucrative world of young adult fiction, Jersey Shore arrives, tanned and fist-pumping its way into the zeitgeist. In the world of reality, Ecclesiastes was right: "There is no new thing under the sun." [Image credits: MTV]
● ● ●
1. The book review is dead. At the very least, it’s very obviously dying. Anyway, we can all agree that it should be killed off, because it’s gotten to be irrelevant. If not downright parasitic. (Though maybe it might be salvaged if the average review were a little meaner.) I exaggerate only slightly here. This past August, a pair of meta-critical essays by Dwight Garner in The New York Times and Jacob Silverman in Slate sent everyone who fancied him- or herself a critic — whether institutionalized or not — into a collective fit. It was probably the biggest literary-cultural dustup since the Great MFA Debate of 2010-2011, when Elif Batuman’s London Review of Books article, “Get a Real Degree,” made everyone feel just a little bit bad about the existence of MFA programs. I found it hard to get terribly worked up about literary criticism’s emotional register. For every Laura Miller or Lev Grossman who has forsworn negative reviews, I know that there will be just as many qualifiers for the Hatchet Job of the Year Award to fulfill the angry review quota. For every purchased five-star review, there will be that lady on Goodreads who says that the only good thing about the new Junot Díaz novel is that it taught her the Spanish word sucio. But enough about the State of the Art! I enjoyed all of these essays, but the one thing that struck me was that they were all essentially negative, in the sense that they set out to describe how things were going wrong or why things ought not to be the way that they are. What they didn’t do a very good job of was describing what criticism or book reviewing is, or what it should be. Okay, there were some nice, bold mission statements thrown in there. Here’s Dwight Garner: “What we need more of…are excellent and authoritative and punishing critics.” Agreed. 
Or Daniel Mendelsohn, in The New Yorker: “the critic is someone who, when his knowledge, operated on by his taste in the presence of some new example of the genre he’s interested in...hungers to make sense of that new thing, to analyze it, interpret it, make it mean something.” Sounds great. Or Richard Brody, again in The New Yorker: “Criticism is the turning of the secondary (the critic’s judgment) into the primary.” Sure, why not. So I think we can all agree that A) the “book review” is a prestigious class of writing that people aspire to write, and B) there is a continuum of, shall we say, critical perceptiveness — what in the pre-everyone-gets-a-trophy age we might call “value” or “quality” — on which the multiple-thousand-word, tightly-argued essays of the New York/London/L.A. Review of Books reside at one end, and the rapid reactions of John Q. Tumblr reside at the other. (By the way, I don’t want to suggest that there is something philosophically corrupt or intrinsically wrong about the latter, or that just because something is edited and not self-published, it is automatically better than a blog post. Advanced degrees, journalistic credentials, and/or getting published in hard copy are no guarantee that a book review is any good. See, for example, Janet Maslin’s misreading of This Bright River.) But what should this excellence and interpretation and maybe a little bit of hard-headedness actually look like, in practice? Why has it been absent? And why does any discussion about literary criticism turn into a giant game of dodging the question, as if the concept of a book review were like the concept of pornography, in that you might not know how to define it, but you’d know it when you see it? 
In the interest of getting everyone on the same page (book pun!), I thought it would be an interesting exercise to dissect a book review, to pick apart what makes it work or not work, what makes some book reviews great and others — most of them, really — bland and unhelpful and immediately forgettable. Because book reviewing is a genre with its own conventions, no less than the murder mystery that must start with a body or the epic fantasy that must feature elvish words with too many apostrophes. It’s worth figuring out what those conventions are. 2. In the beginning, there is ego. As George Orwell put it in his essay “Writers and Leviathan”: “One’s real reaction to a book, when one has a reaction at all, is usually ‘I like this book’ or ‘I don’t like it,’ and what follows is a rationalization.” The decision to like or not like a book is where every book review begins. This is what gives the genre its underlying suspense — will Michiko Kakutani like this book or won’t she? — but also its frustrating sense of chaos, because no matter how technically sound or philosophically sophisticated or beautiful a book might be, something minor or tangential can turn off a reviewer so much that he or she decides the book is not good. A lot of book reviews never get past this first stage, and this is where the whole free-for-all of online reviewing can get frustrating. For instance, the Goodreads lady on Junot Díaz, or the people who unironically give one-star reviews to classic literature: all of these reviews consist entirely of the initial response and a subsequent explanation, and no self-reflection about whether there might be more to the book — and to the reviewer’s response — than that initial, emotional decision. If the nauseating chumminess of the publishing world is the Scylla of book criticism, then this kind of reviewerly narcissism is surely its Charybdis. 
But hopefully, no matter how much reviewers are in love with themselves, they will at least step aside and say a few things about the book. In the case of fiction, its plot, its characters, some of the backstory, and the setting. In the case of nonfiction, the overall narrative or argument of the book, the author’s source material and expertise in the subject matter. This is the next stage in the evolution of a book review, and it is plain nuts-and-bolts stuff. But it’s so important to do readers this simple courtesy because, unlike oil filters or frying pans, books are so shaped by authorial decisions and whims that two titles within a genre, or a sub-genre, or even a sub-sub-genre, may vary wildly in so many ways. Is the protagonist of this hard science-fiction story an astrobiologist on a generation ship or a detective on an asteroid base? And so on. This is where things start to get complicated. The average paid reviewer gets one scant paragraph in Publishers Weekly, maybe four or six in your average major metropolitan daily, to appraise a book. And more often than not, they splurge on summary — often to the exclusion of everything else. So their concluding paragraphs tend to be a little overstuffed, as these recent examples show:

But finally, of course, this kind of rigidity exacts its own price, and Natalie can’t avoid paying. Each of the novel’s sections ends with a scene of violence, something Ms. Smith presents as inescapable in northwest London. Some characters die from it, others survive, but none are unscathed. What Ms. Smith offers in this absorbing novel is a study in the limits of freedom, the way family and class constrain the adult selves we make. In England, the margin for self-invention is notoriously smaller than it is in the U.S. — which is one reason why Ms. Smith, with NW, seems more than ever a great English novelist. 
(Adam Kirsch, review of NW in The Wall Street Journal)

There are moments here and there in Telegraph Avenue where Mr. Chabon himself sounds as if he’s trying very hard “to sound like he was from the ’hood,” but for the most part he does such a graceful job of ventriloquism with his characters that the reader forgets they are fictional creations. His people become so real to us, their problems so palpably netted in the author’s buoyant, expressionistic prose, that the novel gradually becomes a genuinely immersive experience — something increasingly rare in our ADD age.

(Michiko Kakutani, review of Telegraph Avenue in The New York Times)

The Revised Fundamentals of Caregiving deals with sorrow and disability and all the things that can go wrong in life. But mostly Evison has given us a salty-sweet story about absorbing those hits and taking a risk to reach beyond them. What a great ride.

(Ellen Emry Heitzel, review of The Revised Fundamentals of Caregiving in The Seattle Times)

In other words, you can see where these reviews are trying to do too much with too little space. Trying to sum up the quality of the prose with a few abstract descriptors. Making a final plea for the cultural relevance of the book. Ending on a gnomic, life-affirming mantra. And all this in fewer than 100 words. The fact that these reviewers are reaching for something beyond what they have space to cover is, to me, a tacit admission that there is more to be done here; that saying “Here is the plot of the book, and here is a pile of adjectives to show how much I (dis)liked it” just barely scratches the surface of what book criticism can cover. But if you’ve already done all that and you still feel that readers ought to take away one more big idea — what do you do? 3. Matt Taibbi hated The World Is Flat and Hot, Flat, and Crowded. He hated their titles. He hated their premise. Hated their predictability, their utter lack of real insights, and most memorably of all, hated their language. 
In his reviews of Thomas Friedman's two books, Taibbi tracked dozens of bizarre proclamations and stretches of just plain bad writing, from Friedman's initial confusion of herd animals with hunting animals to his final, triumphant-until-you-think-about-it graph of freedom vs. oil prices, which used four data points selected basically at random to make a point about the march of democracy across the globe. ("What can't you argue, if that's how you're going to make your point??" wrote Taibbi, two question marks included.) This might make Taibbi sound like a prescriptivist grump, a Grammar Nazi who just happened to find the one guy famous enough, and a bad enough writer, to deserve this kind of thrashing.

Except that the reviews do more than that. It turns out that Friedman's "anti-ear" is actually the most obvious symptom of a larger case of intellectual and moral fraud. In Friedman's world, the rules of basic logic and historical causation do not exist; he invents new realities out of a few cherry-picked events and the limited frame of reference of a privileged, jet-setting columnist based out of New York City. At bottom, this entire review stems from something any of us can do: trying to gauge the quality of Friedman's writing and thought. But Taibbi manages to do more than wag his finger at Friedman for writing poorly — he discovers something important and true that we didn't know before and, more importantly, couldn't know just by taking Friedman at his word. So Taibbi passes Daniel Mendelsohn's "meaning" test, because we now know something new about Friedman's book that we didn't before. He certainly passes Dwight Garner's bar for being both excellent and punishing. This is not simple aesthetic snobbery: it's formal criticism that actually matters.

Then there is the big picture.
It's hard to get much bigger than James Wood's famous 2000 proclamation: "A genre is hardening." In that essay, he identifies the "perpetual-motion machine that appears to have been embarrassed into velocity" that characterized novelists like Don DeLillo, Salman Rushdie, and, above all, Zadie Smith, whose then-new book White Teeth Wood was reviewing. These practitioners of "hysterical realism," Wood argued, were to the novel what the van Eyck brothers were to medieval painting — artists who thought that conceptual virtuosity and an inexhaustible supply of detail could substitute for a plausible, profound exploration of the human experience.

Instead of treating the text as a mirror of the writer's psyche, this kind of criticism assumes that the novel in question mirrors some shared worldview, brought on not just by the writer's personal choices (of character, setting, plot, and so on) but also by the context in which he or she is writing. In the case of the hysterical realists, they are so in love with their grand, underlying, and basically untrue idea — everything and everyone is interconnected and subject to the same fate — that they have to make their characters essentially inhuman to make their plots work.

But you don't have to be present at the birth of a genre to do this sort of criticism. Rosecrans Baldwin discovered a trope almost as old as the modern novel — the "distant-dog impulse" — running from Tolstoy to Picoult. Evgeny Morozov tracked not only the intellectual vacuousness but also the stylistic commonalities imposed by the new line of TED Books. What's going on here? Elif Batuman explains that all of these reviewers are looking for context in the morass of personal and artistic choices that go into every piece of writing:

Literature viewed in this way becomes a gigantic multifarious dream produced by a historical moment.
The role of the critic is then less to exhaustively explain any single work than to identify, in a group of works, a reflection of some conditioned aspect of reality.

Maybe it doesn't sound great when reduced to a mission statement like this — in fact, I think it sounds vaguely totalitarian, especially when you consider that this sort of criticism is called "Marxist criticism" in academic circles. But in practice, it definitely works.

4. So. Reaction. Summary. Aesthetic and historical appraisal: these are the four classical elements of literary criticism. To that I might add that it helps to be negative — of the twelve reviews I quote here, eight are at least moderately negative, and about five are relentlessly so. That people are even having this conversation about the supposed niceness of book reviews is great: it shows that book reviews are anything but irrelevant.

And now that we've teased out the ground rules of what can and should go into a book review, it's time to turn you loose. You now have the tools to cut through the morass of literary criticism and decide for yourself not only whether a book review is worthwhile, but why. You can critique the critics. You can be a meta-Michiko. Use this knowledge wisely. As for me, I eagerly await the next big, invented crisis to strike the world of literature. I hope it involves deckled edges.

Image via Wikimedia Commons
● ● ●