In 1977 humankind dispatched its first ‘message in a bottle’ to the stars. Piggybacking on NASA’s regular missions, the deep-space probes Voyager 1 and Voyager 2 were equipped with gold phonograph records intended for the eyes and ears (in a manner of speaking) of extraterrestrials. The probes eat space at a leisurely 60,000 kilometres an hour and, although not aimed at anywhere in particular, in about forty thousand years they will fly by less than two light-years from the stars Gliese 445 and Ross 248, respectively.
The LPs themselves are time capsules intended to communicate the fundamentals of life on Earth. They contain a variety of sounds that include surf, wind, and thunder, songs of birds and whales, music from Mozart to Chuck Berry, and greetings in dozens of ancient and modern languages. They also carry photographs and diagrams of mathematical and physical constants, the Solar System and the structure of DNA, and human anatomy and procreation. The idea is that all this ought to alert aliens that there is civilization on Earth—which is stupid because, for decades now, anyone out there watching the skies would know that our planet is inhabited by technological beings.
On the cosmic scale the first indication of intelligence is radio. A planet with active transmitters can be detected as soon as they power up into the gigawatts. From outer space, the artificial nature of Earth’s radio intensity is beyond argument. Thanks to television, our planet’s emissions in the shortwave range exceed the equivalent emissions from the sun. Natural phenomena such as discharges from atmospheric storms could not possibly account for this, making clear that our race is producing culture and beaming it around the globe.
The amount of cultural information broadcast on Earth has quite literally reached cosmic proportions. At the same time, broadcast TV is only a fraction of the total. What about radio or, for that matter, cable TV? What about the Internet which, running over coaxial or fiber-optic lines, leaves nary a trace on the cosmic radar? What about the gamut of the print media, from books and comic books to newspapers, journals, magazines, and newsletters which, belying decades of prognostications of imminent demise, is expanding at an inflationary rate?
Each year the cultural infosphere on Earth grows at a rate that is almost incomprehensible. To describe it calls for numbers so large that, ironically, they are better expressed verbally rather than numerically. A bit, so called since Claude Shannon’s 1948 paper on the mathematical theory of communication, is a binary digit: a zero or a one. A byte, most commonly eight bits, is a unit of information for digital computers. The first microcomputers’ storage capacity was calibrated in kilobytes. They begot megabytes, which begot gigabytes, which begot terabytes, each of them three orders of magnitude greater:
kilobyte × 1,000 = megabyte; megabyte × 1,000 = gigabyte; gigabyte × 1,000 = terabyte
“I have swam through libraries”, mused Herman Melville’s young cetologist Ishmael, likening his search for knowledge to traversing the limitless lengths and widths of Earth’s waterways. That was more than a century and a half ago, and since then our oceans have shrunk a great deal, only to be replaced by seemingly limitless oceans of information. By all accounts, we spawn data on Earth at a rate that makes bacteria in agar look like slackers, a state of affairs virtually synonymous with the Internet. To YouTube alone, users upload upward of thirty hours of video every minute of every hour of every day of the year. This amounts to an hour of showtime every two seconds—and the pace is quickening.
If these trends continue, by 2020 Google will have tagged close to 100 trillion discrete internet addresses. That’s 1 followed by fourteen zeroes—a reflection of the little-appreciated fact that Google’s search algorithms do full-text-search-of-everything-that-is-on-the-Internet, sniffing out and cataloguing every bit of data among the cosmic amounts already out there. Similarly, its spellchecks do not use dictionaries but word-use probabilities from the entirety of cyberspace, with the result that they can correct misspellings even of nonstandard terms and locutions, such as surnames or neologisms.
All this tagged cornucopia is, of course, no more than a fraction of the tens of exabytes of information estimated to have been produced in the history of our civilization. And the production is accelerating. In just one night, just one bank of cameras on just one telescope (the LSST in El Peñón, Chile) throws off terabytes of data, the equivalent of 1,000 sets of Britannica:
terabyte × 1,000 = petabyte; petabyte × 1,000 = exabyte; exabyte × 1,000 = zettabyte
The last number in this series is virtually beyond anybody’s ken.
Yet estimates of the information stored just on the World Wide Web are on the order of several zettabytes: several trillion gigabytes. Most of us could store our life’s work on a single compact disc, and do so many, many, MANY, MANY times over. If you were to store just one zettabyte of information on high-density CDs, the cylindrical stack would reach past the Moon.
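For the sceptical, the stack survives a back-of-the-envelope check. A minimal sketch, assuming a 700-megabyte disc about 1.2 millimetres thick and an Earth–Moon distance of roughly 384,000 kilometres:

```python
# Rough check: how tall is a stack of CDs holding one zettabyte?
# Assumed figures: 700 MB per disc, 1.2 mm per disc, Moon ~384,000 km away.
ZETTABYTE = 10**21            # bytes
CD_CAPACITY = 700 * 10**6     # bytes per disc
CD_THICKNESS = 1.2e-3         # metres per disc
MOON_DISTANCE = 3.84e8        # metres

discs = ZETTABYTE / CD_CAPACITY               # over a trillion discs
stack_height = discs * CD_THICKNESS           # metres
print(f"{discs:.2e} discs, stack {stack_height / 1000:,.0f} km high")
print(f"{stack_height / MOON_DISTANCE:.1f} times the Earth-Moon distance")
```

On these assumptions the stack does not merely reach past the Moon: it overshoots it several times over.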
Put these numbers together, and they add up to a future that is as hair-raising as it is inevitable: the end of cultural history. A zettabyte is, after all, more than a mind-dwarfing number. It is a mind-dwarfing problem with dismal consequences for the entire civilization on Earth. Every index of every area of cultural production everywhere around the world is on the rise. As a consequence, the date that divides all the information ever produced into two halves is inching ever closer to the present. Once it lagged millennia behind, then only centuries. Today it has crept to within forty years or less.
We are no longer talking about an embarrassment of riches but, culturally speaking, about the end of the world as we know it. In 2011, in an article titled “Too Much Information”, The Economist calculated that the quantity of data stored on planet Earth doubles every eighteen months. Whether the exact period is eighteen, eight, or twenty-eight, the function that maps the processes that gave us the zettabyte problem is exponential. In plain English, it means that every day our civilization edges closer to an event horizon at which all of our cultural past becomes a negligible fraction of the cultural present in which we are destined to be trapped forever.
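The arithmetic of doubling is what gives the problem its teeth. A minimal sketch (the eighteen-month figure is The Economist's; everything else follows from it):

```python
# If the global data stock doubles every 18 months, what fraction of
# today's total already existed a given number of years ago?
DOUBLING_MONTHS = 18

def share_of_past(years_elapsed: float) -> float:
    """Fraction of today's total that existed `years_elapsed` years ago."""
    doublings = years_elapsed * 12 / DOUBLING_MONTHS
    return 1 / 2**doublings

for years in (1.5, 15, 30):
    print(f"{years:>4} years ago: {share_of_past(years):.6%} of today's total")
```

At this rate, everything produced thirty years ago amounts to less than a ten-thousandth of a percent of the present stock: the "negligible fraction" in numbers.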
A Fool’s Errand
Around 350 BC, Speusippus, nephew of Plato, embarked on what even then was a fool’s errand: collecting and collating all knowledge of his time (in a single volume, no less). Within a generation of his quixotic quest, Ptolemy I founded the Library of Alexandria, which at its height housed three-quarters of a million scrolls, some containing entire codices. In the first century of our era, the Roman scholar Pliny the Elder compiled his monumental Historia Naturalis in thirty-seven volumes. Early in the fifteenth century, Ming Dynasty encyclopedists completed the Yongle Canon, a knowledge database in eleven thousand volumes (digitized in 2002).
Six hundred years later, the volume of the printed word virtually defies understanding. In the second decade of the third millennium, the number of new book titles published every year around the world exceeds three million. In 2013, more than a million and a quarter titles, old and new, were released in America alone—five times more than just a decade before (so much for the death of the book at the hands of television and the Internet). A cross-check of publishers’ lists suggests that between four and five hundred paperbacks appear every month in the United States. Although America does not even come close to Japan, where nearly half of all books and periodicals sold are manga cartoons, it still sells sixty-five million comic books every year. A typical collector owns more than three thousand of them and will spend a year of his life reading them—at least according to Unbreakable, a 2000 movie that took pulp fiction and its heroes at face value.
By now Google has digitized about 15 million of the conservatively estimated 150 million book titles published since the invention of the Gutenberg press. The actual number could be as high as 250 million: a quarter billion books in the last five and a half centuries, plus millions more every year in our millennium. In 2011 the British Library teamed up with Google to put another quarter million out-of-copyright books—40,000,000 pages in total—online. With the Internet colossus footing the bill, other libraries are lining up to get in on the action.
An illustrative microcosm of the world of publishing is provided by the Association of American University Presses. At its formation in 1932, only eight publishers attended the first meeting of directors. Fifty years later, there were 70. At the turn of the century, 125. Today, 136. Tomorrow? No one knows, except that it will be more. The same trends are mirrored everywhere else. A little over half a century ago there were one hundred thousand science journals in the world. Today there are more than a million. On the word of The Economist, in the same half-century the number of journals in economics shot up from eighty to eight hundred.
Any way you count it, this explosion of the printed word adds up to one and the same thing: half of all the books ever printed and half of all the wordsmiths who ever put pen to paper have come onto the scene roughly since the death of Agatha Christie. Put differently, half of all the writers who have ever lived are living still. Given the unstoppable population growth and the lengthening of the average lifespan, this fraction seems again destined to increase asymptotically until the day when nearly all writers in history will be creating in the eternal present.
Alas, the prevailing response to the zettabyte problem is to play ostrich and bury our collective head in the sand (actual ostriches do not exhibit such self-deceptive behaviour). “Anything that puts more kinds of art in people’s hands in a way that fosters competition, innovation and creativity is good”, a top-ranking executive at an IT research company recently argued. He is not the first to embrace expansion as a cure-all, and not the first to ignore that our hands are only so big. But as a de facto spokesman for the Bigger-Is-Better cultural lobby, he hit the nail right on the head.
In 2010 more than two million new book titles were published in the world by traditional methods—meaning, by completing the slalom course of professional vetting and editing—only to land on top of those already deposited in public libraries inflated to Borgesian proportions. The difference between traditional (or professional) publishing and the other kind is enormous insofar as, according to Bowker, 2010 also saw a bumper crop of three-quarters of a million self-published titles in America. A swarm of 2-E-Z apps that enable assembly and upload of ebooks in minutes guarantees that this rush of vanity publishing will continue to intensify.
As if to drive the point home, during the last decade the number of authors self-publishing their titles online has expanded by more than a quarter, and their sales by half—per year. Some of this e-commerce is handled by the authors themselves, who use print-on-demand services and act as their own distributors, marketers, and sales teams. The lion’s share, however, is routed via Amazon, which plugs this paperless traffic into the world’s biggest online market and skims huge profits from the royalties. Indeed, as the world’s biggest provider of cloud computing, the company boasts that it adds as much server capacity every day as it needed to run everything a decade ago.
Like it or not, we live in the age of infoglut and infogluttony. The problem with the zettabyte problem is that, even though the proportion of what is culturally valuable to the totality of the cultural information may not have changed (and how would you know?), multiplying both a millionfold has the effect of obscuring the former as effectively as if it were not there at all. It may take a long time, but you can be sure to find a proverbial good book in a thousand. But you will never find a million good books in a billion. So much for the bromides of serving the public by putting more and more art in people’s hands, or at their fingertips.
The spectre of unknown numbers of contemporary Shakespeares lying buried under pyramids of paper that no one could hope to burrow through should be enough to look for some kind of capping mechanism to the inflationary trends exhibited by our culture. These have, indeed, assumed crisis proportions, with some academic journals temporarily suspending submissions in order to dig themselves out from backlogs, and some professional bodies contemplating drastic policy changes to alleviate the worst symptoms of the ‘publish or perish’ infoglut in their domains.
Meanwhile, every year the Frankfurt Book Fair showcases close to half a million books from more than seven thousand exhibitors from around the world. The book culture is thriving, even as it migrates into electronic formats, at least for readers who prefer their Dostoyevsky or Paretsky in pixels rather than in inkblots. Bucking the trend, Stephen King first released his 2013 novel Joyland in paper only, despite being an ebook pioneer. Still, in less than ten years since Amazon introduced the Kindle, the volume of ebooks has grown so fast that they now account for half of Amazon’s book sales. Just one academic publisher, Taylor and Francis, boasts more than 50,000 ebooks in the humanities, social sciences, and education alone.
Still, there appears to be no cure for the infoglut itself. Although a voluntary publishing cap should be in everyone’s interest, it will never work so long as it is also in everyone’s interest to defect. There is, after all, a minimum payoff—publication—that every writer can secure on his own and, in a competitive game without binding agreements, this is the maximin strategy every rational individual ought to pursue. There is, in short, no incentive to abstain from aggravating the zettabyte problem, short of an imaginary Save the Human Race Foundation that would pay writers not to write, as in the mad world of Catch-22, where the more alfalfa farmers do not grow, the more they get paid.
7,000 in a Lifetime
With fewer brick-and-mortar bookstores left every year, consolidation is the name of the game in the bookselling business. By the early 1980s Doubleday remained the only big-name family-owned publisher, before it too was swallowed by Bertelsmann, the German communications conglomerate that served as a model for the media empire fought by Bond in Tomorrow Never Dies.
In July 2012, capping a wave of metamergers, Random House and Penguin fused into a corporate leviathan that controls almost a quarter of the world’s book publishing. Although spun as good news for booklovers, any direct link between quantity and quality in the world of publishing, not to mention the world of art, is yet to be established.
One thing, however, is certain: with ever more books around and buyers spoiled for choice, these days it takes a lot more than a knack for telling a good story to stand out as a writer. Hemmed in by millions of competitors for readers’ and reviewers’ time, authors need more than ever to think strategically in order to differentiate themselves and their brand from the countless millions of others on tap and break out from the shadows of zettabyte anonymity. Welcome to the culture of celebrity gushing and bashing, in which creators compete less with history than with the zettabyte present.
Today even established names, from John Grisham to the Bone-Farm forensic anthropologist Jefferson Bass, who previously relied on public-relations departments of their publishing houses, hire their own promo teams to help them and their product stand out from the crowd. With the amount of print devoted to books and book reviews shrinking faster than a virgin from a French kiss, publicity campaigns are waged online with the help of social-media chatbots and algos that ‘blug’ (blog and plug) writers and novels with the mechanical monotony of a Dan Brown.
Big-name endorsements still work as authordisiacs, just as they have since the beginning of time. When Barack Obama bought an armful of books at Martha’s Vineyard in 2011, they included a selection of country noirs by Daniel Woodrell, whose books have been flying off the shelves since. Obama’s Democratic predecessor, that consummate populist and intellectual Bill Clinton, also enjoyed his mysteries, even as he curried presidential gravitas by joking about his cheap-thrills addiction. Still, two thumbs up from a fan in the Oval Office turned Walter Mosley and his Negro private investigator, Easy Rawlins, into instant celebrities.
Film deals, critical kudos, and book prizes generate the buzz that gets the author and the book onto the bestseller lists—and vice versa. Most charts do not, after all, measure volume but velocity. It is not, in other words, how many books you sell, but how fast they move per unit of time (with pre-orders all counting toward the first week, marketers pump up pre-sales to vault a book onto the list and beat the zettabyte crowd). In 2014 the New York Times implicitly bowed to the zettabyte problem by doubling its main bestseller lists and enlarging the number of categories.
Another acute symptom of the zettabyte problem is the levelling of literary distinctions, whereby branding trumps the actual contents of a book or a book review. For recognized authors, the system still works, after a fashion. Plaudits in the New York Times Book Review or the Los Angeles Times Book Review still tend to pump up sales, whereas pans tend to bring them down. For everyone else, however, it is pure Alice in Wonderland. Two thumbs up or two thumbs down? No matter. Both will give you legs.
The reason, rooted in human psychology and ultimately biology, is that, while our capacity to produce information has assumed cosmic dimensions, our processing capacity has remained unchanged from the days of cave art at Lascaux or Altamira. The memory of a review of an unknown book by an unknown author fades quickly among millions of others, whereas the memory of the eminent venue in which it appeared lingers. As advertisers and other spin doctors know all too well, brands simplify choices, and especially so in the world that makes a fetish of inundating us with too many.
There is, of course, nothing new in the old saw that any publicity is good publicity, except that it reveals the extent to which the zettabyte problem has undermined the entire literary system, beginning with the critical superstructure. Tasked by cultural institutions with curating art in the name of values that transcend the merely aesthetic and extend to the social and even the political, the literary system fights a rearguard battle. With throughput channels clogged up by the zettabyte problem, encounters between art and art-critics increasingly amount to random Brownian motion.
This state of affairs has dramatic repercussions for all those who devote their lives to books, be they scholars, critics, publishers, or just readers at large. Preliminary data I have collected on all continents save Africa and Antarctica suggest that even the most avid bookworms rarely average more than a book a week. Taking double that, and assuming optimal conditions—no rest, no re-reads, no memory loss—this still gives only about 100 books a year, or 7,000 in a lifetime. This is the upper bound on the literary database on which taste-makers can base their judgments (in practice, it is much smaller).
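The bound is easy to reproduce. A sketch under the generous assumptions above (two books a week, seventy reading years, against the high-end estimate of a quarter billion titles ever printed):

```python
# Upper bound on a lifetime of reading, under deliberately generous assumptions.
BOOKS_PER_WEEK = 2        # double the avid bookworm's average
WEEKS_PER_YEAR = 52
READING_YEARS = 70        # no rest, no re-reads, no memory loss
TITLES_EVER_PRINTED = 250_000_000   # the high-end estimate cited above

per_year = BOOKS_PER_WEEK * WEEKS_PER_YEAR    # about 100 books a year
lifetime = per_year * READING_YEARS           # about 7,000 in a lifetime
print(f"{per_year} books a year, {lifetime:,} in a lifetime")
print(f"Share of all titles ever printed: {lifetime / TITLES_EVER_PRINTED:.5%}")
```

Even this idealized reader covers only a few thousandths of a percent of the print record, which is the point of the section that follows.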
A few thousand against a quarter billion sounds pretty pathetic, even when you factor in that a sizeable portion of the total is nonfiction (not that nonfiction cannot be artsy: the 1950 Nobel Prize for Literature went to the logician and philosopher Bertrand Russell and the 1953 one to politician and biographer Winston Churchill). And it is at this point that cultural conservatives execute a methodological sleight-of-hand to stave off the zettabyte problem. Can’t read all that is out there? Don’t need to. All you need to do is convince yourself that it is formulaic and cheap, and 98 percent of literature can be tossed out of the window on a trash heap of popular culture.
The argument is simple—almost aphoristic: once you’ve read one genre paperback, you’ve read them all. Note that this line of reasoning would be accurate if books were like electrons, every one identical and invariant to the examining eye. It is true, after all, that once you’ve seen one electron, you’ve seen them all. Poke it and probe it till Judgment Day, and it will still show you the same face as today. But books are not like that. What from the Ivory Tower looks like a homogeneous mass, from up close reveals distinctions as profound as those professed on behalf of the literary classics.
Although few intellectuals would state their case so forthrightly, sooner or later their bias comes to the fore. Denis Dutton exemplifies this ‘scratch a progressivist and watch a purist bleed’ in his bestselling book The Art Instinct (2009). Laying his cards on the table, he declares that “high art traditions demand individuality”. The implication? The unvariegated mass of pulp fiction does not demand individuality, hence it can be dismissed en masse. Dutton holds these truths to be so self-evident that he does not even argue in their defense, content to advance them by fiat.
Except that popular art prizes individuality no less than high art. No need to look further than the arguably most ossified novelistic genre of all: crime mystery. It does not take a connoisseur to individuate the urbane quirkiness of Donald Westlake, the gradient-defying plots and villains of Elmore Leonard, the liposuctioned aesthetic of James Ellroy, the imp-of-the-perverse psychopathology of Michael Collins, the post-comradely flavour of Martin Cruz Smith, the forensic jigsaws of Kathy Reichs, the kosher-deli comedy of Kinky Friedman, the Möbius-twisted mindgames of Jeffrey Deaver, the public conscience of Ruth Rendell, the complex Creole gumbo of James Lee Burke, the modern Bostonians of Dennis Lehane—and so on, and so forth.
Conceding that the above are all examples of individuality would make Dutton’s argument (tautologically) true at the cost of abandoning his distinction between genre fiction and high art. But his list of literary artists—Homer, Shakespeare, Cervantes, Austen, Dostoyevsky—reveals that he is locked up there with Rapunzel in a high tower, lording over a forest of literary entertainment. People make art to please one another, he concedes a little later on. “There is a cool objectivity, however, about the greatest works of art: the worlds they create have little direct regard for our insistent wants and needs; still less do they show any intention on the part of their creators to ingratiate themselves with us”.
So this is how to separate high art from generic entertainment. If it disregards human wants and needs, it is a timeless masterpiece. If it ingratiatingly tries to please, it is not. Where does that leave Shakespeare, whose Prospero claims in the Epilogue to The Tempest that the players’ aim was merely to please, echoing the end of Twelfth Night, when Feste sings of striving “to please you every day”? In fact, Dutton’s Darwinist theses as a whole undermine his highbrow prejudices. For if art channels something universal in terms of our biological wants and needs—and he leaves no doubt that such is the case—then great art must do so as well, which in his terms would mean that it is not great art, after all, but ‘mere’ entertainment.
The Godfather meets The Count of Monte Cristo
The charge that popular fiction is unindividuated and formulaic is true only to the same extent that it is true of highbrow fiction. After all, what is unindividuated, and at what level of comparison? All monasteries are alike by dint of being monasteries, yet if you look more closely, no two are identical, and most are not even similar. Likewise, no two mysteries are identical, even if all are alike, once you look at them the right way. What is Macbeth, after all, if not Crime and Punishment meets “The Tell-Tale Heart”? What is The Great Gatsby if not The Godfather meets The Count of Monte Cristo? What is Tough Guys Don’t Dance if not Kiss Me, Deadly meets Naked Lunch?
Genre writers create within an established aesthetic that attracts readers by advertising the type of game to be played for their pleasure. A clear analogy with sport stems from the fact that, although in football or basketball the rules of the game are also known ahead of time, fans flock to them all the same just because no one can tell in advance how a particular engagement will play out. Unlike athletes, of course, writers can tweak the rules of the game in search of the optimal mix of convention and invention. In this, genre fiction is once again no different from high art, which also prizes formula and (self-)imitation, although under the guise of style.
The annual Bad Hemingway and Faux Faulkner contests could never work—and could never be such riots—without readily identifiable formulas to spoof. Their rules are simple enough. Entrants submit one-page samples of Papa’s and Pappy’s style, and the least masterful among them wins the honours. Crime ace Joseph Wambaugh was only one of the countless parodists who over the years have run amok with the Nobelists to the appreciative groans of judges and kibitzers delighted to see stylistic excesses punished and reputations taken down a notch.
No formula, no style.
When it comes to genre art and to importing highbrow notions of taste where they do not belong, few missteps, however, can rival Ruth Bunzel’s in her classic ethnographic study of North American pottery of the Hopi nation. Hopi women—only women decorate pottery—are known for their sophisticated aesthetic, the central component of which is their veneration of originality. With ill-masked disdain, Bunzel reported, however, that their painted designs were essentially identical, often differing in elements so minute as to be almost negligible.
Decrying the sterility of the art and of the art-makers’ aesthetics, the critic failed to take into account the tradition—that is, the genre—in which the potters worked. Within that genre, the hallmark of originality is the use of variations on inherited elements. It is as if Bunzel decried Wyatt’s variations on the Petrarchan sonnet, or Drayton’s variations on Wyatt’s, or Surrey’s variations on Drayton’s, or Sidney’s variations on Surrey’s, or Shakespeare’s variations on Sidney’s as sterile.
Indeed, if you believe Christopher Booker, the infinite variety of literary storylines hides just seven fundamental plots, albeit in a myriad variants, varieties, and variations. As if to highlight the arbitrariness of all such literary structuralism, after advancing the fundamental seven— rags to riches, voyage and return, tragic overreach, rebirth, comic chaos and happy ending, quest narrative, overcoming the monster—Booker added two more ‘fundamental’ categories: mystery/crime and rebellion (never mind that both are quests to overcome the monster).
Another place where his typology collapses is the advent of high modernism and postmodernism, with their turn toward autotelism and self-deconstruction, manifest in the focus on the act of telling at the expense of the tale. But once you get past the procrustean schematic, the wealth of literary examples, which range over aboriginal folk tales, beast fables and fairy tales, epics from antiquity, operatic librettos, epistolary novels, Wilkie Collins thrillers, Victorian multi-deckers, and Bollywood blockbusters, exhibits similarities that cut across literary kinds, genres, and not least, brows. So much for high art as a paragon of individuality and for genre art as a paragon of sterility.
Story formulas—structures of incidents, as Aristotle called them —hook us afresh because of our interest in the fundamental patterns of human existence. Essentially unchanged since the beginning of history, these insistent wants and needs account for our unflagging pleasure in consuming storylines familiar from time immemorial. This is what the Russian morphologists and formalists intuited already at the beginning of the twentieth century, even though they could not explain it without the tools of modern evolutionary literary studies (evolist).
Today we know that the answer lies in our universal propensity for thinking in stories—so universal that it forms an inalienable part of our nature and, as such, an inalienable part of both our emotional and intellectual lives. Percipient as ever, back in 1757 David Hume himself appealed to human universals in a bid to tackle the greatest problem in aesthetics and art: the phenomenology of taste. The general principles of taste, he enunciated in “Of the Standard of Taste”, are uniform in all human beings. Recognizing, naturally, that people are actually given to strident disagreements in aesthetic judgments, he concluded, therefore, that we must be prone to errors.
Although, on his account, these errors are systematic in nature, they cannot be innate, since that would contradict his major premise of the principles of taste being uniform. All misjudgements of taste must, therefore, be attributed to the coarsening of our natural faculties, either through disuse or ill-use. This neoplatonic position resurfaces in Hume’s remark that inadequate knowledge distorts judgment (sample bias can lead to error), even as he neglects to elaborate what counts as adequate knowledge or even an adequate database.
In the end, recognizing that he has painted himself into a corner, Hume executes a stunning U-turn. Admitting that different consumers exhibit different humours, he contradicts his premise of uniformity. He then drives another nail into his own coffin by admitting the effects of manners and opinions of the age and country on taste, another way of admitting that de gustibus non est disputandum. For all his analytical efforts, Hume ends up being of no help in elucidating how judgments of taste can compare from one consumer to another—a sine qua non for the ‘read one, read them all’ school of thought.
Not much has changed during the intervening two and a half centuries. The highbrows’ dismissals of the unread mass of popular fiction always come down to the same methodological sleight-of-hand: how do you know that all pulp fiction is formulaic if you have not read it? And how do you know it is pulp fiction in the first place? Reliance on others’ judgments of taste could only be justified to the extent that there existed phenomenologically transferable methods of literary comparison and rating. In their absence, one reader’s trash will always remain another’s treasure, and literary axiology always a matter of personal—which is to say, subjective—taste.
Critics can reliably say a number of things about a literary work, except whether it is good. T.S. Eliot had the presence of mind to laud The Great Gatsby even when the book sold just a quarter of the 75,000 copies that Fitzgerald predicted it would. This was the same Eliot who, as the commissioning editor for Faber, dismissed Orwell’s soon-to-be-classic Animal Farm as jejune and worthless. In fact, the most reliable thing that can be said about the cultural pyramid erected over the ages is that, in some ways, it is a house of cards. Stochastic regularities make it certain, after all, that some of the big names in the canon got there by luck rather than purely by talent.
Every field of study shows bias in favour of reporting statistically significant results (and underreporting corrections to the original faulty findings). In the sciences, this bias typically disappears when repeated studies fail because the original spike was a random outlier. In matters of taste we do not have anything like such a self-correcting mechanism. The point is not that Shakespeare was necessarily a lousy writer or Poe an intellectual lightweight. But to shrug off the randomness inherent in the zettabyte problem is to shrug off that there were—must have been— other Shakespeares and Poes lost in the same random shuffle of history that ossified into the canon.