Last week, the discerning users of the esports corner of Twitter dot com were presented with a pair of studies (I’m using that word in its most expansive sense) by the market research groups Newzoo and Onalytica. For those keeping score, Newzoo has been around the esports ecosystem for a while, whereas Onalytica had never shown any interest in esports before last week. In fact, as far as I can tell, its primary marketing strategy seems to be tweeting “top 100” lists about various internet subcultures and abusing hashtags for visibility.
I call attention to these two studies not because I’m interested in reporting on their conclusions about the “biggest” and “most important” personalities/teams/tournaments/games, etc. (as if those two things meant the same thing anyway). I mention them because it’s a chance to talk about the kinds of data we use to discuss size and significance in esports. I’ll put this plainly: Newzoo’s conclusions are insightful; Onalytica’s are idiotic. Among other curiosities (the generous term), Onalytica suggests that “culture” has the same “topic share of voice” as Microsoft. Kudos to anyone who can figure out what the hell that actually means.
Anyway, Onalytica’s report is a case of exceptional analytic incompetence, but it’s useful because it exaggerates, and thus lays bare, the ways in which data can be hugely misleading when gathered or analyzed poorly. Despite its appearance of and aspirations to “objectivity,” data is never neutral; it’s only as objective as its assumptions, and the assumptions of whatever algorithms it gets fed into. The kinds of metrics we use to talk about, say, the relative size of esports teams, communities, etc.—so things like prize pools, peak viewers, Twitter followers and the like—can all be framed in particular ways to generate widely divergent conclusions. Here’s an example: for many years, Major League Gaming arrived at its enviable viewership numbers by adding up the unique viewers from each day of its multi-day broadcasts and framing that sum as total viewership. A bit dishonest? Sure. But the overwhelming majority of data in esports is self-reported, and without something like the Nielsen ratings to proffer a nominally neutral number, it’s safe to say that, through various kinds of statistical chicanery, most companies will only trot out data that flatters them.
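The MLG trick is easy to reproduce with toy numbers. This is a sketch with entirely hypothetical figures; overlapping ID ranges stand in for fans who tuned in on more than one day:

```python
# Hypothetical viewer IDs for each day of a three-day broadcast.
day1 = set(range(0, 400_000))
day2 = set(range(0, 450_000))
day3 = set(range(50_000, 550_000))

# The flattering method: add up each day's unique viewers and call the sum
# "total viewership". A fan who watched all three days is counted three times.
summed_total = len(day1) + len(day2) + len(day3)   # 1,350,000

# An honest figure deduplicates viewers across the whole event.
true_uniques = len(day1 | day2 | day3)             # 550,000
```

Same broadcast, same fans, and the flattering method more than doubles the number.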
So what kinds of assumptions was Onalytica making, and what can we learn from their mistakes? Consider a pair of Twitter exchanges between Onalytica and Grant “guacguac” Zinn and Ben “Noxville” Steenhuisen, who rightly question the accuracy of the study’s findings. In response, Onalytica describes their method (sort of): “we compile a list based on a boolean search query. We then rank the influencers based on reach, relevance, and [sic] ressonance.” The results are tellingly odd, and oddly telling: @GamergyES (an esports festival that I would hazard no one living outside of the Iberian Peninsula has heard of) is ranked higher than @lolesports.
Oh, really? Tell me more, Onalytica.
I don’t have the space (or patience) to deconstruct every element of Onalytica’s dumpster fire of a précis, so I’ll focus on its dogmatic Twitter-centricity. On one level, this makes perfect sense. More influential organizations should have larger Twitter followings. But by that logic, Call of Duty is the biggest esport in the world: its most famous player, Nadeshot, boasts 1.8 million followers, compared to Bjergsen’s 810,000 and Arteezy’s paltry 229,000. But by just about any other measure (viewership, etc.), Call of Duty isn’t the biggest esport in the world. It’s not even close. So what are we to make of this discrepancy?
The answer is that different scenes have different cultures of social media, and any model that flatly equates Twitter followers with influence misses this subtlety entirely. Consider this: Twitter is essential to Call of Duty because the platform was a crucial site of organization and coherence during the professional scene’s intermittent dormancies between 2010 and 2014. (Also consider: Call of Duty has almost no presence on Reddit, unlike, say, Dota 2, for whose community Reddit is a central hub. I propose that none of this is coincidence.) Likewise, if you go back a few years, you’ll discover that the late Heroes of Newerth had far more presence on Facebook than on any other network. Social media growth is not perfectly parallel across networks; it concentrates and dissipates in relation to the particular histories and choices of a given community. Suffice it to say that Onalytica knows fuck all about esports history, and so it’s utterly incapable of making any useful point about esports today.
So is more data what we need? Something like the relative percentage of users from a given scene who use a particular social network might help clarify this kind of muddle. But it can also add more space for more confusion. And when you start to add in additional data like users’ age range (some ages are more influential—read, valuable—to advertisers) and geographic location (Indonesia has the largest Facebook community in the world outside of the U.S., but many companies don’t distribute there, diminishing users’ relative ‘worth’)—all of which shift across scenes and social networks—the whole enterprise becomes so hopelessly complex that it’s eventually hard to draw any clear conclusions at all. One of the great findings of organizational sociology is that the acquisition of new data very rarely resolves outstanding debates; it usually just renders them obsolete, even as it inaugurates new ones. Our fetishization of data as a culture (esports and otherwise) can prevent us from seeing the soft and hard limits of data’s utility, especially when the knee-jerk response to a bad study is “acquire more data!”
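The assumptions problem can be made concrete in a few lines. The numbers below are invented for illustration only; the point is that the same raw data yields opposite “influence” rankings depending on which metric the analyst decides matters:

```python
# Made-up figures for two accounts the article mentions; not real data.
accounts = {
    "@GamergyES":  {"followers": 30_000,    "engagement_rate": 0.12},
    "@lolesports": {"followers": 1_200_000, "engagement_rate": 0.01},
}

def rank(metric: str) -> list[str]:
    """Rank accounts by a single chosen metric, highest first."""
    return sorted(accounts, key=lambda a: accounts[a][metric], reverse=True)

by_reach = rank("followers")            # raw reach puts @lolesports first
by_resonance = rank("engagement_rate")  # engagement puts @GamergyES first
```

Neither ranking is “wrong” on its own terms; each just encodes a different, unstated assumption about what influence means.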
What we actually need, of course, is better data. And elegant data is rarely complex; usually, in fact, it’s rather simple. This is where Newzoo’s study is illustrative, in part because its scope is so modest. All it compares is the number of “esports hours” viewed for a given game on Twitch against the total number of hours viewed. What that tells us, simply, is the percentage of people interested in a particular game who are also interested in its professional scene. Dota 2 viewers, for example, are about twice as likely to be interested in Dota 2‘s professional scene as League’s viewers are to be into the LCS. There’s still a need to contextualize the numbers, of course—Dota 2’s numbers are skewed because the data was collected in August, when The International takes place and after which many players/streamers take a break—but it nevertheless reveals something about the different ways in which different viewers/players engage with a game. This is a modest point, and I think most who are interested in esports could probably come to it instinctively. But it’s still a good example of how good data doesn’t mean more data. Sometimes, it can mean less.
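Newzoo’s metric really is that simple to compute. The figures below are invented, chosen only to reproduce the rough two-to-one ratio described above, and are not Newzoo’s actual numbers:

```python
# Hypothetical Twitch viewing figures, in hours watched.
hours = {
    "Dota 2": {"esports": 40_000_000, "total": 80_000_000},
    "League": {"esports": 60_000_000, "total": 240_000_000},
}

def esports_share(game: str) -> float:
    """Fraction of a game's total viewed hours that were esports broadcasts."""
    g = hours[game]
    return g["esports"] / g["total"]

dota_share = esports_share("Dota 2")    # 0.50
league_share = esports_share("League")  # 0.25
```

Note that League can have more raw esports hours than Dota 2 and still come out behind on this metric, which is exactly why a ratio tells you something a raw total cannot.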