In one of the most famous single-season performances in Major League Baseball history, the 2002 Oakland Athletics won 103 games, tied for the most in baseball that year, on a paltry payroll of roughly $40 million. Their secret? Sabermetrics—that is, the use of statistical analysis, rather than subjective judgement, to evaluate players’ relative strengths and weaknesses. The value of this kind of analysis is rather self-evident now, but, in 2002, the notion that the right statistics were more reliable than the best scouts was heresy. Even today, some remain wary of teams that use sabermetrics. As the old-timers say, they’re not playing baseball; they’re playing moneyball.
Moneyball (2003), of course, is the name of Michael Lewis’ famous account of that fateful season. One of the book’s major, if often implicit, themes is figuring out just how much of the human body’s performance can be neatly quantified and calculated before being fed into a statistical model (what would Foucault have said about biopower and Moneyball?). In esports, though, this question seems rather beside the point: competitive videogames do not need to be made quantifiable because they are always already rooted in quantification. For obvious reasons, sabermetrics (or something like it) in esports doesn’t have the same kind of narrative, revenge-of-the-nerds-type schtick that gave Moneyball its zing, but it’s worth revisiting the book’s central premise in the context of esports: just how much can statistical analysis improve a team’s performance?
On this point, the current state of the analyst in Dota 2 (2013) is an object lesson. Two weekends ago, 16 teams gathered in Manila, Philippines, for the Manila Major, one of the largest professional Dota 2 tournaments of the year. Roughly half of the teams competing brought a dedicated analyst with them, a significant increase from even a year ago. Distinct from a team’s coach, who often is or was a professional player and is tasked with making high-level strategic decisions, the analyst offers a deliberately narrow point of view. As Murielle “Kipspul” Huisman, first-time analyst for the Malaysian Dota 2 team Fnatic, notes, “a coach is basically the team’s sixth player… an analyst’s task, on the other hand, is to equip the players with the information they need to do battle.”
That’s the theory, at least. Here, in her words, is Huisman’s craft in practice:
“Let’s take Fnatic’s upper bracket win against LGD Gaming! During my research, I noticed a ward that LGD never managed to find and deward. [That ward] worked for us too. I also knew one of LGD’s signature smoke [of deceit, which confers temporary invisibility on the user] timings and showed the players [on Fnatic]. Lo and behold, LGD smoked during that exact window and we dodged their attack.”
These are modest strategic advantages, but, over time and in the aggregate, they can convert a close game into a win. In the early days of professional Dota 2 competition, this kind of research was by default the responsibility of players, and especially a team’s captain. But as professional Dota 2 has grown and teams have become better-funded and the competition more intense, the means and need for a dedicated analyst have slowly brought the position into being. As another prominent analyst, Trent “MotPax” MacKenzie, who works with the Euro-American compLexity Gaming, recalls, “My first talk with [team captain] Swindelz came down to: ‘we need to get better at the game still, mechanically etc. We don’t have time to research all these other teams.’”
Though Huisman thinks that every team will eventually employ an analyst, at the moment, not every team is convinced that an analyst is worth the expense (there’s no one formula for compensation, but Huisman notes that she gets a flat fee plus a performance-based bonus, while MacKenzie is paid by the hour). In some ways, their skepticism might be well-founded: at the Manila Major, you’d be hard pressed to find any real correlation between a team’s placement and whether or not it brought an analyst along. Huisman’s Fnatic took a respectable fifth place, while compLexity, despite the guidance of MacKenzie, who is widely considered to be one of the best in the business, was among the first teams eliminated. At the same time, neither of the teams that made it to the grand finals, OG and Team Liquid, used a dedicated analyst.
There’s a significant dissonance, then, between the plainly beneficial information an analyst can offer and the ability of teams to transmute that information into victory. The origin of this incongruity isn’t hard to identify: Dota 2 is an immensely complex game, and over the course of a typical match, players will make thousands of in-game decisions in an extremely fluid environment. So, by necessity, there’s only so much an analyst can provide useful information about (Huisman notes that it’s “very easy to overload a team with information and make them less effective instead of empowering them”). What’s more, the massive number of decisions made by players in an extremely dynamic game often makes it hard to know which decisions were, in fact, the most significant. Superior information is never a bad thing, of course, but neither is superior execution, preparation, or brute individual skill. Would Fnatic have actually lost to LGD if Huisman hadn’t identified that smoke timing? Or did this slight advantage combine with many others to eventually yield a victory? Or was it utterly inconsequential? And maybe it was, in fact, a singular, game-winning play—those sometimes happen too.
But here’s the thing: in a different version of this same match, it could have been any of these, which is part of what makes Dota 2 so thrilling to watch and so challenging to analyze. What this speaks to, in other words, is the importance of thinking not just about causality in Dota 2, but also conditionality. Consider this: it’s relatively uncontroversial to say that, under the right circumstances, Fnatic’s foreknowledge of LGD’s smoke timing could have been a game-changing moment. Under certain circumstances, Fnatic might not have dodged the attack, but confronted it directly, setting up a (potentially) game-winning fight. Yet the circumstances under which this would have been the “right” decision depend on a huge array of factors: the gold and experience differential, both teams’ item timings, the status of their cooldowns, the relative strength of their heroes at that moment in the game, lane equilibrium, and… well, you get the idea. My point regarding the analyst is that the information they provide helps produce the conditions of opportunity; determining the best response to those conditions is a much trickier question, better asked (and answered) in the moment by the team itself.
One reason sabermetrics has caught on in baseball far more than in any other professional sport is the length of its regular season—162 games, compared to 16 in the NFL. Baseball is notoriously stable—a good team might only win 60 percent of its games—and so necessitates a very long season to determine which teams are, in fact, better (compare this to Cam Newton’s 2015/16 Carolina Panthers, who, at 15-1, won over 90 percent of their games, a percentage that’s simply unthinkable in baseball). One might argue that stats, for better or worse, are more useful in baseball because it’s substantially less dynamic than, say, the systematic chaos of football—simply put, there are fewer variables in play at a given moment, and, as a result, each data point can matter more. But, on a somewhat subtler level, the length of the baseball season also means there are more chances for each data point to matter.
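The season-length intuition here is really a claim about sample size, and it can be made concrete with a bit of back-of-the-envelope statistics. Here is a minimal sketch in Python (the numbers are purely illustrative; no real team’s record is being modeled) of how the uncertainty in an observed win rate shrinks as a season gets longer:

```python
import math

def win_rate_std_error(true_win_prob: float, games: int) -> float:
    """Standard error of an observed win rate, modeling each game
    as an independent coin flip won with probability true_win_prob."""
    return math.sqrt(true_win_prob * (1 - true_win_prob) / games)

# A "good" team that truly wins 60 percent of its games:
nfl_season = win_rate_std_error(0.6, 16)    # ~0.12, i.e. about 12 points of noise
mlb_season = win_rate_std_error(0.6, 162)   # ~0.04, i.e. about 4 points of noise
print(f"16-game season:  +/- {nfl_season:.3f}")
print(f"162-game season: +/- {mlb_season:.3f}")
```

Over a 16-game season, in other words, a true .600 team’s record routinely wanders a dozen points in either direction, enough to look like a .500 club or a juggernaut; over 162 games, that noise shrinks to about four points, and the standings start to mean something.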
So, too, in Dota 2. There’s little doubt that the right piece of information can tip the balance of a game, even if in most games it’s impossible to know for sure. Let’s say that, in one out of 25 games, Huisman’s sneaky ward really does determine victory or defeat; perhaps in 10 of those games, it has no tangible effect, and in the remaining 14, it’s impossible to know for sure. Those aren’t great odds, but, eventually, the presence of a competent analyst will make possible victories that wouldn’t otherwise happen. The question is whether or not this happens often enough to make a difference, especially because it’s possible to be eliminated from a Dota 2 Major tournament in as few as five games (the fate compLexity suffered at Manila). The strongest argument against the analyst is simply that bits of game-changing information are too infrequent to be of much practical use. Maybe there really is no such thing as Moneyball in Dota 2.
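Those hypothetical odds can be pushed one step further. Assuming a given piece of analyst intel decides any single game with probability 1/25 (the made-up figure from above, not a measured rate), a few lines of Python show how differently that edge plays out over a short elimination run versus a long season:

```python
def chance_info_decides_a_game(per_game_rate: float, games: int) -> float:
    """Probability that analyst-provided information decides at least
    one game, treating each game as an independent trial."""
    return 1 - (1 - per_game_rate) ** games

# Hypothetical 1-in-25 chance that a piece of intel swings a given game:
rate = 1 / 25
short_run = chance_info_decides_a_game(rate, 5)    # roughly 18%
long_haul = chance_info_decides_a_game(rate, 50)   # roughly 87%
print(f"5-game elimination run: {short_run:.0%}")
print(f"50 games in a season:   {long_haul:.0%}")
```

Under these invented numbers, the analyst’s contribution is nearly invisible across a five-game elimination, where it decides nothing about four times out of five, but close to inevitable across fifty games—which is the Moneyball logic in miniature.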
But by that logic, we should also be wary of the claim that “the teams with analysts didn’t do any better at the Manila Major” for a familiar statistical reason: there simply aren’t enough data points to make that judgement well. A more telling evaluation of the analyst’s efficacy would track a team over many, many games, through multiple patches and metagames. And, in fact, last weekend, both Huisman and MacKenzie reprised their analytic roles with Fnatic and compLexity at ESL One Frankfurt 2016, notching another half-dozen or so games (though neither team did especially well).
So can statistical analysis improve a team’s performance? The most frustrating answer is also the most truthful: maybe, sometimes, it depends. Occasionally, the stars align and a single ward can make all the difference.