IEM Katowice PvT Blink: 64%, PvT Overall: 61%: a game-theoretic analysis

Two weeks ago, IEM Katowice showed off some sick games. My favorite was sOs’s Phoenix into Colossi into Carriers on Alterzim Stronghold, but the biggest news apparently was Blink Stalkers in PvT. After several crushing games by HerO and sOs, it really looked like Blink was imbalanced in this matchup. The most insightful analysis I saw was from bwindley, and now that I have had a chance to label data and crunch numbers, here’s my take on the situation with a few numbers, a little analysis, and some game theory.

The easiest point to make here is the raw win rates. In 33 PvTs, Protoss went Blink Stalkers in 11 of them. The overall PvT win rate was 20-13 for 61% (ref). In games where Protoss went Blink, the win rate was 7-4 for 64% (ref).

For comparison, in Spawning Tool, the overall PvT win rate is 804-767 for 51% (ref). In games where Protoss researched Blink before 6:00, the win rate is 71-58 for 55% (ref).
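As a sanity check, the percentages above follow directly from the win-loss records. A quick sketch, using only the records quoted in the two paragraphs above:

```python
# Win-loss records quoted above, as (wins, losses) pairs.
records = {
    "IEM PvT overall": (20, 13),
    "IEM PvT with Blink": (7, 4),
    "Spawning Tool PvT overall": (804, 767),
    "Spawning Tool PvT, Blink before 6:00": (71, 58),
}

for name, (wins, losses) in records.items():
    rate = wins / (wins + losses)
    print(f"{name}: {wins}-{losses} -> {rate:.0%}")
# IEM PvT overall: 20-13 -> 61%
# IEM PvT with Blink: 7-4 -> 64%
# Spawning Tool PvT overall: 804-767 -> 51%
# Spawning Tool PvT, Blink before 6:00: 71-58 -> 55%
```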

So Blink wins slightly more than normal, but it’s pretty dang close. One would hope that different strategies would have different win rates, or else the meta-game would have stagnated, with no strategy conferring an advantage over any other (more on this below).
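“Pretty dang close” can be made a bit more concrete: with only 11 Blink games at IEM, the samples are tiny. A rough two-proportion z-test (a standard normal approximation, not something from the original post) comparing the IEM Blink record against the Spawning Tool Blink record illustrates how little 11 games can tell us:

```python
import math

def two_proportion_z(w1, n1, w2, n2):
    """Normal-approximation z-score for the difference between two win rates."""
    p1, p2 = w1 / n1, w2 / n2
    pooled = (w1 + w2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# IEM Blink games (7-4, 11 games) vs. Spawning Tool Blink games (71-58, 129 games)
z = two_proportion_z(7, 11, 71, 129)
print(f"z = {z:.2f}")  # z ~ 0.55, well under 1.96, so not significant at the 5% level
```

In other words, the gap between 64% and 55% is easily explained by noise at these sample sizes.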

Spawning Tool and the philosophy of FiveThirtyEight

This past week, ESPN launched FiveThirtyEight, a website dedicated to data journalism. The lead for the site is Nate Silver, a statistician and writer, who most famously correctly predicted the winner of all 50 states in the 2012 US presidential election. Roughly, the site is dedicated to the use of quantitative methods in journalism across politics, economics, science, life, and sports.

Silver outlines a manifesto for the site, and I want to draw attention to a few points he makes. First, he leads with the point that his presidential prediction was not impressive by comparison to other models, but only by comparison to pundits. I think the framing of his argument is very important here because it points out the type of thinking that we’re used to. Second, he points out the spectrum along quantitative and qualitative approaches to journalism. Both types of analysis are important, but he sees quantitative as being under-represented, hence the creation of FiveThirtyEight. Third, he outlines an approach to journalism as collection, organization, explanation, and generalization. In particular, he criticizes the last two steps in conventional journalism. Explanation is often missing as journalists fail to properly attribute causation, and predictions (as part of generalization) are under-scrutinized and often inaccurate.

In-game scorekeeping for StarCraft?

So far, Blizzard has given the community amazing tools for doing replay analysis, and the community has compiled a lot of stats and built services to work with that data. Even so, I think there’s a gap in levels of analysis. At the top, we have aligulac, which compiles win-loss data into accessible predictions but doesn’t give much insight into playstyles or why the data is the way it is. At the bottom, we have ggtracker, which provides graphs and data for individual games and can compile stats from low-level mechanics, but is missing higher-level analysis.

In between is where I think theorycrafting reigns: strategy. What build orders work against what? Which players are the best at worker harass? Who has the best forcefield placement? These details lie below win-loss analysis, and although they may be extractable from the replay data, it takes some qualitative judgment to determine what constitutes specific tactics or plays. Even the best machine learning techniques (supervised learning) require labeled examples to learn from.