In the second part of her series, Dr Darina Goldin of Bayes Esports discusses another idea for a product she’d like to see in the esports ecosystem. None of these are easy to build, she writes, but each would bring tremendous value to the community if accomplished.
At the start of most esports games, you are asked which kind of character you want to play from the available pool. Depending on the title, there can be anywhere from a dozen to over a hundred possibilities, each with its own strengths and weaknesses. In team-based games in particular, it becomes important to create a balanced selection – you want your team to have somebody who can deal damage, somebody who can heal, somebody who can provide ranged support, and so on.
This is of course especially true for professional matches, where team play is everything. The so-called ‘draft phase’ is crucial for the outcome of the match. And while anecdotal knowledge exists about which setups perform best, it is hardly ever backed by facts.
Pick and ban
MOBAs (multiplayer online battle arena games) in particular make this analysis difficult. The ‘pick and ban’ phase – the time at the start of the map when teams can pick champions for themselves or ban them from the shared pool of options – has long since become its own game within the game. And with good reason: especially when everything else is equal, your selection can make or break your game. It’s meaningful in many ways:
● Is there currently an overpowered champion and was one team able to select it?
● Does the team prefer certain champions and if so were they able to get them?
● Are there champions or groups of champions that can effectively counter the opposing team?
● Have the teams selected early-game or late-game champions (that is, champions that can effectively destroy the opposition while everyone is still weak, or ones that become very strong towards the end of the match)?
● Has one team picked a champion which the enemy would really like, effectively blocking them from that champion?
Certain champions are always coveted because they perform better than others. There are also some setups that are very situational or tailored to outplay one particular team. Finally, some setups are simply forced by the opposition banning the heroes one actually intended to pick.
As analysts we can attach a value to each pick or ban and make suggestions based on that.
When the draft phase starts, we would like to say what the best choice for the first team is. Once this choice is made (the first hero picked or banned), we can update our beliefs about the best option for the other team.
Sometimes we will end up with just one very strong candidate, while other times several viable options will be available. This is what a data scientist on an esports team would prepare.
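The workflow above can be sketched in code. This is a toy illustration only – the champion names, values and synergy bonuses are entirely hypothetical, standing in for quantities a real model would learn from match data.

```python
# Toy sketch of a value-based draft assistant. All names and numbers are
# hypothetical placeholders for values a real model would learn from data.
def suggest_pick(values, synergies, available, our_picks):
    """Score each available champion by its base value plus its pairwise
    synergy with champions already picked; return the best-scoring one."""
    def score(champ):
        base = values.get(champ, 0.0)
        syn = sum(synergies.get(frozenset((champ, p)), 0.0) for p in our_picks)
        return base + syn

    return max(available, key=score)

values = {"Marksman": 0.6, "Healer": 0.4, "Tank": 0.5}   # per-champion value
synergies = {frozenset(("Marksman", "Healer")): 0.3}     # pairwise bonuses
available = {"Marksman", "Healer", "Tank"}

first = suggest_pick(values, synergies, available, our_picks=[])
# After each pick or ban the pool shrinks and the remainder is re-scored:
available.discard(first)
second = suggest_pick(values, synergies, available, our_picks=[first])
```

Note how the second call returns a different answer than a naive value ranking would: once the Marksman is picked, the Healer’s synergy bonus lifts it above the Tank. That re-scoring after every action is the belief update described above.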
But at Bayes Esports we don’t want to just suggest the right picks. We also want to analyse an existing composition. After each pick or ban, the win probability of each team needs to be re-evaluated. This is highly relevant for betting.
Since the draft is crucial to a team’s win probability, it should be reflected in the betting odds too. A team not getting its optimal hero selection should lower its overall chance of winning the map.
Ideally this evaluation method would be highly customisable and able to learn from a team’s past matches. And of course we want it to be easy to adapt once a patch has come out. After all, a patch can significantly disrupt the in-game mechanics, changing the value of any given champion and triggering a new meta-game.
Looking for a solution
It seems like an easy problem to solve, especially given the large pool of historical data that Bayes Esports has access to. Couldn’t we just compute win statistics from this data and use them to evaluate future matches?
While this sounds like a good idea, the maths behind it don’t work out. There are currently 154 champions in League of Legends and 120 in Dota 2. In each professional match, ten of these characters will be picked and ten more will be banned. This leaves us with trillions of potential team combinations. Even if we wanted as few as 20 matches per draft, it’s very obvious that we would never get enough data.
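A quick back-of-the-envelope calculation shows just how bad the combinatorics are. The sketch below ignores the alternating pick/ban order (which only multiplies the count further) and simply counts unordered choices: five picks per team, then ten shared bans.

```python
import math

# Rough count of distinct League of Legends drafts: 154 champions,
# two teams pick 5 each, then 10 more are removed by bans.
# Ordering effects in the real draft would only inflate these numbers.
POOL = 154
team_a = math.comb(POOL, 5)        # compositions for the first team
team_b = math.comb(POOL - 5, 5)    # compositions for the second team
bans   = math.comb(POOL - 10, 10)  # ways to choose the shared bans
total  = team_a * team_b * bans

print(f"{team_a:,} compositions for one team alone")
print(f"{total:.2e} full drafts")  # far beyond 'trillions'
```

Even a single five-champion team already has hundreds of millions of possible compositions, so gathering 20 historical matches per full draft is hopeless.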
Moreover, some combinations are never played in the first place – you will be hard-pressed to find a team consisting of all support heroes, for example. Some champions get picked in 40% of matches, others only once or twice in an entire tournament. Is that due to their clear downsides, or does a team picking them have an innovative strategy? When a team finally decides to play with five (traditionally) support heroes, how will you evaluate their decision if you’ve never seen it played before?
This is a problem that many data scientists in esports have been trying to solve for a long time now. I do believe that a solution is possible – but just like in my last column, it involves being able to generate more data, as much as is needed.
If we had agents trained to play League of Legends or Dota 2 at a professional level, we could simulate the match again and again, assigning random team combinations to both sides. It’s a lot of work – the agents would need to be retrained after every patch, too – but maybe we could finally answer the question of whether Renekton is better than Ornn.
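The plumbing around such agents might look like the skeleton below. Since no trained agents exist, `simulate_match` is a random stub here – the point is only the data-generation loop: draw random drafts, simulate, and tally per-champion win rates.

```python
import random

# Skeleton of the simulation idea. simulate_match is a coin-flip stub;
# with real trained agents it would play out a full game between the drafts.
CHAMPIONS = [f"champ_{i}" for i in range(154)]

def simulate_match(team_a, team_b, rng):
    """Placeholder for an agent-driven simulation; returns the winning side."""
    return "A" if rng.random() < 0.5 else "B"

def estimate_win_rates(n_matches, seed=0):
    """Sample random 5v5 drafts, simulate each, and tally win rates."""
    rng = random.Random(seed)
    wins, games = {}, {}
    for _ in range(n_matches):
        draft = rng.sample(CHAMPIONS, 10)   # random draft, no bans for simplicity
        team_a, team_b = draft[:5], draft[5:]
        winners = team_a if simulate_match(team_a, team_b, rng) == "A" else team_b
        for c in draft:
            games[c] = games.get(c, 0) + 1
        for c in winners:
            wins[c] = wins.get(c, 0) + 1
    return {c: wins.get(c, 0) / games[c] for c in games}

rates = estimate_win_rates(1000)
```

With a real agent in place of the stub, the loop could generate exactly the data that history never will: millions of matches covering drafts no professional team has ever dared to play.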