Heatmaps got an update, or upgrade, whichever you want to call it. The initial version had a few issues, the biggest being that generating the data set meant importing all the extracted packets and running them through a map/reduce job. Additionally, it recorded each individual location as a data point, and since most maps are 1000×1000 wide and locations are recorded as floating point numbers, this meant there could be up to a million positions recorded for each game.
The import job alone took between 15 minutes and half an hour, and after that the map/reduce job could take upwards of an hour to grind through the data. The resulting JSON file could end up at upwards of 10 MB. Downloading that every time you want to view a heatmap is a bit ridiculous.
So instead, heatmap data sets are now limited in size. Each map is divided into 100 cells – you may already know these, they’re the map grid (10×10 cells = 100). Each cell is then divided further into 10×10 subcells, which means every map now has 10,000 subcells. When heatmaps are generated, all coordinates and locations are converted to the subcell they fall in, and the subcell’s center coordinate is used as the data point. This means that for every map, there are at most 10,000 data points.
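To make the binning concrete, here’s a minimal sketch of that conversion, assuming a square map of 1000×1000 units and 100 subcells per axis (the function name and parameters are illustrative, not the actual implementation):

```python
SUBCELLS_PER_AXIS = 100  # 10 grid cells per axis, each split into 10 subcells

def to_subcell_center(x, y, map_size=1000.0):
    """Snap a raw (x, y) position to the center of its subcell."""
    step = map_size / SUBCELLS_PER_AXIS  # width of one subcell
    # Clamp so positions exactly on the far edge still land in the last subcell
    ix = min(int(x // step), SUBCELLS_PER_AXIS - 1)
    iy = min(int(y // step), SUBCELLS_PER_AXIS - 1)
    return ((ix + 0.5) * step, (iy + 0.5) * step)

# A position anywhere inside a subcell maps to that subcell's center:
print(to_subcell_center(123.4, 987.6))  # (125.0, 985.0)
```

Since many positions collapse onto the same center, the data set is bounded by the number of subcells rather than the number of recorded positions.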
The added benefit is that this conversion and storage process is easy enough to do while processing a replay, which means the heatmap datasets are always up to date and can now be requested through the API, which can do some juju to them such as including or excluding certain match types from the final data set.
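Doing this during replay processing amounts to keeping a running count per subcell instead of storing raw positions. A rough sketch of that idea, with hypothetical names (the real pipeline and its data structures may differ):

```python
from collections import Counter

def accumulate(counts, positions, map_size=1000.0, subcells=100):
    """Add each raw (x, y) position to its subcell's running count."""
    step = map_size / subcells
    for x, y in positions:
        # Clamp edge positions into the last subcell, as in the binning step
        ix = min(int(x // step), subcells - 1)
        iy = min(int(y // step), subcells - 1)
        counts[(ix, iy)] += 1
    return counts

counts = Counter()
accumulate(counts, [(5.0, 5.0), (7.0, 3.0), (995.0, 995.0)])
# Two positions fall in subcell (0, 0), one in (99, 99)
```

Because the counter never grows past the number of subcells, each processed replay just merges into the existing data set, and the API can filter or combine these counters without re-running a batch job.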
Additionally, since the data set is more limited in its scope, the resulting heatmap also looks better.
So there’s that…