So one of the things that’s been plaguing the site for a while is that it’s, in fact, a rather slow beast. This is because storing all replay data in MongoDB leads to single documents (e.g. a replay) that are about 16 KB in size, if not larger, given the amount of data that gets yanked out of a replay. That means every time a query is made to construct a list of replays (e.g. the front page, or whilst browsing), it has to pull the entire replay out of the database.
This is pretty resource intensive, because some of the data in the replay that is used to display the lists (also referred to as ‘panels’) is still in its “raw” format. Things like the map are referred to only by their ID number, so for each panel the site needs to dip back into the database to pull out the icon, the name, the proper link, and so on. I changed that by pre-generating most of the data that is displayed in the panel and storing it separately inside the replay.
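The pre-generation step might look something like the sketch below. All the field names here are hypothetical (the real site’s schema may differ); the point is that the map ID gets resolved into display-ready values once, at save time, instead of once per panel render:

```python
# Stand-in for the maps collection (normally this lookup is another
# database query; here it's just a dict so the sketch runs anywhere).
MAPS = {
    7: {"name": "Shakuras Plateau", "icon": "shakuras.png", "slug": "shakuras-plateau"},
}

def build_panel(replay):
    """Denormalize the data a list panel needs into the replay itself."""
    map_info = MAPS[replay["map_id"]]
    replay["panel"] = {
        "map_name": map_info["name"],
        "map_icon": map_info["icon"],
        "map_link": "/maps/" + map_info["slug"],
    }
    return replay

replay = build_panel({"map_id": 7, "events": ["...lots of raw replay data..."]})
print(replay["panel"]["map_name"])  # the panel is now self-contained
```

The trade-off is classic denormalization: panels render without extra lookups, but if a map’s name or icon ever changes, the embedded copies need regenerating.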
MongoDB has this fun thing where you can query a collection and tell it to only return certain fields from each document, in this case the site-related data (likes, views, etc.) and the panel. The query went from taking 0.9 seconds to 0.07 seconds, or from 900 to 70 milliseconds: more than ten times faster than before. Strangely enough, the pick-a-few-fields-from-a-document style of fetching data is rarely mentioned as being so much faster, but in a way it makes sense, since far less data has to be read and shipped over the wire.
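To make the idea concrete, here is a rough re-implementation of field projection over plain dicts, so it runs without a database. The collection and field names are made up for illustration; with pymongo the equivalent would be along the lines of `db.replays.find({}, {"site": 1, "panel": 1})`, which asks the server to return only those fields rather than the full multi-kilobyte document:

```python
def project(doc, fields):
    """Return only the requested top-level fields, plus _id (as MongoDB does by default)."""
    keep = set(fields) | {"_id"}
    return {k: v for k, v in doc.items() if k in keep}

replays = [
    {
        "_id": 1,
        "site": {"likes": 4, "views": 120},
        "panel": {"map_name": "Metalopolis"},
        "events": ["...kilobytes of raw replay data..."],
    },
]

# Build the front-page list from slim documents: the heavy 'events'
# field is dropped, which is where the speed-up comes from.
panels = [project(r, ["site", "panel"]) for r in replays]
print(sorted(panels[0].keys()))
```

In the real thing the filtering happens server-side, which is the whole point: the raw replay data never leaves the database for list queries.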
So, the site got faster. Moar replays plox!