For the Record: World of Tanks Client Analysis (Part 4)
Part 1: http://ftr.wot-news.com/2014/07/18/w...ient-analysis/
Part 2: http://ftr.wot-news.com/2014/07/20/w...alysis-part-2/
Part 3: http://ftr.wot-news.com/2014/07/22/w...alysis-part-3/

Author: Thiemo Jung

Network and other community requests

This part covers multiple topics, all requested by users of the WoT community. First I will cover the network traffic, which I recorded with Wireshark. The network part is mostly reasoning about the frequency and size of the packets, looking at their undecoded content, and some guessing. The other user requests are: behavior when the process priority gets changed, behavior when minimized, and rendering of the new maps (in this case Kharkiv). Some people requested an analysis of the servers; unfortunately this is not possible. I have the same access to the servers as everyone else and can only use tools to view things from the client side, which allows only a very limited view of the server structure.

Network

On startup, the client contacts http://rss.worldoftanks.eu (and probably .asia for the Asian, .com for the US, .ru for the Russian client), with an added language sub-path (e.g. /de for German). The requested RSS feed contains only one entry, a link to the World of Tanks page. I assume this will contain some patch-day related data, but that is just guessing.

After you click the login button, the client does a DNS lookup for all related login servers (each cluster seems to have its own), pings them over UDP multiple times, and then sends its login data to one of them (probably the one with the lowest ping and/or lowest user count). On success, the client receives a blob of data, probably the address of the server node the client gets transferred to, plus some extra data, probably for the transfer handshake. Immediately after that, the client sends data to its assigned cluster server node. This one UDP port (UDP is used for speed instead of TCP) handles all data transfers and stays open until the client is closed.
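The login-server selection described above (probe every candidate over UDP, pick the cheapest one) can be sketched roughly like this. Note this is my own illustration: the probe payload, port, and selection rule are assumptions, not Wargaming's actual protocol.

```python
import socket
import time

def udp_ping(host, port, payload=b"ping", timeout=1.0, attempts=3):
    """Send a few UDP probes and return the best round-trip time in
    seconds, or None if no reply arrived. The payload is a placeholder;
    the real probe format is undecoded."""
    best = None
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        for _ in range(attempts):
            start = time.monotonic()
            try:
                s.sendto(payload, (host, port))
                s.recvfrom(1024)
            except OSError:          # timeout or unreachable: skip this probe
                continue
            rtt = time.monotonic() - start
            if best is None or rtt < best:
                best = rtt
    return best

def pick_login_server(rtts):
    """Given {server: rtt_or_None}, pick the reachable server with the
    lowest round-trip time, or None if nothing answered."""
    reachable = {srv: rtt for srv, rtt in rtts.items() if rtt is not None}
    return min(reachable, key=reachable.get) if reachable else None
```

In practice the client would feed the results of `udp_ping` for each DNS-resolved login server into `pick_login_server`; servers that never answer simply drop out of the candidate set.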
After the transfer is complete the client again receives a blob of data, probably containing the entire account data set. On every tick (20 Hz) data is sent back and forth between client and server; this seems to be a keep-alive and/or ping mechanic. The client appears to send this data set every time, regardless of whether it is in hangar or battle mode.

At the start of a battle, the client receives multiple blobs containing all the data it needs to set up the initial state of the battle mode: the data about all participants, their tanks and spawn positions. While the countdown ticks down, the client sends its standard keep-alive packet and the server sends data back, with some parts always the same and other parts different. I assume the part that is always the same is some sort of delta-compressed state of each tank, and the differing data is info about the clock, the clients that have finished loading, and other data that needs to be kept in sync.

After the countdown reaches zero, the client starts to send an additional packet almost every tick, always with the same size. This is probably the processed user input in some form, likely the state (tank position, tank rotation, turret and gun direction) the client wants to be in. The data sent back from the server gets larger too, because every tank is now user-controlled and changes its state every tick. The average size of each packet sent from the server works out to about 7 to 10 bytes per tank. This is surprisingly low, which indicates a well size-optimized net protocol. With this low number, I assume that only visible tanks are sent to the client (this would also make cheating in that regard impossible).
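To see how a per-tank update can fit into 7 to 10 bytes, here is one hypothetical packing: quantize the position to the map grid and squeeze each angle into a single byte. The field layout, map size and precision are my assumptions for illustration only; the real protocol is undecoded.

```python
import struct

MAP_SIZE = 1000.0  # metres; assumed map edge length for quantization

def pack_tank_state(tank_id, x, y, hull_deg, turret_deg, gun_deg):
    """Pack one tank's state into 8 bytes: 1-byte id, two 16-bit
    quantized map coordinates, and three 8-bit quantized angles.
    A hypothetical layout consistent with the observed packet sizes."""
    qx = int(x / MAP_SIZE * 65535)          # position: ~1.5 cm resolution
    qy = int(y / MAP_SIZE * 65535)
    def qa(deg):                            # angle: ~1.4 degree resolution
        return int(deg % 360 / 360 * 255)
    return struct.pack("<BHHBBB", tank_id, qx, qy,
                       qa(hull_deg), qa(turret_deg), qa(gun_deg))

packet = pack_tank_state(7, 512.0, 250.0, 90.0, 45.0, 10.0)
print(len(packet))  # 8 bytes per tank
```

Eight bytes lands right in the observed 7-10 byte range, and delta-compressing against the previous tick (only sending fields that changed) would push the average even lower.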
It seems the network protocol is straightforward, without different variations for projectiles and other objects, unlike in some other games (e.g. Halo: Reach has special protocols and prioritisation for grenades and projectiles). This is a very solid approach that has been used for ages (see the Quake 3 source code for an example). But beware, some of this is deduced from experience I've gained through network programming and studying other code bases (like the Quake 3 source code), so take this analysis with a grain of salt.

As an interesting fact I noticed during my rendering test sessions for the previous parts: when I took a frame snapshot, which took several seconds, my ping went up to 999 and the game never recovered; I had to kill and restart it to continue the match (I did this during the countdown, no worries :)). This indicates that the network is handled in the same thread as the rendering, which is a common design. They could move it to a separate thread; the question is whether it is worth the trouble. Moving code from single threading to multi threading can cause many synchronization issues that eat up all the possible performance gain and can even degrade performance.

Effects Of Process Priority

By changing the process priority from normal to high, I gained less than 5% fps in both the old and the new rendering mode. This is probably very situational; it can help especially if your system is at nearly 100% CPU usage. It will not help if your system is capped by your graphics card.

Behavior When Minimized

When minimized, WoT stops rendering, resulting in 0% graphics card usage; CPU usage drops to 1-2%, only processing the network so that the client does not get disconnected. This is even less than fmod uses to mix the audio. So you can safely minimize it and it will not burn down your system.

Rendering Of New Maps (Kharkiv)

I logged a replay of Kharkiv with the new renderer.
I did not notice any significant differences, but this is hard to spot in that kind of log. I believe they make better use of Umbra (it seems to be used on some maps), because of the many buildings and other obstacles that occlude large parts of the map when rendered.

Additional Comments On The New Rendering System

One thing I wondered about at first: why are they using a deferred rendering method for the new rendering system? In battle mode there are no real dynamic light sources, and if there are any, you can count them on one hand, so there is little benefit in using such an expensive technique to render the image. It would probably be simpler and faster to use a forward renderer.
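A back-of-the-envelope bandwidth model illustrates the point. All numbers here (resolution, overdraw, G-buffer size) are my own assumptions, not measured BigWorld figures: deferred shading pays a fixed, fat G-buffer write plus a re-read per light, which only amortizes once the light count gets high.

```python
# Rough per-frame memory-bandwidth model, in megabytes (assumed numbers).
PIXELS = 1920 * 1080
OVERDRAW = 2.0        # assumed average fragments rasterized per pixel
GBUFFER_BYTES = 16    # assumed: packed normal, albedo, depth, specular

def forward_mb(lights):
    # Single pass: all lights are applied inside the shader loop, so the
    # bandwidth is just the 4-byte colour writes and does not grow with
    # the light count (the extra cost per light is ALU, not memory).
    return PIXELS * OVERDRAW * 4 / 1e6

def deferred_mb(lights):
    gbuffer_write = PIXELS * OVERDRAW * GBUFFER_BYTES
    light_reads = PIXELS * lights * GBUFFER_BYTES  # full-screen read per light
    colour_write = PIXELS * 4
    return (gbuffer_write + light_reads + colour_write) / 1e6

# With the handful of lights seen in battle mode, deferred moves far more data.
for n in (1, 5):
    print(n, round(forward_mb(n), 1), round(deferred_mb(n), 1))
```

In this model even a single light makes the deferred path move several times the data of the forward pass, which matches the intuition that deferred shading pays off mainly on scenes with many dynamic lights.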