(07-20-2010, 06:58 PM)dyin Wrote: Except that this means
a) you have to compute the cell(s) for each map individually
b) you have to convert/compare the floats to find a matching cell
c) you have to update a pretty huge array. Let E, F, G be the width, depth, and height of a map, each divided by D. Then you need an array of size 64xExFxG, which, unless D is rather large, becomes pretty big. Yet you cannot pick D large, or else you'll lose the accuracy to distinguish 'camps on a random spot' vs 'waits to get mega health' vs 'somewhere between mega health, rocket launcher and nex gun'.
Checking the player positions every X seconds would mean 64*(1/X) updates per second at most, each with all of b), so it is still computationally expensive and I fear it would cause server lag. And as with the 3D resolution, you also need a high enough time resolution.
ahhhh, at last, we go into details
a) Yep, this task could require some manual work. I don't know; maybe mappers could help us. We should keep in mind that the cell mapping does not have to exactly match the map. Indeed, it is enough to have a cell mapping larger than the actual map; you will simply have some cells which will never be occupied.
c) I get your point, but here you are considering the most complete system. Instead you could:
- compute the statistics only on the client side, so your 64 becomes 1 and you only trace your own statistics. Nothing is lost, because if you later want the statistics of all players, each client only has to send its individual stats to the server, which in turn builds the heat map by averaging all of them.
If you want to keep it on the server side, you could start by only tracing some 1v1 or 2v2 games, which would already be very useful.
- do all the statistic computation off-line. To do this, you can use the demo recording system; time constraints then become non-critical.
- only trace the surface occupancy, not the volume occupancy. Then G = 1.
- X does not need to be 1 ms. I'm pretty sure you can choose a reasonably lower sampling rate and still extract representative information.
- Of course D comes with a tradeoff. Again, D does not need to equal 100000 to be significant.
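The client-side idea above (each player traces only his own matrix, and the server averages them) can be sketched in a few lines. This is only an illustration under my own assumptions; the function name and the plain-list representation of the matrices are hypothetical, not anything from the game code:

```python
# Hypothetical sketch: merge per-client occupancy matrices into one
# heat map by cell-wise averaging, as suggested above.

def merge_heatmaps(client_maps):
    """Average a list of equally sized 2D count matrices, cell by cell."""
    n = len(client_maps)
    rows = len(client_maps[0])
    cols = len(client_maps[0][0])
    merged = [[0.0] * cols for _ in range(rows)]
    for m in client_maps:
        for i in range(rows):
            for j in range(cols):
                merged[i][j] += m[i][j] / n
    return merged
```

So the server never has to track 64 players in real time; it just sums up small matrices that arrive whenever a client reports.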
b) Taking into account the restrictions of c), the computational load should remain quite reasonable.
Let's take an (almost) pure 2D map like eggandbaccon, with X = 0.1 s and E = F = 100.
Given an (X,Y,Z) position, you basically have to compute 2 divisions (plus some cheaper operations) to determine the cell containing it.
Total: you have to manage a 100x100 matrix and perform 20 divisions per second. Any 90's computer can do this.
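To make the "2 divisions per sample" claim concrete, here is a minimal sketch of the cell lookup and matrix update. The map bounds, cell size D, and function names are illustrative assumptions, not values from any actual map:

```python
# Sketch of the cell lookup described above: two divisions per sampled
# position. All constants below are hypothetical example values.

D = 50.0                           # cell edge length (world units)
MIN_X, MIN_Y = -2500.0, -2500.0    # assumed map bounding-box origin
E = F = 100                        # grid dimensions (100x100 matrix)

heatmap = [[0] * F for _ in range(E)]

def cell_of(x, y):
    """Two divisions (plus floor/clamp) give the cell indices for (x, y)."""
    i = int((x - MIN_X) / D)
    j = int((y - MIN_Y) / D)
    # clamp in case the position falls just outside the declared bounds
    # (the cell mapping may be larger than the actual map, so this is rare)
    i = max(0, min(E - 1, i))
    j = max(0, min(F - 1, j))
    return i, j

def sample(x, y):
    """Called every X = 0.1 s per traced player; one increment per sample."""
    i, j = cell_of(x, y)
    heatmap[i][j] += 1
```

With one traced player at 10 Hz that is indeed ~20 divisions and one array increment per second; even 64 players would only be ~1280 divisions per second.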
For 3D maps containing several floors, this approach has to be adapted, but I don't see anything impossible.
What do you think dyin?