Pieter-Jan Briers 2898f5396f Account for windows time period latency in Lidgren.
1. Set timeBeginPeriod(3) on the server to reduce scheduler latency in the Lidgren thread.
2. Add 16ms of guaranteed lag bias to the client's prediction calculations to account for scheduler latency.

Both of these changes account for how the Windows scheduler seems to handle time periods in relation to socket polling. See this Discord conversation for the background (details below as well): https://discord.com/channels/310555209753690112/770682801607278632/798309250291204107

Basically, Windows has this thing called time periods, which determines the precision of sleep operations and such. By default it's ~16ms, so a sleep will only be accurate to within 16ms.
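
To see what that means in practice, here's a minimal sketch (hypothetical, not code from this commit) that measures it:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class SleepResolution
    {
        public static void Main()
        {
            var sw = Stopwatch.StartNew();
            Thread.Sleep(1); // ask for 1ms
            sw.Stop();
            // On the default Windows time period this typically prints
            // ~15ms, not 1ms; after timeBeginPeriod(1) it gets close to 1ms.
            Console.WriteLine($"Sleep(1) took {sw.Elapsed.TotalMilliseconds:F2}ms");
        }
    }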

Problem: Lidgren polls the socket with a timeout of 1ms.

The way Windows seems to handle this is:
1. If a message comes into the socket, the poll ends immediately and Lidgren can handle it.
2. If nothing comes in, the poll blocks for the whole 16ms time period instead of the requested 1ms (see the sketch just below).
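
A quick way to observe case (2) on Windows (a hedged sketch, not code from this commit or from Lidgren):

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Net.Sockets;

    class PollTimeout
    {
        public static void Main()
        {
            var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
            socket.Bind(new IPEndPoint(IPAddress.Loopback, 0));

            var sw = Stopwatch.StartNew();
            // Poll takes microseconds, so 1000 = the 1ms timeout Lidgren asks for.
            // The socket is idle, so on default Windows timer settings this
            // blocks for a full ~16ms time period instead of 1ms.
            socket.Poll(1000, SelectMode.SelectRead);
            sw.Stop();
            Console.WriteLine($"Poll(1ms) blocked for {sw.Elapsed.TotalMilliseconds:F2}ms");
        }
    }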

Oh yeah, and Lidgren's thread needs to keep pumping at a steady rate or else it *won't flush its send queue*. On Windows it seems to pump at only 65/125 Hz. On Linux it runs at ~950 Hz as intended.
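
Here's that pump loop as a simplified model (a sketch of the idea only, not Lidgren's actual code). The send queue is only flushed once per iteration, so if each idle 1ms poll gets rounded up to a 16ms time period, outgoing sends go out at only ~60Hz:

    using System.Net.Sockets;

    class NetworkPump
    {
        private readonly Socket _socket;
        private volatile bool _running = true;

        public NetworkPump(Socket socket) => _socket = socket;

        public void Stop() => _running = false;

        public void Run()
        {
            while (_running)
            {
                if (_socket.Poll(1000, SelectMode.SelectRead)) // 1000us = 1ms
                    ReadIncomingPackets(); // data pending: Poll returned instantly
                FlushSendQueue();          // paced by however fast this loop spins
            }
        }

        private void ReadIncomingPackets() { /* receive + dispatch */ }
        private void FlushSendQueue() { /* send queued outgoing packets */ }
    }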

Now, the worst part is that (1) causes Lidgren's latency calculation to always read 0 over localhost, instead of the ~30ms it SHOULD read (client and server both on localhost).

That ~30ms of unaccounted-for delay is, in the worst case, enough to cause prediction undershoot and make messages arrive too late. Yikes.

So, to fix this...

On the server we just decrease the Windows time period (the timeBeginPeriod(3) mentioned above) and call it a day. Screw your battery life; players don't run local servers anyways.
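
The server fix boils down to a winmm call; a minimal sketch of the idea (the commit's actual code may differ):

    using System.Runtime.InteropServices;

    internal static class WinTimePeriod
    {
        [DllImport("winmm.dll", EntryPoint = "timeBeginPeriod")]
        private static extern uint TimeBeginPeriod(uint uMilliseconds);

        [DllImport("winmm.dll", EntryPoint = "timeEndPeriod")]
        private static extern uint TimeEndPeriod(uint uMilliseconds);

        public static void Apply()
        {
            // Ask the Windows scheduler for a 3ms time period, matching
            // the timeBeginPeriod(3) above. Every such call should be
            // paired with a timeEndPeriod(3) on shutdown.
            TimeBeginPeriod(3);
        }
    }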

On the client we bias the prediction calculations to account for this "unmeasurable" lag.
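
Conceptually (all names here are hypothetical; the real calculation lives in the engine's prediction code), the bias just pads the measured RTT:

    using System;

    public static class PredictionBias
    {
        // 16ms of guaranteed lag bias: one Windows time period's worth of
        // scheduler latency that the RTT measurement can't see.
        private static readonly TimeSpan LagBias = TimeSpan.FromMilliseconds(16);

        public static int TargetTick(int currentServerTick, TimeSpan measuredRtt, TimeSpan tickPeriod)
        {
            // Pad the measured RTT so prediction still aims far enough
            // ahead even when the measurement reads 0 over localhost.
            var effectiveRtt = measuredRtt + LagBias;
            var ticksAhead = (int)Math.Ceiling(
                effectiveRtt.TotalMilliseconds / tickPeriod.TotalMilliseconds);
            return currentServerTick + ticksAhead;
        }
    }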

Of course, all this can be configured via CVars.
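
Assumed shape of those CVars only (the names, defaults, and flags below are illustrative guesses, not copied from the commit; RobustToolbox declares CVars with CVarDef.Create):

    using Robust.Shared.Configuration;

    public static class TimingCVars
    {
        // Hypothetical: Windows time period requested by the server, in ms.
        public static readonly CVarDef<int> NetTimePeriod =
            CVarDef.Create("net.time_period", 3, CVar.SERVERONLY);

        // Hypothetical: guaranteed lag bias added to client prediction, in ms.
        public static readonly CVarDef<int> NetPredictLagBias =
            CVarDef.Create("net.predict_lag_bias", 16, CVar.CLIENTONLY);
    }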