Networking, lag meter, and gaming consistency guide to Urban Terror

The development of this document has not been discontinued. Original material was taken from the now-defunct www0.org page.

"Hits"? "Lag"? This guide attempts to soothe such symptoms by explaining relevant networking and gaming consistency aspects such as the reading of the lag meter and the use of relevant vars. The purpose is diagnose problems and give solutions or create a knowledge base for potential solutions. Additionally, the information provided can be utilized for improving efficiency even if there's no apparent problem to begin with.

It begins with a section dedicated to the lag meter, due to its importance in diagnosing problems. This is followed by important concepts relevant to the topic. A server administration section is included, which may be informative to players too. A collection of other relevant concepts and frequently asked questions concludes the document.


Introductory remarks

Complexity of the guide

This guide may appear complex to readers without elementary knowledge of networking and game-engine concepts. However, an effort is made to be as understandable as possible.

It may have to be read twice (at least in part) if the reader is encountering the relevant concepts for the first time.

Accuracy of the guide / TODO

Omissions and errors may be present. Improvement is ongoing.

Sentences [in brackets] indicate parts that may be in higher need of revision or attention.

Some repetition is currently used for clarity. An effort is made to redirect the reader to the relevant sections.

Glossary

Client: the player's side; a player's Urban Terror installation, computer or network device.

Server: the server side; the game server's operation, computer, network device or Urban Terror installation.

Lag meter: also referred to as the lagometer, lagmeter, cg_lagometer, the meter, or net graph.

client->server: client to server data delivery.

server->client: server to client data delivery.

The Lag Meter

The Lag Meter, also known as net graph, is of fundamental importance in diagnosing problems.

Parts of the Lag Meter

In basic terms one can distinguish the Top part and the Lower part of the meter.

How to enable the Lag Meter

type in the console:

/cg_lagometer 1

Bring down the console with '`', '~', etc. Alternatively set the var in q3config.cfg or autoexec.cfg, use a toggle bind, etc. (their use is beyond the scope of this guide).
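For reference only, a minimal autoexec.cfg sketch could look like the following (the toggle command is assumed to be available, as in ioquake3-based builds):

seta cg_lagometer "1"          // always show the lag meter
bind F11 "toggle cg_lagometer" // optional: switch it on/off with a key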

Traits: How to read the Lag Meter

A nice lagometer at 64 ping. No problems of significance are distinguishable. The top part is spiky but always light blue, and the lower part is an approximately straight green line.

Healthy state of the lag meter

It generally should be:

upper line: always blue, in a sawtooth form

lower line: always green and about straight.

About as it is in this image ⇨

Top Line

The blue line on the top is the frame interpolate / extrapolate graph. It advances 1 pixel for every rendered frame. A spiky, blue-only line shows interpolation (estimating the state of the frame rendered on screen from two known snapshots of information), performed normally from game server updates to the client. This is the ideal.

It thickens when ut_timenudge is used. The higher the value the thicker it gets. The larger size can potentially hide yellow spikes that may appear.

Yellow on the top line

A single yellow spike on the top line. The reason was a suppressed snapshot rate, as can be seen from the yellow on the lower line (the small part at the lower right).

Yellow on the top means unstable incoming snapshot rate in excess of 50msec fluctuation. The client had to extrapolate (guess the state of the frame rendered on the screen) from past information only. Such a prediction may turn out to be incorrect, creating problems in the consistency of gameplay (and hence "hits").

This is where ut_timenudge may help, because it instructs the client to wait longer than the usual 50 msec before drawing the next frame (however, it should not be used in mild cases). More on timenudge can be found in its own section below.

More technically, 50 msec is the interval between normal updates, since sv_fps/snaps is 20 (20 times a second; 1000 msec in a second / 20 = a 50 msec interval). If the updates - because of an unstable connection or software operation - come at an interval higher than 50 msec (or not at all), the client has to extrapolate (guess what's going on from past information), and that shows as yellow spikes.

A frequent cause of extrapolation is rate / sv_maxrate delaying the sending of snapshots from the server to actively save bandwidth. See the relevant 'yellow on the lower line' section below. Similarly, red lines on the lower line may be directly related to yellow on the top one, since suppressed or dropped snapshots from the server are equivalent to snapshots that are very delayed, hence forcing the client to extrapolate.

[Another possible reason is CPU congestion on the server machine (the server software being unable to send packets in time, 'in time' meaning 20/second according to snaps/sv_fps). This often looks like a very large area of yellow, often reaching the top boundary of the lag meter.]

Lower Line

This is the snapshot latency / drop graph. It advances 1 pixel for every snapshot of information the client received from the server (currently the rate is locked to sv_fps/snaps of 20snaps/s). If the bottom (green) line isn't completely straight, ping is not completely stable. Its height simply shows the level of ping. If it's low, it's short, if high, tall. Its ideal form is to be always green and of a smooth height. Ping is also simply shown by the ping meter.

Red Lines

It means that snapshots from the server to client were dropped (server->client packet loss).

It is generally considered bad [since it interrupts the normal transfer of information in the game's delta compression ring buffer. With active throttling of the bandwidth, such disruption would not occur. At the same time, snapshots of info from the server simply don't arrive in time (which is a problem with throttling too).]

A common underlying reason is a client<->server connection (the upload capacity of the server and/or the download capacity of the client) whose bandwidth has been congested. A lower sv_maxrate (or lower client rates) may improve the situation. However, it may also be any other source of congestion or instability between the involved computers.

It may be directly related to extrapolation (and hence yellow on the top line), since having dropped packets is equivalent to having them very delayed, forcing the client to extrapolate (see the 'Yellow on the top line' part above). In effect, this means that the consistency of gameplay may be improved with ut_timenudge because of this indirect relation of red on the bottom with yellow on the top (more in ut_timenudge's section).

cl_packetdup is not directly related to this, since it duplicates information from client to server (and not server->client) in order to give input commands a higher probability of reaching the server in the next packet in case of packet drop. However, the same cause that produced dropped packets from server->client (and hence red lines) may also cause packet loss from client to server (e.g. a faulty modem rather than a congested server upload), so one may well benefit from cl_packetdup when red lines are seen (it takes the values 1, 2 or 3 for the number of past client packets that any current client->server packet duplicates).

An extra reason for using cl_packetdup is that the lag meter gives no direct indication of client->server transfer accuracy (only server->client), so we are at least partly in the dark about that part of the bargain (a server-side lag meter, monitoring clients' incoming data, would help in that regard if it were available).

Yellow on the lower line

It is theoretically normal, but it should be avoided if not required. It reports "rate delay", which means that, according to our /rate and the server's sv_maxrate, a snapshot from the server was suppressed to save bandwidth.

If you want to improve latency and avoid potential gameplay inconsistencies, you should avoid this state.

To do so, try returning /rate to a reasonable value (say, the maximum of 25000 on a modern broadband connection), increase sv_maxrate (on your own server), or find another server.

The main reason this should be avoided if possible is that rate delaying is often a cause of the extrapolation and yellow on the top line mentioned above: having suppressed snapshots to save bandwidth, the client may be forced to guess the current state from past information (extrapolation), which is often a source of inconsistencies. An occurrence of this can be seen in the previous screenshot with the yellow spike.

This should be less likely to happen nowadays, since the lowest possible /rate value in UrT is locked to 8000.

However, a very populated (and fairly common) server of say 16 players or more, may need to use more than 12000-15000 on sv_maxrate to avoid yellow on the bottom line (which may be a source of yellow on the top). If you can afford it in bandwidth (since uploading congestion is even worse (red lines)), the max value of 25000 is a safe bet (0, the default value is equivalent to the max, 25000). For a detailed account of this see the relevant test case on the server administration section below.

Gaps on the lower line (no color at all)

A gap on the lower line of the lag meter after using /screenshot repeatedly. The resource load at that moment froze rendering temporarily.

A gap means rendering froze temporarily due to CPU or other resource load, often related to disk writes or reads by the engine during gameplay. For example, it can be forced by hitting /screenshot repeatedly, but it may also be a recurring issue, such as a video driver/operating system bug or a video card mechanism kicking in to ease overheating.

An example is shown in this image ⇨

A probable source is any engine operation that takes a while to finish; the engine hardly works in parallel at all, so such gaps may stem from any operation that simply took too long. This should be considered abnormal during gameplay.

Of great help on gaps is this optimized build. It solves two issues that are a source of irritating freezes of rendering and hence gaps: Radio and some other audio not being pre-cached and Funstuff not being pre-loaded either.

Since the issue is often related to disk access (which should be avoided during gameplay), one can try running with /fs_debug 1 and see whether any disk write/read occurs during the gaps.

Black lines

They report that antiwarping was applied to the current client. The client would otherwise be shown as a warper to other players, but antiwarp kicked in to prevent others from seeing that. Warping occurs when the server doesn't receive updates from a client frequently enough, either due to packet loss from client to server or due to an unstable ping. Antiwarp (and its parameters) is controlled by the server (the relevant section is below). [This will simultaneously make gaming choppy for the warper, who will find it (even) harder to play smoothly.]

Since that may indirectly mean there was packet loss from client to server (the net graph does not directly report client->server loss, only server->client, and we don't have a server-side lag meter), cl_packetdup may help on this occasion (i.e. try cl_packetdup 3). In some cases it may not help, since warping may be a result of unstable ping and not loss. A higher cl_maxpackets may also help in that regard, though its limited range in UrT (30-42) makes that doubtful for most occasions.

Limitations of the lag meter

A significant limitation of the net graph is that while it's quite informative about the consistency of the rate of incoming snapshots from the server, we don't know, at least directly, about the fate of the client's input commands sent to the server. In that case, only a meter on the server could indicate directly their fate.

This is the reason cl_packetdup is more important than it may seem at first, since it may not be directly apparent on the lag meter whether its beneficial effects (resending input information for a better chance of reception by the server in case of packet drop) are needed.

As already mentioned about red lines on the meter, their appearance does not necessarily mean that outgoing packets (from client to server) drop as well since red refers to incoming snapshots (from the server) being lost. Client->server drop can happen for instance when the upload capacity of a server is fine, but its download one has been congested, or more commonly, the upload capacity of a client is limited (since most broadband connections have limited uploading) while its downloading is fine.

This situation may indicate the potential benefits of a lag meter for server administrators on the server side in the interest of a more complete picture (potentially in console mode, as many servers run in a shell-only environment) for reporting clients->server packet loss or connection instability.

Important Relevant Game Aspects

ut_timenudge

A thick top line. ut_timenudge of 50 thickens it.

ut_timenudge (also known in UrT as 'Local Net Buffer') delays the processing of game information on the client, in order to achieve smoother visual representation of gameplay. That means that local lag is traded for visual gameplay consistency. Hence it's of importance to know when to use it, when not to use it, and when used, what value would be suitable.

In base q3, negative values increase gameplay responsiveness in exchange for visual jerkiness. Positive values (that we have now) increase smoothness in exchange for input/gameplay responsiveness.

In terms of 'hits' it's a complex matter, since it can be argued both that jerkiness = 'hits lost' / visual smoothness = 'hits improved', and that jerkiness = 'better latency and hence maybe better hits' / positive timenudge = 'local lag'. What's best may depend on the person or even the mood. At least for the time being we only have positive values in UrT, so it's either that or 0. (Personally I find negative values very irritating (with, eventually, only negatives for gameplay), so it's fine as it is for me, but I suspect others operate differently. Unless -10 would be bearable..)

To understand the function of this command, one has to know how the game works in terms of sharing of information between clients and server.

The server and client have a "discussion" during gameplay, exchanging what is going on in the game world. The game world information is sent by the server to a client at a 50 msec interval (because sv_fps/snaps is 20, and 1000 msec (= 1 second) / 20 parts = 50 msec). Similarly, the client sends its own input commands to the server in accordance with cl_maxpackets and FPS, though that is of little importance for the use of timenudge.

If the top line of the lag meter gets yellow spikes, it means that the client didn't get the information at the time it expected (more than 50 msecs passed since the last update) and that "discussion" didn't go as smoothly as it could so it had to guess what happened based on past information, i.e. it extrapolated. By setting ut_timenudge e.g. to 30, the client waits 30ms longer before it draws the gameworld with new information from the server, in the hope of dealing with a more complete picture and achieving a smoother representation of the gameworld in case of delayed incoming information.

When ut_timenudge is used, the top bar of the lagometer gets thicker. Thinking of it visually, it's harder for it to get yellow, because the greater thickness of the bar more easily covers it [(however, that (visual indication) probably shouldn't be taken as a definitive indication that extrapolation was avoided)].

You could experiment with the values 10 or 20 (as proposed by timenudge's coder, TwentySeven) even if there's no noticeable problem, at least in the short term. The range allowed is from 0 to 50 (no negative values are allowed in UrT).
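As a concrete illustration (values are examples only, set in the console or in a config file):

/ut_timenudge 20 // wait 20 msec longer before drawing, only if yellow appears on the top line
/ut_timenudge 0 // revert when it is not needed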

At a low level, ut_timenudge alters the timestamps of incoming packets so that they appear to arrive later. cl_timenudge in q3 mods also allowed altering outgoing packets, but that is exploitable to inflict warping; UrT's implementation avoids that.

This process adds local lag, since you eventually draw on your screen a bit later what happened in the game world (later than the usual unavoidable lag). Consequently, ut_timenudge shouldn't be used if you don't get yellow bits on the upper line of the lagometer or generally if there's no problem that it could solve.

So called "hits" may improve by avoiding extrapolation as described here since jerkiness would be lower. But responsiveness is hit by internal latency.

The unlagged system described below is taken into account when ut_timenudge alters timings in the perception of the gameworld, hence its operation isn't disturbed.

In case of very rare yellow on the top line, it is probably better to not use timenudge at all; the reason is that the local lag inflicted by its operation is probably not worth the chance of a smoother view of the game once every 10 or 20 seconds.

Unlagged code in the game and "hits around walls"

Unlagged code [(operating from the server)], also known as 'antilag' in UrT, removes the need to predict where a target fired upon "will be" to account for lag (as was the case "in the old days"). It does that by keeping track of where and when an attacker actually saw a target, accounting for the latencies involved in the game.

So when a high pinger sees a low pinger, he will still hit him at the spot he thought he aimed at, without having to predict where he "will be" to account for lag (even though because of base lag, the registration of the shot will still happen later compared to being a low pinger).

A common side-effect: That effectively means that a high pinger can hit a low pinger "around a wall": A low pinger may have been in sight of a high pinger when he was fired upon; however, because of high pinger's higher lag, the high pinger saw that happening "earlier in time" from the low pinger's perspective. In low pinger's "real time", low pinger is already hidden but the high pinger still sees him (since the high pinger lags). So the high pinger shoots and unlagged code helps him by letting him shoot what he sees and not having to predict to account for lag. Hence, the low pinger thought someone hit him after he was already hidden. (This phenomenon also applies between 2 high pingers, but it'd be harder to describe. It is also apparent from the perspective of a high pinger being hit by a low pinger.)

But that still doesn't make the high pinger equal to a low pinger (in ping handicap) since the high pinger still has the disadvantage of seeing everything that happens in the gameworld after the low pinger, because of his higher base lag. Low pingers still have the upper hand.

The side-effect could theoretically be counteracted by coding the game to ignore hits in the cases where the phenomenon occurs (though perhaps heavy in resource needs), so I wouldn't be surprised if a coder did some magic about it.

It should be pointed out that in a game full of hitscan weapons (weapons that instantly reach their target) such as UrT (quake3 only had a railgun and the machine gun, importance was lower), unlagged code is of fundamental importance for smooth gameplay. Most probably even relatively low pingers (e.g. 50-60 msec) would find it very hard to hit consistently without it.

It cannot be configured, disabled or altered (at least not legitimately), and most probably rightly so.

Note: such 'hits around a wall' could also occur without unlagged, merely because of ordinary lag, but they would be much less pronounced.

Antiwarp

Antiwarp is a server-side system (aided, truth be told, by prediction on clients) which, simply put, makes people less able to warp; that is why it's hard to see warpers nowadays. Some of the time it's not them warping but the observer having a jumpy connection. Sometimes they do warp, but the server admin hasn't set up antiwarp well. Warping occurs when the server doesn't receive updates from a client frequently enough, either due to packet loss from the client to the server or due to unstable ping.

If antiwarping kicks in for you, it shows as black lines on the lag meter. This may be an indication of packet loss from client to server, so cl_packetdup may help there. However, that's not necessarily true, since it may also be an unstable ping. A higher cl_maxpackets may also help, though that's doubtful given its limited allowed range.

Its server-side configuration is covered below.

/rate

rate controls how much the client allows the server to send, in bytes per second (server->client). It is also limited by the server vars sv_maxrate and sv_minrate, e.g. if the server has sv_maxrate 20000 and sv_minrate 12000, you are limited to between those two values.

It is not of fundamental importance nowadays, since the game uses delta compression and a client usually utilizes only about 4 to 8 kilobytes/sec downstream. This, however, may not include certain overheads, hence the lowest allowable value of 8000 (= 7.81 KB/s), or even somewhat higher values, may force throttling a lot of the time.

Hence, a very low rate, say 8000 or 10000 on very populated servers (16+) is often a source of yellow on the lower and top part of the lag meter, which is a source of inconsistencies. See the relevant test case on the server administration section for an account of that.

Keeping rate at the max of 25000 should be fine nowadays for a common broadband connection: rate means bytes/second, and 25000 is only 24.41 KB/sec (which isn't reached most of the time, if at all). There was a rumor that if the server set its max rate lower than 25000 it would be better to set your rate to that (lower) value, but that is highly doubtful (the server does its own internal checks and clamps the client's rate within its set limits anyway).
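As a minimal console example (assuming a common broadband connection):

/rate 25000 // allow the server to send up to 25000 bytes/sec to this client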

VoIP in recent ioquake3 versions utilizes the rate setting; in case ioUrT includes it, it would be wise to keep rate at a high value, while of course still at a value suitable for one's internet connection.

A very clear indication of whether rate (and sv_maxrate on a server) is set properly is the occurrence or not of yellow on the lower line of the lagometer. If there is yellow instead of green, the client is reporting that rate was suppressed. Ideally, in the interest of lowering latency and avoiding potential yellow on the top line, this should be avoided.

Of course, even in this age where the game engine's needs are peanuts to modern broadband connections, let's not forget that a high rate on a slow connection is worse than a low one, because (according to Carmack) a low rate means slightly choppy gameplay, while one too high for a slow connection means major lag. This is still of some importance, since congestion may come from the server side or another part of the internet.

cl_maxpackets and FPS

What cl_maxpackets does

cl_maxpackets governs the max amount of packets of information the client is willing to send to the server (client->server) a second. It is locked between 30 and 42 in UrT.

This is highly related to the FPS of the client.

Important and slightly incomprehensible bit

Because of certain game engine mechanics that can be summed up as 'FPS is the governing cycle through which everything gets done' (not just graphics), the client is able to send only 1 update to the server every 1 frame, or 1 every 2 frames, or 1 every 3 frames, and so on. Hence, if your FPS is 125 you can send either 125 a second, 62.5 a second, 41.7 a second, 31.25, 25 and so on (no rounding up takes place). If it drops (or is set) to 100 FPS, it can only send 100 a second, 50 a second, 33.3 a second, 25, 20 and so on.

What that means

So, if you have a cl_maxpackets of 30 on 125 FPS, the client will send 25 updates a second. If you have it on 100 FPS, it will send again 25 a second. If you have 42, it will send 41.7 on 125 (i.e. the locked max value of 42 currently, appears to be an optimized value for (a steady) 125 FPS), and on 100 FPS it will send 33.3 a second.

Math based on the above logic can be done for other cases, as in the worked example below.
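As a rule of thumb derived from the above: packets/second = FPS divided by the smallest integer n for which FPS/n does not exceed cl_maxpackets. For example, at 76.9 FPS with cl_maxpackets 42, n = 2 and the client sends about 38.5 packets per second.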

The basic aim here is to have the lowest latency possible by trying to send the most packets per second to the server as possible. e.g. currently on a stable FPS of 125 it appears it is best to leave it at the max value of 42.

An example of a bad decision here: if one draws 60 FPS, the client will only be able to send 60, 30, 20, ... packets per second to the server. That means it won't be able to use the max cl_maxpackets value of 42 but only 30. Similarly, at 100 FPS it will only be able to send 33.3.

Maybe higher allowable values of cl_maxpackets (perhaps up to 125) on next versions of UrT would be beneficial for lowering latency in the game.

The limitation when setting max FPS with com_maxfps

Now, when setting com_maxfps to presumably take advantage of the above information, keep in mind that FPS (visual and internal, same thing) is limited to values of 1000msec divided by an integer; this is because the engine measures frametimes using millisecond integers. So you can either set 125 FPS (1000/8), 111.11 (1000/9), 100, 90.9, 83.3, 76.9, 71.4, 66.6, 62.5, 58.82, etc. (without any rounding up.)

[An exception may be in the case of vertical sync where the rendering is forced to be done exactly on the frequency of the monitor (usually needed on 60Hz).]

As a rule of thumb

According to the above information, it can be concluded that in the interest of lowering latency, cl_maxpackets can be left on the max value of 42 on any FPS occurring, unless one is in serious need of limiting their client's upload bandwidth - this is highly doubtful on recent broadband connections. At the same time, a steady FPS of 125 appears to be a handy state for achieving optimal efficiency, since it utilizes almost exactly 42 packets/second.

A point to take notice of: one could conclude that if one usually reaches 110-120 FPS, it may be preferable to set com_maxfps to 83 rather than 125, since in that case the packet rate reaches 41.7 (for the reasons above) instead of a lower value caused by an unnecessarily higher FPS of, say, 90.9, where the maximum becomes 30.3 (since 90.9/2 = 45.45 overshoots the cl_maxpackets ceiling of 42, leaving 90.9/3 = 30.3). However, this conclusion is misleading: if a computer can usually only reach 110-120 FPS, the FPS may occasionally (because of the complexity of the game, depending on map, gametype, population, etc.) be lower or higher than that, making such assumptions partly unrealistic.

1000/integer/integer

The relation can be summarized as: FPS = 1000/integer; packets/second sent to the server = FPS/integer = 1000/integer/integer.

An unintuitive internal rounding up

Notice that the ceiling set by the var is rounded internally in an unintuitive fashion: com_maxfps 84 goes to 90.09.. while 83 stays at 83.33...

i.e. com_maxfps 84 effectively gives 30.3 packets per second, since it sets the effective max FPS to 90.9; the correct value would be 83, which gives an effective max FPS of 83.33 and hence 41.67 packets per second, the figure we'd want for lowering networking latency with the available cl_maxpackets ceiling.
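[This behaviour appears consistent with the engine timing frames in whole milliseconds, as noted above: the minimum frame time would be 1000 divided by com_maxfps using integer division, so com_maxfps 84 gives floor(1000/84) = 11 msec per frame = 90.9 FPS, while com_maxfps 83 gives floor(1000/83) = 12 msec per frame = 83.33 FPS.]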

cl_packetdup

cl_packetdup duplicates information from client to server in order to give input commands a higher probability of reaching the server in the next packet in case of packet drop. It takes the values 1, 2 or 3 for the number of past client packets that any current client->server packet duplicates.

Turning it on is more important than it may seem at first. This is because the lag meter is quite generous in informing us about what happened to the packets a server sends, but it does not inform us about what happened to our input commands sent to the server (a lag meter on the server could have helped in that regard). Hence it's quite possible to have gameplay inconsistencies stemming from the client->server part of the bargain while the lag meter appears clear. Of course, in the quest for "absolute low latency", one could experiment with a low or disabled cl_packetdup [not currently available] if confident there is no relevant packet loss.

As already mentioned in the lag meter section, red lines on the lag meter do not mean that cl_packetdup will certainly help. This is because red lines reflect loss of information from server to client, and not from client to server, where cl_packetdup applies. However, since the same cause that produced packet loss for server snapshots (and hence red lines) may also create packet loss from client to server (e.g. faulty network equipment rather than a congested server upload), cl_packetdup is often beneficial when red lines appear.

The occurrence of black lines on the lag meter which report the application of antiwarping, may indirectly indicate the occurrence of packet loss from client to server in which case cl_packetdup may help (even though it may be also sourced on unstable ping).
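To experiment, an illustrative console sketch (not a universal recommendation):

/cl_packetdup 3 // heaviest duplication, when client->server loss is suspected
/cl_packetdup 1 // lighter duplication when the connection seems clean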

For Server administrators

The basics

Trying to avoid CPU, network and other resource saturation should be a basic goal.

Don't immediately believe what a random user says about a server that "sucks". The internet is a complicated structure and blame can fall on anything between a client and a server: the server, the client, the ISP of either, the ISP of the ISP of either, a modem in the middle of nowhere, an antivirus, etc. At the same time, users from a certain demographic may have excellent connections to a certain server while others have the opposite. e.g. users from a university will rarely lag on a server hosted at that same university, but the whole rest of the world may.

The lag meter on clients explained above, is again a vital tool for diagnosing server problems. After all, the game is set up for the players.

Antiwarping

This is governed through 2 variables:

g_antiwarp 1 enables it.

g_antiwarptol sets the interval antiwarp is willing to 'forgive' (the default is 50). Hence, a value of 70 lets people who fluctuate up to 70 msec go untreated by antiwarp, while a value of 30 applies antiwarp smoothing even when the fluctuation is only [equal to or] more than 30 msec.

[The value of 50 seems to be suitable since it relates to sv_fps/snaps being 20 (which means updates by the server are send every (1 second = 1000msec/20 =) 50 msec].

[Presumably, an unnecessarily very low value could create considerable additional CPU load to the server in its attempt to smooth out players.]

[It is assumed unlagged code takes into account smoothed out players.]

An unnecessarily high value will miss potential "medium" warping, and keep it untreated.

If you don't know what to do, keep it at the default: 50.
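Put together, a minimal server config sketch following the defaults described above:

seta g_antiwarp "1" // enable antiwarp
seta g_antiwarptol "50" // tolerate up to 50 msec of fluctuation before smoothing is applied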

Uploading consistency

sv_maxrate and rate issues

This - as mentioned in the /rate section - limits the server's upload to any single client (server->client) to a certain number of bytes per second. However, since current q3 gaming uses delta compression, the max value of 25000 (24.41 KB/sec) is never reached in a common server setup. Snapshots are usually uploaded at 4 to 8 kilobytes/sec to each client even on the max setting. This may not include certain overheads, though, since the lowest allowable /rate value of 8000 (which is 7.81 KB/s) is often a source of throttling, and often so are values of 10000 or 12000.

Because of throttling side effects, values of 10000-12000 or less on very populated servers may have to be avoided. Take a look at the relevant test case below for a detailed account of this.

In case ioUrT includes in the future VoIP features from ioquake3, this var will also limit the bandwidth used by that feature.

A very important indicator for the correct usage of rate limits is the state of the lower line on the lag meters of players. If it is yellow instead of green it means that rate was suppressed (that this limiting discussed here had come into effect, unless they limited themselves with /rate). This is not necessarily bad, in fact it is good if your upload bandwidth was indeed in need of limiting. But, you may want to decrease latency and potentially avoid yellow on the top created by this state, in which case yellow on the lower lines of your players' lag meters may mean the need of a higher maxrate if possible.

Yellow on the lower line is often the reason for yellow on the top since suppressing information to save bandwidth, the client may be forced to extrapolate (guess the current state from past information) which is a source for inconsistencies in gameplay, hence an extra reason to avoid excessive bandwidth limiting if possible.

Keep in mind that a low rate to the clients is much preferable to a high one that would congest connections. This is because a low one simply means slightly choppy and inconsistent gameplay, while one too high, which congests connections, means major lag (accompanied by red lines on the meter).

If clients get red on their lag meters (dropped snaps of info from the server), a possible underlying reason may be a connection between clients<->server where the bandwidth has been congested. A lower sv_maxrate (or /rate s) may improve the situation. However, it may be any other congestion/instability source between the involved computers.

A related variable is sv_minrate which forces a minimum rate on clients though in rare cases, a client may need a lower one (e.g. on very limited bandwidth).

sv_minrate and sv_maxrate don't change a client's /rate as shown in /rcon status; the server takes the client's /rate, checks it in server code, and clamps the value it actually uses for throttling according to sv_maxrate and sv_minrate (and other built-in limits).

If you have sv_maxrate on 0, the default, it's equivalent to 25000. If no red appears on clients and no bandwidth saving is needed, it's most probably better to leave it at that max value.
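An illustrative server config sketch based on the reasoning above (adjust to the upload bandwidth actually available; the sv_minrate value of 0 is assumed to leave clients' rates unforced):

seta sv_maxrate "25000" // the engine maximum; 0 (the default) behaves the same
seta sv_minrate "0" // assumed: do not force a minimum rate on clients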

Number of clients

Directly in relation to sv_maxrate, number of clients is important simply for calculating the upload bandwidth provided by the server and avoiding saturation of upload bandwidth as well as limiting CPU needs.

In addition, devs of the game (most notably BladeKiller), have pointed out the importance of not exceeding the recommended number of players. This may be also related to the load on the clients. i.e. a server may cope, but will the clients?

Limited needs for bandwidth and Overheads

It is only logical that upload capacity required isn't only a matter of number of clients x sv_maxrate. This is because 1) the game may simply not need the full capacity of sv_maxrate and send less, 2) the UDP connections for transferring information may include overheads, raising the expected bandwidth being used.
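As a rough worked example based on the figures above: 16 clients x 25000 bytes/sec = 400000 bytes/sec, roughly 390 KB/s or about 3.2 Mbit/s of upload as a theoretical ceiling, while typical usage of 4 to 8 KB/s per client would be closer to 64-128 KB/s, plus the overheads mentioned.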

Download needs for the server are affected by the values of cl_maxpackets and actual FPS of the clients, however, [there isn't much a server can do to limit or increase the reception of those].

CPU congestion (on the server machine)

This is a common reason for inconsistencies, since such game servers can be quite demanding in CPU time (often when multiple instances share the same computer). [In such a case, a client may show big chunks of continuous, tall yellow on the top part of the lag meter, meaning the server was unable to send snapshots in time (according to the 20 snaps/second of snaps/sv_fps).] This can be alleviated by giving higher priority to the server process, or simply by avoiding the problem altogether by running fewer processes or less demanding types of servers. [It may be preferable to avoid reaching 100% CPU use rather than reaching it and relying only on priority settings, since the OS may still periodically give higher priority to other vital functions. This is why running a server on a computer that also runs a client (very demanding on CPU) may be unreliable (though it should be less of a problem on multiple cores nowadays).]

A Test Case: Avoiding an excessively low sv_maxrate

It will be shown that, if affordable, a server admin should use sv_maxrate values on the high side allowed, say more than 12000-15000 on a regular public server of 16 players or more.


Testing shows that with a low max rate, e.g. 8000, on a busy server of 16 players or more, net graph appears like this:

Lots of yellow.png


As we've seen, yellow on the lower line means your sv_maxrate (or the player's /rate) forced the info from the server to come later to save bandwidth; but that in turn meant lots of yellow on the top here, because the client didn't have enough new info and had to guess what's going on from past information (i.e. extrapolate), which is often damaging to the consistency of gameplay.


Roughly:

Lots of yellow with writing.png



Of course, don't increase sv_maxrate to the point where you congest a connection, since a congested connection is even worse: red lines appear (dropped snapshots), instead of the guarantee that the server will eventually send the delayed snapshots [without interrupting the delta compression ring buffer], as happens with suppression for bandwidth saving through sv_maxrate.


Another test case:

A very busy public server (20+ players) with sv_maxrate on 25000:

N1.png

The same instance but having forced rate to 8000 with /rate:

N2.png


notes: if you have sv_maxrate on 0, the default, it's equivalent to 25000. If no red appears and no bandwidth saving is needed, it's most probably better to leave it at that max value.

One can force a minimum rate on clients with sv_minrate; however, it could be a source of problems if a client really needed a lower rate. But this should rarely be an issue with recent broadband connections (even the weakest ones).

Other relevant aspects / FAQ

Networking

What are the basic factors that affect ping?

Ping is primarily affected by the distance between you and the server, and secondarily by the intervening equipment, unless there's a bug or congestion (a full connection) at hand. When saying "distance", allow for the network route's distance potentially differing from the geographic one, though this is usually a secondary factor (these are of course only a few basics of the complex field of Internet routing, but they should be sufficient for most gamers' needs).

A congested link in a route may also be a common factor; this can be investigated with tools such as traceroute.

Methods that may improve ping

The obvious is to choose a different server, since ping is primarily affected by geographic distances and networking setup between a certain client and a certain server. Also avoid a congested connection at all costs. Other improvements may be:

Traffic Shaping

Traffic Shaping may be utilized to give higher priority to UDP packets (which are used exclusively by the quake3 engine; TCP does not come into play), or to achieve the same goal by prioritizing certain ports or addresses. However, this improvement may only be noticeable on relatively busy connections that potentially already have congestion issues. Traffic shaping of this kind (e.g. prioritizing UDP packets) is said to be employed by certain network cards aimed at gamers, but they should share the same limitation as software traffic shapers, in only being useful on relatively busy connections; and while a hardware implementation may theoretically be faster or more efficient, that is not certain.

Operating System and driver tweaking

UDP packet throughput may be improved by tweaking certain networking parameters relevant to the Operating System or drivers you are using, after some searching. However, beware of 'magical settings' that some propagate blindly - you have to know what you're changing, and some tweaks may be completely irrelevant. e.g. it's common to find TCP connection tweaks for many games if you search a little, but the problem is that TCP isn't used at all in UrT!

Improved drivers, improved underlying software in general

Updating drivers or the operating system elements responsible for networking (e.g. with a newer version of a linux kernel or using a newer version of Windows) may be a source of improvement. However, that does not mean new bugs may not surface too. Generally though, improvement is more probable in newer versions unless they are in a beta or alpha stage.

Wireless networking optimization

Notice that often (but not necessarily) wireless communication will inflict latencies on a connection (compared to ethernet) regardless of signal quality and setup (if it's physically easy to use ethernet, it's most of the time better for achieving better networking latency). But there is often room for improvement.

An important diagnosing tool is a software wireless scanner for scanning for interfering signals. Other signals should ideally be at least 5 channels apart to not interfere. Weak signals on the same channel as the closest device, may not appear on the scan; hence it might be a good idea to change channel (or turn the device off for a while) for a complete scan.

The signal should be as strong as possible; [60% or more shouldn't usually pose a problem.] If the networking devices involved are too far from each other one could try to bring them closer, use other antennas or adjust their positioning, or use other relevant boosting means.

Newer drivers (for the wireless device), or also newer firmware for a router may help.

Instability of ping on low ping? Is it possible?

Instability of ping can occur regardless of how low the ping is (especially if one has saturated their connection to their ISP, e.g. by downloading something at the same time), which means a high ping can be stable, though more often it's the high pings that come with instability (the farther apart the endpoints, which raises ping, the higher the probability of issues).

What kind of ping fluctuation is dangerous?

[If the fluctuation exceeds 50 msec around the time a snapshot from the server is due, there should be definite extrapolation (yellow spikes); even lower fluctuation may make it possible, just less frequently. Also, ping may have to spike at the 'right' moments for negative effects to occur. This is related to the periodic dispatching of information from the server and clients (which occurs at intervals of 50 msec).]

Do High Pingers have an advantage?

No, mostly. This is because high pingers always see the gameworld later than low pingers (even with unlagged code). Low pingers always have the advantage in an 1on1 situation. They simply shoot first in the game (in the hypothetical scenario of two players hitting fire at the same time in the real world).

An argument that could be accepted for not letting high pingers play (gameplay-wise, not getting into international politics) is that high ping and unlagged code alterations do make the game world look a bit surreal (in the negative sense of the word). e.g. you'd naturally expect someone to react in a certain way (usually instinctively), but then the delays and antiwarping make you think something's weird. (That can be perceived at all pings, but the higher the ping the larger the 'discontinuity'.) (This could potentially be in itself an advantage (or at least score-changing) for the high pinger or others.) This is aggravated by unlagged code's side effects described above.

But it has to be stressed that the rumor that all high pingers get an advantage by default (purely on ping grounds, at least past a certain point) is false. Purely on latency grounds, low pingers are the undeniable winners.

"Why do I still die when I'm high ping when there is unlagged (and antiwarp) code?"

You die more in high ping servers since, even with unlagged, you are still shown earlier to opponents than you see them. It's as simple as that.

"Isn't timenudge only used to increase your ping?"

No. That's only a side effect (details are included above). If that were the only thing it did, coders would be crazy to include it.

In fact, it doesn't actually increase ping at all; it only adds an internal delay to the way the client perceives and projects the game world.

"How can I become unhit?" (by networking means)

"Unhits" usually just know how to move.

You can theoretically make your connection a mess for the server in order to warp, but that would be a "benefit" mostly on servers with antiwarping off, and it'd be common for the warper to not be able to hit a thing because their client wouldn't have a good view of what's going on.

On most servers, antiwarping will prevent any such exploitation from having a beneficial impact for the exploiter.

"I get very rare yellow bits on the upper line of the lag meter, should I use ut_timenudge?"

Probably not. The reason is that the local lag ut_timenudge inflicts on the client, is a significant handicap (especially above the value of 20) and not worth the chance of getting a smoother view of the gameworld once every 10 or 20 seconds. This is most apparent in games between players of similar skill, where the slightest delay may mean a big difference in outcome.

"My lag meter is clear. Is it still possible to be the server's fault that I get hitting inconsistencies"?

A lot of the time, hitting inconsistencies are just related to a person not aiming well enough or not understanding how deliberate inaccuracies of weapons work. It's natural, it's a complex game in that department, especially against players of considerable experience (that often know how to move and become 'unhit' (legitimately)). I don't think it's the place for a "how to be a better player" discussion so we won't delve into that.

I would doubt with a clear lag meter that blame could be put on the server.

However, you can never be 100% sure, of course. e.g. one could hack a server to be inconsistent on purpose, or an obscure bug related to a variable (or a recurring bug) could be found, though such occasions are highly improbable.

Misconfigured antiwarping, or warping in general, could be a source of inconsistencies while the lag meter shows fine, since warping is usually sourced in the instability of another player's connection, not the observer's. Even then, however, warping should be apparent, and when an enemy is on target, previous or later warping shouldn't be the reason hits still miss.

"Can it be the client's fault (that I get hitting inconsistencies) on a clear lag meter?"

It is quite possible actually, certainly more likely than it being the server's fault on a clear meter. This is because while the lag meter is very generous in informing us about incoming snaps from the server, it doesn't tell us what happened to our commands sent to the server (their rate governed by cl_maxpackets and FPS). Consequently, cl_packetdup is always a good bet in that department, even though, in the quest for "absolute low latency", a disabled or low value of cl_packetdup could be used if one is confident of the connection's consistency.

Hardware lag, such as the computer-to-monitor connection, the frequency of the monitor, the response time of the monitor, vertical sync lag, or even mouse input lag, could be a source of inconsistencies on a clear meter, though granted in too limited a fashion to often be noticeable. Most of the time it surfaces as the game slightly lagging, not as input commands passing through noticeably inaccurately. It shouldn't be underestimated or disregarded though.

"Why do we throttle bandwidth and not let some snapshots just drop in a congested connection?"

[Since throttling bandwidth with sv_maxrate/rate already creates a delay of incoming information from the server (and whatever side-effects that may introduce, such as yellow on the top line), one could think it makes sense to just let the server send the maximum amount of data it could, regardless of congestion probability, since with throttling there's a disruption of optimal gameplay anyway. That is not the case, however, since with dropped packets the delta compression ring buffer is interrupted. At the same time, even if there were no such interruption, periodically holding back information through throttling gives a higher probability of smooth transmission. Secondarily, it's better housekeeping for a server administrator who wants to keep bandwidth use in check.

That again explains why Carmack suggested it's far worse to have red on the lag meter than a connection throttled through sv_maxrate/rate: with red (which is often due to bandwidth congestion) the result is major lag, with a throttled connection only choppy or slightly inconsistent gameplay.]

Do client->server packets get larger on a higher FPS even with a certain cl_maxpackets?

[Yes, because they incorporate information from more frames. That means FPS matters in considerations of preventing bandwidth congestion.

Note, though, that only their data content grows larger; it is a higher cl_maxpackets that would increase the number of headers (and the consequent overhead related to them).]

Ping command vs Engine's ping

The game engine doesn't 'ping' like the regular Operating System commands do. It uses its own recorded timings of game packets, e.g. from cl_parse.c: "cl.snap.ping = cls.realtime - cl.outPackets[ packetNum ].p_realtime;".

Now, whether that's significant is another matter. Most probably it is a more accurate representation of ping strictly for gaming in the engine, since regular OS pinging may not show certain engine overheads or inconsistencies. It could be argued, though, that the difference may be too low to be noticeable, at least if everything runs fine internally.

Its significance should be high in case certain overheads are shown only on engine ping.

Traceroute - investigating the low level networking quality

Tools such as traceroute (tracert on Windows) are often used by networking specialists to assess the quality of a connection; this can be useful for gaming, since one can investigate whether a problem lies at a particular point in the route between oneself and the game server. For example, if one notices that the 4th node in the route is the only one generating ping spikes, it might be the reason for the instability, ruling out the problem being on the gaming computer; that may suggest changing IP (which changes routing with some ISPs), using a different ISP or server, raising the issue with the ISP, etc.

Changing IP may help

This can be life-saving; some ISPs route differently if you simply pick a different IP on a dynamic IP service (which can usually be done by reconnecting to them). That means problems with the network might be alleviated (and vice versa) if one simply changes IP.

Graphical/Visual

Is it true that on 60Hz I should have 60 FPS for optimal latency?

It's a misconception that on a monitor operating at 60 Hz it's optimal, latency-wise, to have 60 FPS in the game. While theoretically the monitor polls the video card 60 times a second, there is still a chance of lower visual latency if the FPS is higher. For example, with 125 FPS vs 60 FPS there is a chance (not constantly) of a maximum improvement in visual latency of 1000/60 - 1000/125 msec, which translates to about 8.7 msec.

Also, subsystems such as audio or input might work more efficiently since they are not strictly bound by visuals, while FPS does usually affect them (since it also governs the rate of the global loop of the application).

Now, a related issue - though not directly related to latency - is image tearing. It's an irritating artifact that occurs when the monitor refreshes its screen while the image on the video card is not composed of a single frame (usually noticeable below 70 Hz). To avoid that, one could use vsync and hence, at say 60 Hz, automatically lock to 60 FPS; this is completely different from setting com_maxfps 60, which will still tear, probably worse than at higher FPS, since IMO the "more fragmented" tearing of higher FPS looks better than the big tear across only 2 frames. Vsync, however, inflicts local lag on the game and display, so it should be avoided for optimal gameplay unless visual beauty is preferred to performance. It could be argued, though, that smoothness in visuals may partly improve gameplay performance; trade-offs.

At the same time, cl_maxpackets - the rate of packets from client->server - is affected by FPS: at 60 FPS you can in reality have at most 30 packets per second going from the client to the server, while at 125 almost 42, so latency will be better at 125 FPS in networking terms too (besides the visual advantage).

Vertical sync in relation to latency

Vertical sync adds internal lag to what is shown to the player on the monitor. It should be avoided if not needed. However, it may be needed (at least as a personal preference) by some players, since without it image tearing may appear on monitors of low frequency: at 60 Hz and below it is quite apparent, at 70 Hz and higher not so apparent.

It may be also more problematic on SLI setups.

Other monitor aspects

One could try to have the highest vertical frequency possible on CRTs, and the highest refresh frequency and lowest response time possible on LCDs.

Vertical frequencies (for CRTs) and refresh frequencies (for LCDs) at or above 70 Hz should be sought, and response times (for LCDs) below 10 msec are preferable.

Increasing and stabilizing FPS

Having a steady or/and sufficiently high FPS is beneficial in several ways in relation to gaming consistency.

e.g. apart from the obvious advantage of smoother rendering on screen, it is important in relation to cl_maxpackets and FPS, where a steady FPS is preferable for lower latency in client->server packets.

As for methods of improvement, we can briefly mention that almost everything in the video settings that makes the game look 'uglier' increases FPS; e.g. a lowered resolution, disabled antialiasing, lowered texture quality and disabled anisotropic filtering are four major sources of increased FPS. It's probably beyond the scope of this guide to go into detail; there are, after all, plenty of sources on the matter. Check also the FAQ of the game.

Servers (or gametypes) with less action are also less demanding on the video card.

Keep in mind though that a sufficiently modern computer can handle Urban Terror on high quality settings with a steady FPS, so no need to sacrifice much from image quality.

Is visual latency lower on higher FPS even if FPS is higher than monitor's frequency?

[There may be lower visual latency on higher FPS, e.g. up to a max of 1000/60 - 1000/125 = 8.7 msecs when comparing 60 with 125 FPS.]

Is image tearing lower on a higher FPS?

[It may be assumed that tearing worsens the higher the FPS, since fragments of more frames would be in a "tear". But that may not be the case: 60 FPS on a 60 Hz monitor without vsync may show more apparent tearing artifacts than 125 FPS at 60 Hz. The reason may be that even though tearing is more fragmented, it appears smoother when spread over more fragments.]

"Is it true that the human eye can only distinguish up to <number> FPS?"

The human eye and brain are organic devices, their abilities can not be measured in FPS and Hz that directly. One can only make controlled tests to see what an individual may distinguish.

An average person may make out differences up to a number, an FPS gamer because of experience in the practice, perhaps more.

At the same time, it should depend on the clarity of the image and the difference to be spotted in question, the state of mind of the individual (such as fatigue), and other such factors.

It is reported that pilots as test subjects could see a flashing image of a plane at 220 FPS[1]. However, even if that's true, it cannot be taken as definitive proof of the human eye's ability to see differences at a certain FPS in a game in all situations, since it doesn't cover cases of complex video and it depends on the person and their state of being; i.e. it can be more or less, depending on the individual, the situation, and the video in question.

What will be my FPS on a modern machine?

On a modern CPU and GPU, the CPU is usually the bottleneck. [This is because the engine's rendering load is light for modern graphics cards, while much of its per-frame work is still done on the CPU.]

This can change by choosing very high quality settings for the GPU, usually meaning very high antialiasing and anisotropic filtering settings. That way you make the GPU the bottleneck... usually.

So, in practice, on low quality settings a modern GPU system will usually not be able to do its best, being restricted to a certain figure by the CPU. When we say "CPU", we can probably also include memory transfer capabilities.

(The regime may improve in 4.2 if it includes VBO, since that implies more work is sent to the GPU.)

i.e. in hard numbers, FPS is usually high and will usually exceed 125, but very high GPU settings may lower it, with the status quo described here applied.

Input related

/in_mouse on Windows clients

[Try to keep it at -1 (the basic Windows desktop API input), since 1, which uses DirectInput, is often reported buggy (and carries overheads according to Microsoft[2]), even if -1 is not necessarily the optimal method either.]

(The var does nothing on non-Windows clients, by the way, unless you count 0, which disables the mouse on ioq3.)

Other mouse tweaks for lower latency

Consider using a higher Report Rate and a higher DPI setting; look them up for your specific device. The Report Rate (or 'Polling Rate'), which is usually overlooked, is probably considerably more important than DPI: e.g. a report rate of 1000 reports/sec means only 1msec of lag on mouse input (1000msec/1000), but if it is only 125, it is 8msec (1000msec/125)!
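As a minimal illustration of the report-rate arithmetic above (a standalone calculation, not tied to any particular mouse or driver):

    /* Worst-case delay between mouse reports for common USB polling rates. */
    #include <stdio.h>

    int main(void)
    {
        const int rates[] = { 125, 500, 1000 };   /* reports per second */
        for (int i = 0; i < 3; i++)
            printf("%4d reports/sec -> up to %.1f msec between reports\n",
                   rates[i], 1000.0 / rates[i]);
        return 0;
    }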

Why can't I press 3 or 4 keys on my keyboard at the same time?

That's a hardware limitation of the keyboard itself (called, somewhat nonsensically, 'ghosting'). You can try other combinations of nearby keys that may work. Higher quality (more expensive) keyboards also seem to handle this problem better. If one is very determined about the issue, one could test drive keyboards for this specific problem before buying them.

/r_finish

[This improves input lag, at least when vertical sync is on.] However, it lowers FPS, so it should be avoided if not needed[, especially if vsync is off].

Other Engine Mechanics

"Demos show inconsistencies in hits"

While inconsistencies that really happened (while the game was actually being played) may show up in a demo, demos don't record exactly what happened. There is a guarantee of that sort only if g_synchronousclients is set to 1 while the recording is made. [The setting instructs the game to wait for all clients to update.] It may add lag to gameplay while it is on. Its purpose is to achieve smooth demo recording.

In any case, whatever provisions for demo recording the game designers have made, you will still not see exactly what happened. For example, video device stuttering (such as the gaps on the lower part of the lag meter) will not be reproduced exactly.

sv_fps/snaps being locked to 20, cl_maxpackets

This may be self-explanatory from the information in the sections above, but having sv_fps on the server and snaps on the client locked to 20 means that the server sends information to a client every 50msec (1000msec / 20 = 50msec), which in turn means there is considerable local latency built into the game. It is logical to assume that allowing a higher sv_fps/snaps would lower in-game latency; however, that may also mean higher resource needs.

cl_maxpackets (being locked between 30 and 42) is a directly related (though not directly affected) variable, as it controls (in relation to the client's FPS) the rate at which packets of information are sent from the client to the server. Its (complex) operation is described in more detail in a relevant section, since it is adjustable, although only within a limited range.
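The following is a simplified model (a sketch of the commonly described behaviour, not the actual engine source) of how these two numbers play out: the server sends a snapshot every 1000/sv_fps msec, while the client sends at most one packet per rendered frame and skips sending if fewer than 1000/cl_maxpackets msec (integer division assumed) have passed since the previous packet.

    /* Simplified model of snapshot interval and client packet rate.
       Not the actual engine source; integer-msec timing is assumed. */
    #include <stdio.h>
    #include <math.h>

    static double packets_per_second(double fps, int cl_maxpackets)
    {
        double frame_msec = 1000.0 / fps;        /* time between rendered frames */
        int threshold = 1000 / cl_maxpackets;    /* minimum msec between packets */
        int frames_per_packet = (int)ceil(threshold / frame_msec);
        if (frames_per_packet < 1)
            frames_per_packet = 1;
        return fps / frames_per_packet;
    }

    int main(void)
    {
        int sv_fps = 20;
        printf("snapshot interval at sv_fps %d: %d msec\n", sv_fps, 1000 / sv_fps);

        printf("125 FPS, cl_maxpackets 42 -> ~%.1f packets/sec\n",
               packets_per_second(125.0, 42));   /* ~41.7 */
        printf("125 FPS, cl_maxpackets 30 -> ~%.1f packets/sec\n",
               packets_per_second(125.0, 30));   /* ~25.0 */
        printf(" 60 FPS, cl_maxpackets 42 -> ~%.1f packets/sec\n",
               packets_per_second(60.0, 42));    /* ~30.0 */
        return 0;
    }

Under this model the effective client->server rate depends on the client's FPS as much as on the var itself, which is why a steady FPS matters here.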

Prediction vars

These are vars that govern client-side prediction.

cg_smoothclients : If set to 0 (disabled), the client waits for the server to determine the positions of players.

cg_predictItems : Similarly, at 0 the client waits for the server to determine the state of picked-up items.

(cg_nopredict : This should probably never be set to 1, since it makes gameplay very choppy; i.e. leave it unchanged unless you know what you are doing.)

One could experiment with cg_smoothclients and cg_predictItems. 0 may be better for being more "real"; at the same time, the smoothing lost by turning them off may create other problems. Experimentation may be required.

"Should the value of sv_fps/snaps be taken into account when setting cl_maxpackets?"

[Apparently not, even if some websites suggest a relation. According to the engine code, the reception of input commands does not appear to collide or directly correlate with the dispatching of server snapshots.]

What about 125 FPS for jumping?

It doesn't matter anymore; it only did in ancient versions of Q3. With cg_physics set to 1 (the default), any FPS gives optimal (and identical) game physics.

The strong relation of FPS with networking and the way the game works in general

It should always be taken into account that FPS in the game is not just related to visual quality (or just to visual latency), as one might logically assume at first. It is strongly tied to the way the game engine operates as a whole. Put simply, with a strong example: if there is no frame, there is no information sent to the server by the client.

Ping, FPS, Snaps etc.: meters for standard deviation and max spikes / max FPS drops

This build/source of the engine includes some networking and engine meters that may be useful for assessing networking, FPS and gaming stability.

"FPS is lag" - general passive latency

What I mean by that is that FPS is a fundamental factor in the game engine mechanics; it is not just a visual indicator of image quality, or even just a way to measure visual latency. It is also governed by the global loop the game engine runs through to do its work. So if the FPS on the client is 125, there is a base local lag of 8msec (1000msec (1sec) / 125) before it sends its information to the server. It is no accident that the rate of information from server to client (snapshots per second) is governed by a var called sv_fps.

i.e. It could be argued that at 125FPS the "general passive latency" is 1000/125 = 8msec, at 60FPS 16.66msec, etc.

Is it a good idea to go with 83.333 FPS for achieving 41.7 client packets?

(Yes and no.) I wouldn't go with 83.333 that easily. This is because, besides paying the 'general passive latency' of simply operating slower [about 12msec per frame at 83.333 FPS versus 8msec at 125], you also don't know whether you will drop below 83 and hence simply lose the 41.7. Of course, if for example one is 'always' at 100-110 FPS (so that neither reaching 125 nor dropping below 83.333 is likely), it might be a better idea.
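To put numbers on the trade-off, here is a quick comparison using the same simplified packet-rate model as the cl_maxpackets sketch above (the 83.333 figure is simply the one from the question):

    /* Compare 125 FPS with ~83.3 FPS at cl_maxpackets 42: the packet rate is
       roughly the same, but the per-frame ("general passive") latency is not.
       Same simplified model as before, not the actual engine source. */
    #include <stdio.h>
    #include <math.h>

    static double pps(double fps, int maxpackets)
    {
        int threshold = 1000 / maxpackets;              /* min msec between packets */
        int n = (int)ceil(threshold / (1000.0 / fps));  /* frames per packet */
        return fps / (n < 1 ? 1 : n);
    }

    int main(void)
    {
        const double candidates[] = { 125.0, 1000.0 / 12.0 };   /* 125 and ~83.33 FPS */
        for (int i = 0; i < 2; i++)
            printf("%.1f FPS: frame time %.1f msec, ~%.1f packets/sec\n",
                   candidates[i], 1000.0 / candidates[i], pps(candidates[i], 42));
        return 0;
    }

Both end up around 41.7 packets/sec, but the lower frame rate carries about 4msec more base latency per frame and leaves less headroom before dropping below 83.333.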

Why is my CPU (or at least 1 Core of it) always at 100%?

When you check the CPU usage of ioq3, notice the following:

For reasons related to the [low] precision of the 'sleep for some time and do nothing' function in a multi-platform environment, the client code skips any CPU sleeping above 100 FPS. This means that if you go above 100FPS, one core will be used at 100% regardless. If you are below 100FPS, then, if that one core can cope with the load, you may see reduced CPU usage.

(1 core, because the client isn't multithreaded; even if the load is spread between n cores, it still adds up to one core's worth, i.e. 100%/n on a meter that averages over all cores.)
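A rough sketch of the kind of throttling decision being described (illustrative only; this is not the actual ioquake3 source, and the 100 FPS threshold and helper functions are stand-ins based on the behaviour described above):

    /* Illustrative sketch of frame throttling; not the actual ioquake3 code. */
    #include <stdio.h>
    #include <time.h>

    #define SLEEP_FPS_LIMIT 100            /* assumed threshold from the text */

    static int msec_now(void)              /* CPU-time clock, good enough here */
    {
        return (int)(clock() * 1000 / CLOCKS_PER_SEC);
    }

    static void sleep_msec(int msec)       /* stands in for Sleep()/usleep() */
    {
        (void)msec;                        /* stubbed out to keep this runnable */
    }

    static void wait_for_next_frame(int com_maxfps, int frame_start)
    {
        int frame_msec = 1000 / com_maxfps;

        if (com_maxfps > SLEEP_FPS_LIMIT) {
            /* Spare time is only a few msec; a coarse sleep could overshoot,
               so busy-wait instead -> one core shows 100% usage. */
            while (msec_now() - frame_start < frame_msec)
                ;
        } else {
            /* Enough slack to hand the time back to the OS. */
            int spare = frame_msec - (msec_now() - frame_start);
            if (spare > 0)
                sleep_msec(spare);
        }
    }

    int main(void)
    {
        wait_for_next_frame(125, msec_now());   /* spins for ~8 msec of CPU time */
        wait_for_next_frame(60,  msec_now());   /* would sleep instead */
        printf("done\n");
        return 0;
    }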

Other

"Lag by monitors/antiviruses/etc. Is it possible?"

Yes. Some monitors (antiviruses etc.) inspect everything that passes through a network connection, so they may delay communication even when CPU usage isn't near 100% (and probably more so when it is). They may have components performing such operations that can be disabled or set to different levels of detail.

In most situations they shouldn't pose a considerable threat to networking stability. Turning them all off can be a good troubleshooting step, though.

It should be noted this usually falls into the category of "theoretical lag, probably impossible to notice"; e.g. so what if an extra "if then else" is run on a packet, it's usually not noticeable in the real world. But it is worth considering in the rare cases where the inflicted load (or active delay) happens to be considerable.

Lowering CPU needs on the client

Closely related to FPS optimization, since it directly affects it: decreasing CPU needs can be done simply by drawing less 2D stuff on the screen (e.g. not drawing the lagometer if it isn't needed, since it is then not only not drawn but also not internally calculated) and by disabling certain effects.

Choosing a less populated server, or a gametype with less action (e.g. TDM is usually 'messier' than TS), may help in that regard.

Again though, a sufficiently modern computer may see little or no improvement from such tweaks.

Can I run a server on the same machine I play on?

If the network can handle it, it shouldn't be a problem.

A point of importance, as far as gameplay consistency is concerned, is CPU usage. If it reaches 100%, it may create inconsistencies for players. Even if the OS scheduler does its best to delegate priorities, latencies may surface. Granted, certain types of schedulers may give different outcomes (e.g. the Linux kernel can be compiled in different ways; a 'desktop' scheduler may be problematic compared to a 'server' one, though the higher latency potentially inflicted by a server scheduler isn't ideal for the player on the machine).

A hack that may help in that case is slowing the client down by adding a usleep() call to one of its main functions, preventing it from reaching 100% on a processor (since it otherwise runs as fast as it can and hence saturates a core, unless throttled by FPS restrictions). Of course, that may be undesirable if the client then cannot reach an optimal FPS (of, say, 125).
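A crude illustration of the kind of hack meant here, assuming a POSIX system and a client being built from source; every name around the usleep() call is a hypothetical placeholder, not an actual engine function:

    /* Crude illustration of slowing a client loop with usleep(); the
       run_one_client_frame() name is a hypothetical placeholder. */
    #include <unistd.h>    /* usleep() */

    static void run_one_client_frame(void)
    {
        /* ...the client's normal per-frame work would happen here... */
    }

    int main(void)
    {
        for (int i = 0; i < 1000; i++) {    /* stands in for the endless main loop */
            run_one_client_frame();

            /* Yield ~2 msec per frame so the process cannot pin a core at 100%
               and starve a game server on the same machine. The cost: the extra
               2 msec per frame caps FPS well below 125 once real frame work is
               added, which may be undesirable. */
            usleep(2000);
        }
        return 0;
    }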

On recent multi-core processors this issue may not appear, since ioquake3 doesn't utilize multiple cores (yet?), so the probability of saturating the CPU with one client and one server is low.

Should I give 'real time priority' to the client?

On multicore processors, as long as other processes aren't consuming all the remaining cores, this would either give no advantage at all or a very minimal one.

On single-core processors it risks making things worse: the game runs at 100% on a single core by design, and real-time priority may hang processes needed by the game itself, e.g. the mouse or audio subsystems.

It's largely a workaround (and often a negative), not common practice (otherwise games would do it by default).

Also, beware of the placebo effect.

It might be saner to go only slightly above normal priority, if at all.

Could running a different OS (Operating System) lower latency?

It's possible. For example, Linux has a kernel (the central, basic part of an OS) whose scheduler (the low-level software responsible for assigning priorities to jobs and processes) can be compiled in a 'Desktop' way (instead of a 'Server' or 'Standard' way) and can run at 1000Hz. However, a) that's not necessarily better than other OSes that may schedule and operate similarly in those respects (though that may be hard to establish if they are closed source), and b) take note of the FPS, which may be very different under different OSes. A very different FPS, especially on systems that are weak (for the game), may be the deciding factor. This is often down to a driver being better for a certain OS (usually because the market share is larger there).

Other factors may be at work too, e.g. more efficient programming in certain areas, and it would be quite hard to make an assessment without at least measuring latencies in a real test environment, comparing the different systems.

Last thoughts

A lot of the time there are no magic settings, because an action taken when it shouldn't be can make things worse. For example, ut_timenudge is often not needed at all and is then a disadvantage, since it adds local lag, but when it is needed it can be an advantage, since it makes gameplay smoother. Similarly, sv_maxrate can be beneficial at low values to save bandwidth or to avoid saturating a connection, but at values too low for the number of players it may lead to gameplay inconsistencies.

Appendix

An alternative engine build for potential problem solving

ioq3-urt; builds of ioq3 engine for urt

Sources

References

  1. "How many frames per second can our wonderful eyes see?" http://amo.net/NT/02-21-01FPS.html
  2. "Taking Advantage of High-Definition Mouse Movement [In reference to in_mouse]" http://msdn.microsoft.com/en-us/library/ee418864%28VS.85%29.aspx

Where to send corrections or feedback

Send us a message