Ralf Ertzinger wrote:
Hi.
On Tue, 11 Mar 2008 22:15:47 -0500, Bruno Wolff III wrote:
If you are talking about the QoS flags in IP packets, the answer is no. There aren't that many states, and generally you can only describe broad things about packets (such as "I want low latency" or "I want high throughput"), not detailed bandwidth allocations.
You can do bandwidth allocation based on QoS flags (with the newer DSCP interpretation of the relevant IP header flags you get 64 possible traffic classes, which is quite a lot).
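For what it's worth, a process can set a DSCP codepoint on its own traffic without any special tooling: on Linux the six DSCP bits live in the upper part of the old IPv4 TOS byte, writable via the IP_TOS socket option. A minimal sketch (the value 46, "Expedited Forwarding", is just one of the 64 codepoints, picked as an example):

```python
import socket

# DSCP occupies the upper 6 bits of the IPv4 TOS byte, so a DSCP
# codepoint is shifted left by 2 when written through IP_TOS.
EF = 46                  # "Expedited Forwarding", one of 64 possible codepoints
tos = EF << 2            # 0xB8 / 184

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Read the option back to confirm the kernel accepted the marking.
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # prints 184
s.close()
```

No privileges are needed to set IP_TOS; whether any router along the path honours (or preserves) the marking is a separate question, as discussed below.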
Also, I ran a few experiments testing the QoS bits and found that they seemed to be getting stripped in transit (I expected them to be ignored, but at least preserved). I didn't do enough experiments to see where this was happening or to get a good idea of how common it is.
On the other hand, QoS is very much an end-to-end process, which means that all involved devices (as in: routers) have to agree on the QoS policy (which DSCP flag signifies which traffic class, and what preference that traffic gets). This is why it does not work over the internet as a whole. Your provider usually has a different view on what constitutes important traffic, and thus strips your classification (or does not strip it, but ignores it).
I was thinking more along the lines of just the local machine's behavior, with different connections having higher or lower priority for outbound traffic (which is often what hurts response time the most on slower connections while longer-running transfers occur). I really don't know how effective QoS is for this, so it may be a bad way to approach the issue.
If an update connection had low priority for the bandwidth resources, that connection should be postponed whenever a higher-priority connection wants to push outbound traffic. A browser would then get to send its page requests or ACKs ahead of the update utility's transfer packets; the result would be a much more responsive browser while still using most of the available bandwidth. Having the QoS flags stripped or mangled once the traffic leaves the local machine shouldn't undermine that improvement, would it?
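That kind of local-only prioritization doesn't even need the IP header bits: on Linux, SO_PRIORITY sets a per-socket packet priority that the outbound queueing discipline can use to order packets on the interface (the classic pfifo_fast qdisc maps these priorities to bands; note that some modern defaults such as fq_codel ignore them). A rough sketch, with the connection roles and priority values purely illustrative:

```python
import socket

# On Linux, SO_PRIORITY sets the priority the local queueing discipline
# uses when ordering outbound packets. Values 0-6 need no special
# privileges; with pfifo_fast, 6 maps to the highest band and 1 to the lowest.

bulk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)          # e.g. the update transfer
bulk.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 1)         # low priority

interactive = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # e.g. the browser
interactive.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 6)  # high priority

# When both sockets have data queued on the same interface, the kernel
# dequeues the interactive socket's packets first.
print(bulk.getsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY))         # prints 1
print(interactive.getsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY))  # prints 6
bulk.close()
interactive.close()
```

Nothing here survives past the local interface, which is exactly the point: the prioritization happens where the outbound queue actually builds up.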
I'm just thinking it may not require full end-to-end support to get some benefit. The incoming connection would not be slowed or postponed to let the browser respond, but by not ACKing what comes in until the outbound queue clears up, I think it might help anyway.