Corporate Ethernet Latency Experiment R2 (IPv6 access)

These are measurement results from a low-end corporate Ethernet. There are some 20 switches and 2 server rooms; the longest path between two nodes traverses 6 switches in and between the server rooms.
In server room 3 a frame generator is attached to an HP2824 switch. From there the path goes through an HP1800-24G, another HP1800-24G, a DGS-3024, an HP1800-24G, and finally an HP1800-8G in the tape rack in server room 1. A Linux machine that timestamps packets is connected to this 1800-8G and listens for the packets from the frame generator. The generator produces 10 priority-tagged and 10 untagged packets per second, with 100ns p-p jitter.
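For reference, this is the kind of computation the monitoring side performs: given per-packet arrival timestamps for a nominally periodic stream, peak-to-peak jitter is the spread of the inter-arrival gaps. This is my own minimal sketch, not the actual monitoring program; the function name and the synthetic timestamps are illustrative.

```python
def peak_to_peak_jitter_ns(arrival_ns):
    """Peak-to-peak jitter (ns) of a nominally periodic packet stream,
    computed as max minus min of the inter-arrival gaps."""
    gaps = [b - a for a, b in zip(arrival_ns, arrival_ns[1:])]
    return max(gaps) - min(gaps)

# 10 packets/s => nominal 100 ms gap; synthetic arrivals with small wobble
ts = [i * 100_000_000 + d for i, d in enumerate([0, 40, -35, 50, -50, 10])]
print(peak_to_peak_jitter_ns(ts))  # 185
```

A generator sending 20 packets per second with 100ns p-p jitter at the source means any spread beyond that, as measured here, was added by the network (or by the observing host itself).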

In server room 3 there is also the main iSCSI SAN, which is cloned to server room 1 every night, so the test traffic shares a path with the SAN traffic. For this test, all redundant links have been unplugged, pushing SAN and user traffic onto the same 1G links. Flow control is enabled; no jumbo frames are used.

Once the network was in a degraded state and the monitor machine was reasonably synced to the traffic generator, I manually triggered SAN replication (22:40). During replication, average bandwidth usage was approx. 600 Mbps.

Yes, I know that most of these are cheap switches; the upside is that cold spares are also cheap. As a test network this is excellent, since you are unlikely to find anything lower-spec that does 802.1p.

Result #1: (there is a more interesting result #2 below)
Even though I generate priority-tagged frames, the network does not appear to honor the tags. I would expect roughly 20 microseconds of extra jitter on the untagged frames, yet both the tagged and untagged test frames show 100μs p-p jitter.
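As a back-of-the-envelope check on where a figure of that order comes from (my assumption, not a statement of the original reasoning): on a store-and-forward 1G link, even a strictly prioritized frame can have to wait behind one maximum-size frame that is already on the wire, so each congested hop can add roughly one full-frame serialization time of jitter.

```python
# On-wire time of one maximum-size Ethernet frame on a 1 Gbit/s link:
# 1518 B frame + 8 B preamble/SFD + 12 B inter-frame gap = 1538 B.
FRAME_BYTES = 1518 + 8 + 12
LINK_BPS = 1_000_000_000

delay_us = FRAME_BYTES * 8 / LINK_BPS * 1e6
print(f"{delay_us:.3f} us per blocking frame")  # ~12.3 us
```

One or two such blocking frames along the path lands in the 12-25μs range, consistent with an expectation of about 20μs.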
I will move the frame generator one hop (to the first HP1800) and re-test. The HP2824 is in 'qos passthrough optimized' mode, which may be why it effectively ignores my tags.

The jitter on the untagged packets is a bug in the monitoring program, not real.

Result #2:
I now have 5 days' worth of data, and it turns out that the SAN traffic is not the worst offender. The normal user traffic, while lower in volume, is more bursty. Where the voluminous SAN traffic causes about 100μs of delay, the Windows boxes push latency for untagged packets to 1.2ms. Here the priority tagging does help, with only 2 out of 4 million packets exceeding 400μs.
These two packets may be the 'fault' of the observing machine, since its interface has a publicly routable IP address and sits on a VLAN with other publicly accessible machines. The clients that generate the bursty traffic are on a different VLAN. Regardless of the source, tagging traffic is worthwhile. Tagging also seems to introduce 4μs of latency over the untagged packets. The frame generator is set to 10Mbps, so the additional tag should be 3.2μs long; however, I compensated by this amount in the generator, so it should not be the origin of the 4μs. I will put an oscilloscope on the output of the frame generator to verify this.
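The 3.2μs figure is just the serialization time of the 4-byte 802.1Q tag at the generator's line rate; for completeness:

```python
# Extra on-wire time of a 4-byte 802.1Q VLAN tag at 10 Mbit/s.
TAG_BYTES = 4
LINK_BPS = 10_000_000

tag_us = TAG_BYTES * 8 / LINK_BPS * 1e6
print(f"{tag_us:.1f} us")  # 3.2 us
```

Since the generator already compensates for this, the remaining ~0.8μs of the observed 4μs difference is unexplained, hence the oscilloscope check.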