[nSLUG] upstream bandwidth with iptables
nslug at fop.ns.ca
Wed Jun 4 22:13:20 ADT 2003
On Wed, 4 Jun 2003, Ben Armstrong wrote:
> On Tue, Jun 03, 2003 at 09:18:03PM -0300, Dop Ganger wrote:
> > > > lartc = http://www.lartc.org/lartc.html
> > >
> > > My homework this week is to read and understand this completely.
> > Mmmm. The letters "G", "F" and "L" come to mind ;->
> Well, a guy has gotta have goals. Even if utterly unattainable ones. :)
> I have read to page 18 so far, and am enjoying and absorbing most of it.
> We'll see when it comes to design/coding how much of it I retain.
I found the best way to pick it up is to write little scripts as you go
along. There are some parts that, as I recall, don't fully explain things,
and I'd find myself with traffic management that was doing odd things, like
managing traffic in the wrong direction.
> OK, one more crack at this. You said you have no application that produces
> enough UDP for you to measure meaningful results. I mentioned cipe not
> because I wanted to attach a qdisc and filter to it, but as an example of an
> "application" (ciped) that might be used to see how your scheduling policies
> affect large (in fact, arbitrary, since any conventional tcp-based
> application can be used to send data down the tunnel and it all gets
> converted to UDP) amounts of UDP traffic. That is, you'd focus on the
> encapsulating interface (say, eth0) and just shape the port#(s) the VPN is
> carried on via that interface. Then you'd measure the results. That's
> quite a different matter from shaping traffic *within* the VPN, which I
> didn't mean to get into.
OK... now I get you :-) Yes, it'd be handy, particularly to see what the
errors in ifconfig look like.
As a sidenote, iptables can be used to mark packets that are difficult
or impossible to match using the u32 filter; I hit this with a pptp tunnel
the other day, and had to apply QoS to GRE IP packets. After tinkering with
u32 rules and not getting anywhere, I simply added a rule to my firewall
script that set a mark on GRE packets, which was picked up by the filter
using the handle value (and, just to confuse matters further, a handle
is also used to refer to classes in the qdisc command).
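Something along these lines does the trick (the mark value, class ids and
chain choice here are illustrative, not the exact rules from my script):

```shell
# Mark outbound GRE (IP protocol 47) packets in the mangle table.
# Use FORWARD instead of OUTPUT if the tunnel is being routed through
# the box rather than originating on it.
iptables -t mangle -A OUTPUT -p gre -j MARK --set-mark 6

# The fw filter matches on that mark ("handle 6 fw") instead of on
# packet contents, and steers the packets into class 1:6 of the root
# qdisc on eth0.
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 6 fw classid 1:6
```

Note the filter still says "protocol ip" - the outer packet tc sees is an
ordinary IP packet; it's only the payload protocol that u32 has trouble with.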
> What I didn't consider when I suggested this is that it is not only a
> completely contrived example, but it also falls apart in the "measure the
> results" department. Measure what? The streams of UDP that go down the
> tunnel are meaningless to the end user. All the user sees is the effect
> that shaping has on their encapsulated traffic. And if that traffic is TCP,
> as in a file transfer, then the fact that it is encapsulated in UDP is
> invisible to the user. If UDP packets go missing (which is guaranteed to
> happen when shaping occurs) or out of order or are duplicated, the
> application will have to resend. Since we're shaping the UDP "outside" the
> tunnel we don't benefit from tcp-specific shaping rules if we had shaped
> cipe0. In short: the test sucks for real-world measurable results of
> shaping on UDP. :/
You benefit to some extent from shaping the traffic inside the tunnel if
you're prioritising traffic. If you're prioritising ACKs, for example,
it'll increase your throughput, particularly if you're dequeuing packets
back to your internal interface and can do matching QoS on that too.
Shaping the UDP will also have a somewhat beneficial effect if you're
using it to give yourself guaranteed bandwidth; in theory it should cause
the TCP timers to match your bandwidth. Of course, if it's a compressed
tunnel, this can quite often get thrown out of whack for the upper layer,
but in that case the speed increase should be worth it anyway.
> > Well, assuming a "would be nice" framerate of 25 FPS, and 300 bytes per
> > frame, I make 7500 bytes per second; adding 15345 (the xpilot port,
> > according to my /etc/services) to the LOBHIPT and LOBHIPU strings should
> > do the trick quite nicely.
> Ultimately, I'd like to try this at apt.mathstat.dal.ca when under high
> network load, assuming I can get the admins to implement shaping on the box
> itself. This system hosts both a Debian mirror *and* an xpilot server, so
> it is a ready-made real-world example.
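For reference, matching the xpilot port with u32 would look something like
this (class 1:5 and the prio value are made up for the example; 25 frames/s
at 300 bytes each is 7500 bytes/s, or about 60 kbit/s to budget for):

```shell
# Classify UDP (protocol 17) traffic to the xpilot port into class 1:5.
tc filter add dev eth0 parent 1: protocol ip prio 5 u32 \
   match ip protocol 17 0xff \
   match ip dport 15345 0xffff \
   flowid 1:5
```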
Well, in case it helps, here's the tc -s qdisc output from one of our more
heavily used machines:
qdisc sfq 60: dev eth0 quantum 1514b perturb 2sec
Sent 7884556 bytes 87274 pkts (dropped 0, overlimits 0)
qdisc sfq 50: dev eth0 quantum 1514b perturb 10sec
Sent 13431806 bytes 146306 pkts (dropped 0, overlimits 0)
qdisc sfq 30: dev eth0 quantum 1514b perturb 10sec
Sent 10052056090 bytes 11085256 pkts (dropped 1, overlimits 0)
qdisc sfq 20: dev eth0 quantum 1514b perturb 10sec
Sent 13711085 bytes 137053 pkts (dropped 0, overlimits 0)
qdisc sfq 10: dev eth0 quantum 1514b perturb 10sec
Sent 9497776 bytes 70041 pkts (dropped 0, overlimits 0)
qdisc sfq 6: dev eth0 quantum 1514b perturb 10sec
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc sfq 5: dev eth0 quantum 1514b perturb 5sec
Sent 65790064 bytes 1209884 pkts (dropped 0, overlimits 0)
qdisc sfq 2: dev eth0 quantum 1514b perturb 10sec
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc cbq 1: dev eth0 rate 10Mbit (bounded,isolated) prio no-transmit
Sent 10163129196 bytes 12751175 pkts (dropped 1, overlimits 5758314)
borrowed 0 overactions 0 avgidle 624 undertime 0
This is about a day's worth of traffic, as I was tinkering with the
script yesterday. Queue 2 is the "bad traffic" queue for anything that
manages to slip through the filters; this gives a 3 kilobit link for
anyone trying to do anything bad (3 kilobits per second works out to
roughly 300 bytes per second - any less and quantising problems start
arising).
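The skeleton of the setup is roughly as follows - only the handles, the
10Mbit bounded/isolated root, the 3 kilobit rate for the bad-traffic class
and the perturb values come from the output above; the allot/avpkt figures
and the 1Mbit leaf rate are placeholder examples:

```shell
DEV=eth0

# Root cbq qdisc plus a bounded, isolated top class holding the full 10Mbit.
tc qdisc add dev $DEV root handle 1: cbq bandwidth 10Mbit avpkt 1000 cell 8
tc class add dev $DEV parent 1: classid 1:1 cbq bandwidth 10Mbit \
   rate 10Mbit allot 1514 prio 5 bounded isolated avpkt 1000

# The "bad traffic" class: 3kbit, lowest priority, with an sfq leaf
# so that whatever lands here still shares the trickle fairly.
tc class add dev $DEV parent 1:1 classid 1:2 cbq bandwidth 10Mbit \
   rate 3kbit allot 1514 prio 8 avpkt 1000
tc qdisc add dev $DEV parent 1:2 handle 2: sfq perturb 10

# One of the ordinary traffic classes, again with an sfq leaf.
tc class add dev $DEV parent 1:1 classid 1:5 cbq bandwidth 10Mbit \
   rate 1Mbit allot 1514 prio 2 avpkt 1000
tc qdisc add dev $DEV parent 1:5 handle 5: sfq perturb 5
```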
> > Hmmm... How about QSS, short for "Meaningless TLA Acronym"?
> Que Sera Sera
> Quality of Service Scripts
> QSS is Shaping Simplified
> Quality of Service Simplified
> Quacks Scripting Stuff
> Quadrangular Stake Stuffers (a square peg into a round hole?)
Yep, that all sounds good to me ;->