SFB(8)                               Linux                              SFB(8)
2
3
4
NAME
       sfb - Stochastic Fair Blue

SYNOPSIS
       tc qdisc ... sfb rehash milliseconds db milliseconds limit packets max
       packets target packets increment float decrement float penalty_rate
       packets per second penalty_burst packets

DESCRIPTION
       Stochastic Fair Blue is a classless qdisc to manage congestion based
       on packet loss and link utilization history, while trying to prevent
       non-responsive flows (i.e. flows that do not react to congestion
       marking or dropped packets) from impacting the performance of
       responsive flows.  Unlike RED, where the marking probability has to
       be configured, BLUE tries to determine the ideal marking probability
       automatically.

ALGORITHM
       The BLUE algorithm maintains a probability which is used to mark or
       drop packets that are to be queued.  If the queue overflows, the
       mark/drop probability is increased; if the queue becomes empty, it is
       decreased.  The Stochastic Fair Blue (SFB) algorithm is designed to
       protect TCP flows against non-responsive flows.

       This SFB implementation maintains 8 levels of 16 bins each for
       accounting.  Each flow is mapped into one bin of each level using a
       per-level hash value.

       Every bin maintains a marking probability, which is increased or
       decreased based on bin occupancy.  If the number of packets exceeds
       the size of that bin, the marking probability is increased; if the
       number drops to zero, it is decreased.

       The marking probability is based on the minimum value over all bins a
       flow is mapped into; thus, when a flow does not respond to marking or
       gradual packet drops, the marking probability quickly reaches one.

       In this case, the flow is rate-limited to penalty_rate packets per
       second.

LIMITATIONS
       Due to SFB's nature, it is possible for a responsive flow to share
       all of its bins with a non-responsive flow, causing the responsive
       flow to be misidentified as being non-responsive.

       The probability of a responsive flow being misidentified depends on
       the number of non-responsive flows, M.  It is
       (1 - (1 - (1 / 16.0)) ** M) ** 8, so, for example, with 10
       non-responsive flows roughly 0.26% of responsive flows will be
       misidentified.
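       The figure above can be reproduced with a one-line awk calculation
       (the choice of M = 10 and the printed precision are illustrative):

```shell
# Misidentification probability for M non-responsive flows, given
# 8 hash levels of 16 bins each: (1 - (1 - 1/16)^M)^8.
# M = 10 matches the example in the text.
awk 'BEGIN { M = 10; printf "%.4f\n", (1 - (1 - 1/16)^M)^8 }'
```

       This prints 0.0026, i.e. about 0.26% of responsive flows.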

       To mitigate this, SFB performs periodic re-hashing to avoid
       misclassification for prolonged periods of time.

       The default hashing method uses source and destination IP addresses
       and port numbers if possible, and also supports tunneling protocols.
       Alternatively, an external classifier can be configured.

PARAMETERS
       rehash Time interval, in milliseconds, between queue perturbations,
              performed to avoid erroneously detecting unrelated, responsive
              flows as being part of a non-responsive flow for prolonged
              periods of time.  Defaults to 10 minutes.

       db     Double buffering warmup wait time, in milliseconds.  To avoid
              destroying the probability history when rehashing is
              performed, this implementation maintains a second set of
              levels/bins as described in section 4.4 of the SFB reference.
              While one set is used to manage the queue, the second set is
              warmed up: whenever a flow is determined to be non-responsive,
              the marking probabilities in the second set are updated.  When
              the rehashing happens, these bins are used to manage the queue
              and all non-responsive flows can be rate-limited immediately.
              This value determines how much time has to pass before the
              second set starts to be warmed up.  Defaults to one minute;
              should be lower than rehash.

       limit  Hard limit on the real (not average) total queue size, in
              packets.  Further packets are dropped.  Defaults to the
              transmit queue length of the device the qdisc is attached to.

       max    Maximum length of a bucket's queue, in packets, before packets
              start being dropped.  Should be slightly larger than target,
              but should not exceed 1.5 times target.  Defaults to 25.

       target The desired average bin length.  If the bin queue length
              reaches this value, the marking probability is increased by
              increment.  The default value depends on the max setting; with
              max set to 25, target defaults to 20.

       increment
              Value used to increase the marking probability when the queue
              appears to be over-used.  Must be between 0 and 1.0.  Defaults
              to 0.00050.

       decrement
              Value used to decrease the marking probability when the queue
              is found to be empty.  Must be between 0 and 1.0.  Defaults to
              0.00005.

       penalty_rate
              The maximum number of packets belonging to flows identified as
              non-responsive that can be enqueued per second.  Once this
              number has been reached, further packets of such non-
              responsive flows are dropped.  Set this to a reasonable
              fraction of your uplink throughput; the default value of 10
              packets is probably too small.

       penalty_burst
              The number of packets by which a flow is permitted to exceed
              the penalty rate before packets start being dropped.  Defaults
              to 20 packets.
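
       For instance, to raise the penalty rate and burst above their small
       defaults (the device name and numeric values below are illustrative
       only, not recommendations):

       # tc qdisc add dev eth0 root sfb penalty_rate 3000 penalty_burst 100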

STATISTICS
       This qdisc exposes additional statistics via 'tc -s qdisc' output.
       These are:

       earlydrop
              The number of packets dropped before a per-flow queue was
              full.

       ratedrop
              The number of packets dropped because of rate-limiting.  If
              this value is high, there are many non-reactive flows being
              sent through sfb.  In such cases, it might be better to embed
              sfb within a classful qdisc to better control such flows using
              a different, shaping qdisc.

       bucketdrop
              The number of packets dropped because a per-flow queue was
              full.  A high bucketdrop count may point to a large number of
              aggressive, short-lived flows.

       queuedrop
              The number of packets dropped due to reaching limit.  This
              should normally be 0.

       marked The number of packets marked with ECN.

       maxqlen
              The length of the current longest per-flow (virtual) queue.

       maxprob
              The maximum per-flow drop probability.  A value of 1 means
              that some flows have been detected as non-reactive.

NOTES
       SFB automatically enables use of Explicit Congestion Notification
       (ECN).  Also, this SFB implementation does not queue packets itself.
       Rather, packets are enqueued to the inner qdisc (which defaults to
       pfifo).  Because sfb maintains virtual queue states, the inner qdisc
       must not drop a packet previously queued.  Furthermore, if a bucket's
       queue has a very high marking rate, this implementation will start
       dropping packets instead of marking them, as such a situation points
       to either bad congestion or an unresponsive flow.

EXAMPLES
       To attach to interface $DEV, using default options:

       # tc qdisc add dev $DEV handle 1: root sfb

       Only use destination IP addresses for assigning packets to bins,
       perturbing hash results every 10 minutes:

       # tc filter add dev $DEV parent 1: handle 1 flow hash keys dst perturb 600

SEE ALSO
       tc(8), tc-red(8), tc-sfq(8)

SOURCES
       o      W. Feng, D. Kandlur, D. Saha, K. Shin, BLUE: A New Class of
              Active Queue Management Algorithms, U. Michigan CSE-TR-387-99,
              April 1999.

AUTHORS
       This SFB implementation was contributed by Juliusz Chroboczek and
       Eric Dumazet.


iproute2                          August 2011                          SFB(8)