TC(8)                                Linux                               TC(8)

NAME
       sfq - Stochastic Fairness Queueing

SYNOPSIS
       tc qdisc ...  [ divisor hashtablesize ] [ limit packets ]
       [ perturb seconds ] [ quantum bytes ] [ flows number ] [ depth number ]
       [ headdrop ] [ redflowlimit bytes ] [ min bytes ] [ max bytes ]
       [ avpkt bytes ] [ burst packets ] [ probability P ] [ ecn ] [ harddrop ]

DESCRIPTION
       Stochastic Fairness Queueing is a classless queueing discipline
       available for traffic control with the tc(8) command.

       SFQ does not shape traffic but only schedules the transmission of
       packets, based on 'flows'.  The goal is to ensure fairness so that
       each flow is able to send data in turn, thus preventing any single
       flow from drowning out the rest.

       This may in fact have some effect in mitigating a Denial of Service
       attempt.

       SFQ is work-conserving and therefore always delivers a packet if it
       has one available.

ALGORITHM
       On enqueueing, each packet is assigned to a hash bucket, based on the
       packet's hash value.  This hash value is either obtained from an
       external flow classifier (use tc filter to set them), or a default
       internal classifier if no external classifier has been configured.

       When the internal classifier is used, sfq uses

       (i)    Source address

       (ii)   Destination address

       (iii)  Source and Destination port

       if these are available.  SFQ knows about ipv4 and ipv6 and also UDP,
       TCP and ESP.  Packets with other protocols are hashed based on the
       32-bit representation of their destination and source.  A flow
       corresponds mostly to a TCP/IP connection.

       Each of these buckets should represent a unique flow.  Because
       multiple flows may get hashed to the same bucket, sfq's internal
       hashing algorithm may be perturbed at configurable intervals so that
       the unfairness lasts only for a short while.  Perturbation may,
       however, cause some inadvertent packet reordering to occur.  After
       linux-3.3, there is no packet reordering problem, but packets may be
       dropped if rehashing hits one of the limits (number of flows or
       packets per flow).

       When dequeuing, each hashbucket with data is queried in a round robin
       fashion.

       Before linux-3.3, the compile time maximum length of the SFQ is 128
       packets, which can be spread over at most 128 buckets of the 1024
       available.  In case of overflow, tail-drop is performed on the fullest
       bucket, thus maintaining fairness.

       After linux-3.3, the maximum length of the SFQ is 65535 packets, and
       the divisor limit is 65536.  In case of overflow, tail-drop is
       performed on the fullest bucket, unless headdrop was requested.
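
       Whether drops from bucket overflow are occurring can be checked with
       the statistics option of tc; the device name eth0 below is only an
       example:

       # tc -s qdisc show dev eth0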

PARAMETERS
       divisor
              Can be used to set a different hash table size, available from
              kernel 2.6.39 onwards.  The specified divisor must be a power
              of two and cannot be larger than 65536.  Default value: 1024.

       limit  Upper limit of the SFQ.  Can be used to reduce the default
              length of 127 packets.  After linux-3.3, it can be raised.

       depth  Limit of packets per flow (after linux-3.3).  Defaults to 127
              and can be lowered.

       perturb
              Interval in seconds for queue algorithm perturbation.  Defaults
              to 0, which means that no perturbation occurs.  Do not set it
              too low, as each perturbation may cause some packet reordering
              or losses.  Advised value: 60.  This value has no effect when
              external flow classification is used.  It is better to increase
              the divisor value to lower the risk of hash collisions.
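
              For example, to rehash every 60 seconds on an interface that is
              here assumed to be eth0:

              # tc qdisc add dev eth0 root sfq perturb 60
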
       quantum
              Amount of bytes a flow is allowed to dequeue during a round of
              the round robin process.  Defaults to the MTU of the interface,
              which is also the advised value and the minimum value.
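
              As an illustration (eth0 and the value are assumptions, not
              defaults), a quantum of 3000 bytes lets each flow send roughly
              two full-size ethernet frames per round:

              # tc qdisc add dev eth0 root sfq quantum 3000
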
       flows  After linux-3.3, it is possible to change the default limit of
              flows.  Default value is 127.

       headdrop
              Default SFQ behavior is to perform tail-drop of packets from a
              flow.  You can request headdrop instead, as this is known to
              provide better feedback for TCP flows.
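
              A sketch combining these post linux-3.3 parameters; the device
              name and the numbers are purely illustrative:

              # tc qdisc add dev eth0 root sfq limit 3000 flows 256 \
                    depth 64 headdrop
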
       redflowlimit
              Configure the optional RED module on top of each SFQ flow.
              Random Early Detection principle is to perform packet marks or
              drops in a probabilistic way (man tc-red for details about
              RED).  redflowlimit configures the hard limit on the real (not
              average) queue size per SFQ flow, in bytes.

       min    Average queue size at which marking becomes a possibility.
              Defaults to max /3.

       max    At this average queue size, the marking probability is maximal.
              Defaults to redflowlimit /4.

       probability
              Maximum probability for marking, specified as a floating point
              number from 0.0 to 1.0.  Default value is 0.02.

       avpkt  Specified in bytes.  Used with burst to determine the time
              constant for average queue size calculations.  Default value is
              1000.

       burst  Used for determining how fast the average queue size is
              influenced by the real queue size.
              Default value is:
              (2 * min + max) / (3 * avpkt)

       ecn    RED can either 'mark' or 'drop'.  Explicit Congestion
              Notification allows RED to notify remote hosts that their rate
              exceeds the amount of bandwidth available.  Non-ECN capable
              hosts can only be notified by dropping a packet.  If this
              parameter is specified, packets which indicate that their hosts
              honor ECN will only be marked and not dropped, unless the queue
              size hits depth packets.

       harddrop
              If average flow queue size is above max bytes, this parameter
              forces a drop instead of ecn marking.

EXAMPLE & USAGE
       To attach to device ppp0:

       # tc qdisc add dev ppp0 root sfq

       Please note that SFQ, like all non-shaping (work-conserving) qdiscs,
       is only useful if it owns the queue.  This is the case when the link
       speed equals the actually available bandwidth.  This holds for regular
       phone modems, ISDN connections and direct non-switched ethernet links.

       Most often, cable modems and DSL devices do not fall into this
       category.  The same holds when you are connected to a switch and
       trying to send data to a congested segment that is also connected to
       the switch.

       In this case, the effective queue does not reside within Linux and is
       therefore not available for scheduling.

       Embed SFQ in a classful qdisc to make sure it owns the queue.
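
       One possible way to do so is sketched below; the device name eth0, the
       htb handles and the 800kbit rate are placeholders, not recommendations:

       # tc qdisc add dev eth0 root handle 1: htb default 10
       # tc class add dev eth0 parent 1: classid 1:10 htb rate 800kbit
       # tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
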
       It is possible to use external classifiers with sfq, for example to
       hash traffic based only on source/destination ip addresses:

       # tc filter add ... flow hash keys src,dst perturb 30 divisor 1024

       Note that the given divisor should match the one used by sfq.  If you
       have changed the sfq default of 1024, use the same value for the flow
       hash filter, too.
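
       A possible pairing of the two commands, with eth0 and the divisor
       value 2048 chosen purely for illustration:

       # tc qdisc add dev eth0 root handle 1: sfq divisor 2048
       # tc filter add dev eth0 parent 1: protocol ip prio 1 \
             flow hash keys src,dst perturb 30 divisor 2048
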
       Example of sfq with optional RED mode:

       # tc qdisc add dev eth0 parent 1:1 handle 10: sfq limit 3000 flows 512 \
             divisor 16384 redflowlimit 100000 min 8000 max 60000 \
             probability 0.20 ecn headdrop

SOURCE
       o      Paul E. McKenney "Stochastic Fairness Queuing", IEEE
              INFOCOM '90 Proceedings, San Francisco, 1990.

       o      Paul E. McKenney "Stochastic Fairness Queuing", "Interworking:
              Research and Experience", v.2, 1991, p.113-131.

       o      See also: M. Shreedhar and George Varghese "Efficient Fair
              Queuing using Deficit Round Robin", Proc. SIGCOMM 95.

SEE ALSO
       tc(8), tc-red(8)

AUTHORS
       Alexey N. Kuznetsov, <kuznet@ms2.inr.ac.ru>, Eric Dumazet
       <eric.dumazet@gmail.com>.

       This manpage maintained by bert hubert <ahu@ds9a.nl>



iproute2                        24 January 2012                          TC(8)