TEAMD.CONF(5)              Team daemon configuration             TEAMD.CONF(5)

NAME
       teamd.conf — libteam daemon configuration file

DESCRIPTION
       teamd uses a JSON format configuration file.

OPTIONS
       device (string)
              Desired name of the new team device.

       debug_level (int)
              Level of debug messages. The higher it is, the more debug
              messages will be printed. It is the same as adding the "-g"
              command line option.

              Default: 0 (disabled)

       hwaddr (string)
              Desired hardware address of the new team device. The usual
              MAC address format is accepted.

       runner.name (string)
              Name of the team device runner to be used. The following
              runners are available:
28
              broadcast — Simple runner which directs the team device to
              transmit packets via all ports.

              roundrobin — Simple runner which directs the team device to
              transmit packets in a round-robin fashion.

              activebackup — Watches for link changes and selects an active
              port to be used for data transfers.

              loadbalance — To do passive load balancing, the runner only
              sets up a BPF hash function which determines the port used
              for packet transmission. To do active load balancing, the
              runner moves hashes among available ports, trying to reach a
              perfect balance.

              lacp — Implements the 802.3ad LACP protocol. Can use the same
              Tx port selection possibilities as the loadbalance runner.
45
       notify_peers.count (int)
              Number of bursts of unsolicited NAs and gratuitous ARP
              packets sent after a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       notify_peers.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of notify-peer packets.

              Default: 0

       mcast_rejoin.count (int)
              Number of bursts of multicast group rejoin requests sent
              after a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       mcast_rejoin.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of multicast group rejoin requests.

              Default: 0

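       For example, both option groups can be combined in a fragment like
       the following (the values shown are illustrative, not recommended
       settings):

       {
           "device": "team0",
           "runner": {"name": "activebackup"},
           "notify_peers": {"count": 3, "interval": 100},
           "mcast_rejoin": {"count": 3, "interval": 100},
           "ports": {"eth1": {}, "eth2": {}}
       }
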
       link_watch.name | ports.PORTIFNAME.link_watch.name (string)
              Name of the link watcher to be used. The following link
              watchers are available:

              ethtool — Uses the libteam library to get port ethtool state
              changes.

              arp_ping — ARP requests are sent through a port. If an ARP
              reply is received, the link is considered to be up.

              nsna_ping — Similar to the previous one, except that it uses
              the IPv6 Neighbor Solicitation / Neighbor Advertisement
              mechanism. This is an alternative to arp_ping and comes in
              handy in pure-IPv6 environments.

       ports (object)
              List of ports (network devices) to be used in the team
              device.

              See examples for more information.

       ports.PORTIFNAME.queue_id (int)
              ID of the queue to which this port should be mapped.

              Default: None

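       For example, a fragment mapping port eth1 to queue 1 might look
       like this (port and queue values are illustrative):

       {
           "device": "team0",
           "runner": {"name": "roundrobin"},
           "ports": {
               "eth1": {"queue_id": 1},
               "eth2": {}
           }
       }
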
ACTIVEBACKUP RUNNER SPECIFIC OPTIONS
       runner.hwaddr_policy (string)
              This defines the policy for how hardware addresses of the
              team device and port devices should be set during the team
              lifetime. The following are available:

              same_all — All ports will always have the same hardware
              address as the associated team device.

              by_active — The team device adopts the hardware address of
              the currently active port. This is useful when the port
              device is not able to change its hardware address.

              only_active — Only the active port adopts the hardware
              address of the team device. The others have their own.

              Default: same_all

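       For instance, a fragment selecting the by_active policy could look
       as follows (port names are illustrative):

       {
           "device": "team0",
           "runner": {
               "name": "activebackup",
               "hwaddr_policy": "by_active"
           },
           "ports": {"eth1": {}, "eth2": {}}
       }
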
       ports.PORTIFNAME.prio (int)
              Port priority. A higher number means higher priority.

              Default: 0

       ports.PORTIFNAME.sticky (bool)
              Flag which indicates whether the port is sticky. If set, the
              port does not get unselected when another port with higher
              priority or better parameters becomes available.

              Default: false

LOADBALANCE RUNNER SPECIFIC OPTIONS
       runner.tx_hash (array)
              List of fragment types (strings) which should be used for
              packet Tx hash computation. The following are available:

              eth — Uses source and destination MAC addresses.

              vlan — Uses VLAN id.

              ipv4 — Uses source and destination IPv4 addresses.

              ipv6 — Uses source and destination IPv6 addresses.

              ip — Uses source and destination IPv4 and IPv6 addresses.

              l3 — Uses source and destination IPv4 and IPv6 addresses.

              tcp — Uses source and destination TCP ports.

              udp — Uses source and destination UDP ports.

              sctp — Uses source and destination SCTP ports.

              l4 — Uses source and destination TCP, UDP and SCTP ports.

       runner.tx_balancer.name (string)
              Name of the active Tx balancer. Active Tx balancing is
              disabled by default. The only available value is basic.

              Default: None

       runner.tx_balancer.balancing_interval (int)
              In tenths of a second. Periodic interval between rebalancing.

              Default: 50

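       For example, a fragment enabling the basic balancer with a
       10-second rebalancing period (100 tenths of a second; the values
       are illustrative):

       {
           "device": "team0",
           "runner": {
               "name": "loadbalance",
               "tx_hash": ["eth", "ip"],
               "tx_balancer": {
                   "name": "basic",
                   "balancing_interval": 100
               }
           },
           "ports": {"eth1": {}, "eth2": {}}
       }
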
LACP RUNNER SPECIFIC OPTIONS
       runner.active (bool)
              If active is true, LACPDU frames are sent along the
              configured links periodically. If not, it acts as "speak when
              spoken to".

              Default: true

       runner.fast_rate (bool)
              Option specifies the rate at which our link partner is asked
              to transmit LACPDU packets. If this is true, packets will be
              sent once per second. Otherwise they will be sent every 30
              seconds.

       runner.tx_hash (array)
              Same as for the loadbalance runner.

       runner.tx_balancer.name (string)
              Same as for the loadbalance runner.

       runner.tx_balancer.balancing_interval (int)
              Same as for the loadbalance runner.

       runner.sys_prio (int)
              System priority. The value can be 0 – 65535.

              Default: 255

       runner.min_ports (int)
              Specifies the minimum number of ports that must be active
              before asserting carrier in the master interface. The value
              can be 1 – 255.

              Default: 0

       runner.agg_select_policy (string)
              This selects the policy for how the aggregators will be
              selected. The following are available:

              lacp_prio — The aggregator with the highest priority
              according to the LACP standard will be selected. Aggregator
              priority is affected by the per-port option lacp_prio.

              lacp_prio_stable — Same as the previous one, except the
              selected aggregator is not replaced if it is still usable.

              bandwidth — Select the aggregator with the highest total
              bandwidth.

              count — Select the aggregator with the highest number of
              ports.

              port_config — The aggregator with the highest priority
              according to the per-port options prio and sticky will be
              selected. This means that the aggregator containing the port
              with the highest priority will be selected, unless at least
              one of the ports in the currently selected aggregator is
              sticky.

              Default: lacp_prio

       ports.PORTIFNAME.lacp_prio (int)
              Port priority according to the LACP standard. A lower number
              means higher priority.

       ports.PORTIFNAME.lacp_key (int)
              Port key according to the LACP standard. It is only possible
              to aggregate ports with the same key.

              Default: 0

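       For example, a fragment combining several of these runner and
       per-port options might look as follows (all values are
       illustrative, not recommended settings):

       {
           "device": "team0",
           "runner": {
               "name": "lacp",
               "active": true,
               "fast_rate": true,
               "sys_prio": 65535,
               "min_ports": 2,
               "agg_select_policy": "bandwidth",
               "tx_hash": ["eth", "ipv4", "ipv6"]
           },
           "ports": {
               "eth1": {"lacp_prio": 10, "lacp_key": 1},
               "eth2": {"lacp_prio": 20, "lacp_key": 1}
           }
       }
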
ETHTOOL LINK WATCH SPECIFIC OPTIONS
       link_watch.delay_up | ports.PORTIFNAME.link_watch.delay_up (int)
              Value is a positive number in milliseconds. It is the delay
              between the link coming up and the runner being notified
              about it.

              Default: 0

       link_watch.delay_down | ports.PORTIFNAME.link_watch.delay_down (int)
              Value is a positive number in milliseconds. It is the delay
              between the link going down and the runner being notified
              about it.

              Default: 0

ARP PING LINK WATCH SPECIFIC OPTIONS
       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the
              interval between ARP requests being sent.

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first ARP request
              being sent.

              Default: 0

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed ARP replies. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.source_host | ports.PORTIFNAME.link_watch.source_host
       (hostname)
              Hostname to be converted to an IP address which will be
              filled into the ARP request as the source address.

              Default: 0.0.0.0

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host
       (hostname)
              Hostname to be converted to an IP address which will be
              filled into the ARP request as the destination address.

       link_watch.validate_active | ports.PORTIFNAME.link_watch.validate_active
       (bool)
              Validate received ARP packets on active ports. If this is not
              set, all incoming ARP packets will be considered as a good
              reply.

              Default: false

       link_watch.validate_inactive | ports.PORTIFNAME.link_watch.validate_inactive
       (bool)
              Validate received ARP packets on inactive ports. If this is
              not set, all incoming ARP packets will be considered as a
              good reply.

              Default: false

       link_watch.send_always | ports.PORTIFNAME.link_watch.send_always (bool)
              By default, ARP requests are sent on active ports only. This
              option allows sending them even on inactive ports.

              Default: false

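       For example, a fragment using the arp_ping watcher with explicit
       source and target addresses and reply validation might look as
       follows (addresses and timing values are illustrative):

       {
           "device": "team0",
           "runner": {"name": "activebackup"},
           "link_watch": {
               "name": "arp_ping",
               "interval": 100,
               "missed_max": 30,
               "source_host": "192.168.23.2",
               "target_host": "192.168.23.1",
               "validate_active": true,
               "validate_inactive": true
           },
           "ports": {"eth1": {}, "eth2": {}}
       }
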
NSNA PING LINK WATCH SPECIFIC OPTIONS
       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the
              interval between sending NS packets.

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first NS packet
              being sent.

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed NA reply packets. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host
       (hostname)
              Hostname to be converted to an IPv6 address which will be
              filled into the NS packet as the target address.

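       For example, a fragment using the nsna_ping watcher might look as
       follows (the target address and timing values are illustrative):

       {
           "device": "team0",
           "runner": {"name": "activebackup"},
           "link_watch": {
               "name": "nsna_ping",
               "interval": 200,
               "missed_max": 15,
               "target_host": "fe80::1"
           },
           "ports": {"eth1": {}, "eth2": {}}
       }
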
EXAMPLES
       {
           "device": "team0",
           "runner": {"name": "roundrobin"},
           "ports": {"eth1": {}, "eth2": {}}
       }

       Very basic configuration.

       {
           "device": "team0",
           "runner": {"name": "activebackup"},
           "link_watch": {"name": "ethtool"},
           "ports": {
               "eth1": {
                   "prio": -10,
                   "sticky": true
               },
               "eth2": {
                   "prio": 100
               }
           }
       }

       This configuration uses the active-backup runner with the ethtool
       link watcher. Port eth2 has higher priority, but the sticky flag
       ensures that if eth1 becomes active, it stays active while the link
       remains up.

       {
           "device": "team0",
           "runner": {"name": "activebackup"},
           "link_watch": {
               "name": "ethtool",
               "delay_up": 2500,
               "delay_down": 1000
           },
           "ports": {
               "eth1": {
                   "prio": -10,
                   "sticky": true
               },
               "eth2": {
                   "prio": 100
               }
           }
       }

       Similar to the previous one. The only difference is that link
       changes are not propagated to the runner immediately; the configured
       delays are applied first.

       {
           "device": "team0",
           "runner": {"name": "activebackup"},
           "link_watch": {
               "name": "arp_ping",
               "interval": 100,
               "missed_max": 30,
               "target_host": "192.168.23.1"
           },
           "ports": {
               "eth1": {
                   "prio": -10,
                   "sticky": true
               },
               "eth2": {
                   "prio": 100
               }
           }
       }

       This configuration uses the ARP ping link watcher.

       {
           "device": "team0",
           "runner": {"name": "activebackup"},
           "link_watch": [
               {
                   "name": "arp_ping",
                   "interval": 100,
                   "missed_max": 30,
                   "target_host": "192.168.23.1"
               },
               {
                   "name": "arp_ping",
                   "interval": 50,
                   "missed_max": 20,
                   "target_host": "192.168.24.1"
               }
           ],
           "ports": {
               "eth1": {
                   "prio": -10,
                   "sticky": true
               },
               "eth2": {
                   "prio": 100
               }
           }
       }

       Similar to the previous one, only this time two link watchers are
       used at the same time.

       {
           "device": "team0",
           "runner": {
               "name": "loadbalance",
               "tx_hash": ["eth", "ipv4", "ipv6"]
           },
           "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for hash-based passive Tx load balancing.

       {
           "device": "team0",
           "runner": {
               "name": "loadbalance",
               "tx_hash": ["eth", "ipv4", "ipv6"],
               "tx_balancer": {
                   "name": "basic"
               }
           },
           "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for active Tx load balancing using the basic load
       balancer.

       {
           "device": "team0",
           "runner": {
               "name": "lacp",
               "active": true,
               "fast_rate": true,
               "tx_hash": ["eth", "ipv4", "ipv6"]
           },
           "link_watch": {"name": "ethtool"},
           "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for a connection to a LACP-capable counterpart.

SEE ALSO
       teamd(8), teamdctl(8), teamnl(8), bond2team(1)

AUTHOR
       Jiri Pirko is the original author and current maintainer of libteam.



libteam                           2013-07-09                     TEAMD.CONF(5)