ovn-architecture(7)               OVN Manual              ovn-architecture(7)



NAME
       ovn-architecture - Open Virtual Network architecture

DESCRIPTION
       OVN, the Open Virtual Network, is a system to support logical network
       abstraction in virtual machine and container environments. OVN comple‐
       ments the existing capabilities of OVS to add native support for logi‐
       cal network abstractions, such as logical L2 and L3 overlays and secu‐
       rity groups. Services such as DHCP are also desirable features. Just
       like OVS, OVN’s design goal is to have a production-quality implemen‐
       tation that can operate at significant scale.

       A physical network comprises physical wires, switches, and routers. A
       virtual network extends a physical network into a hypervisor or con‐
       tainer platform, bridging VMs or containers into the physical network.
       An OVN logical network is a network implemented in software that is
       insulated from physical (and thus virtual) networks by tunnels or
       other encapsulations. This allows IP and other address spaces used in
       logical networks to overlap with those used on physical networks with‐
       out causing conflicts. Logical network topologies can be arranged
       without regard for the topologies of the physical networks on which
       they run. Thus, VMs that are part of a logical network can migrate
       from one physical machine to another without network disruption. See
       Logical Networks, below, for more information.

       The encapsulation layer prevents VMs and containers connected to a
       logical network from communicating with nodes on physical networks.
       For clustering VMs and containers, this can be acceptable or even
       desirable, but in many cases VMs and containers do need connectivity
       to physical networks. OVN provides multiple forms of gateways for this
       purpose. See Gateways, below, for more information.

       An OVN deployment consists of several components:

       ·   A Cloud Management System (CMS), which is OVN’s ultimate
           client (via its users and administrators). OVN integration
           requires installing a CMS-specific plugin and related
           software (see below). OVN initially targets OpenStack as
           CMS.

           We generally speak of ``the’’ CMS, but one can imagine
           scenarios in which multiple CMSes manage different parts of
           an OVN deployment.

       ·   An OVN Database physical or virtual node (or, eventually,
           cluster) installed in a central location.

       ·   One or more (usually many) hypervisors. Hypervisors must run
           Open vSwitch and implement the interface described in
           Documentation/topics/integration.rst in the OVN source tree.
           Any hypervisor platform supported by Open vSwitch is
           acceptable.

       ·   Zero or more gateways. A gateway extends a tunnel-based
           logical network into a physical network by bidirectionally
           forwarding packets between tunnels and a physical Ethernet
           port. This allows non-virtualized machines to participate in
           logical networks. A gateway may be a physical host, a
           virtual machine, or an ASIC-based hardware switch that
           supports the vtep(5) schema.

           Hypervisors and gateways are together called transport nodes
           or chassis.

       The diagram below shows how the major components of OVN and related
       software interact. Starting at the top of the diagram, we have:

       ·   The Cloud Management System, as defined above.

       ·   The OVN/CMS Plugin is the component of the CMS that
           interfaces to OVN. In OpenStack, this is a Neutron plugin.
           The plugin’s main purpose is to translate the CMS’s notion
           of logical network configuration, stored in the CMS’s
           configuration database in a CMS-specific format, into an
           intermediate representation understood by OVN.

           This component is necessarily CMS-specific, so a new plugin
           needs to be developed for each CMS that is integrated with
           OVN. All of the components below this one in the diagram are
           CMS-independent.

       ·   The OVN Northbound Database receives the intermediate
           representation of logical network configuration passed down
           by the OVN/CMS Plugin. The database schema is meant to be
           ``impedance matched’’ with the concepts used in a CMS, so
           that it directly supports notions of logical switches,
           routers, ACLs, and so on. See ovn-nb(5) for details.

           The OVN Northbound Database has only two clients: the
           OVN/CMS Plugin above it and ovn-northd below it.

       ·   ovn-northd(8) connects to the OVN Northbound Database above
           it and the OVN Southbound Database below it. It translates
           the logical network configuration in terms of conventional
           network concepts, taken from the OVN Northbound Database,
           into logical datapath flows in the OVN Southbound Database
           below it.

       ·   The OVN Southbound Database is the center of the system. Its
           clients are ovn-northd(8) above it and ovn-controller(8) on
           every transport node below it.

           The OVN Southbound Database contains three kinds of data:
           Physical Network (PN) tables that specify how to reach
           hypervisor and other nodes, Logical Network (LN) tables that
           describe the logical network in terms of ``logical datapath
           flows,’’ and Binding tables that link logical network
           components’ locations to the physical network. The
           hypervisors populate the PN and Port_Binding tables, whereas
           ovn-northd(8) populates the LN tables.

           OVN Southbound Database performance must scale with the
           number of transport nodes. This will likely require some
           work on ovsdb-server(1) as we encounter bottlenecks.
           Clustering for availability may be needed.

       The remaining components are replicated onto each hypervisor:

       ·   ovn-controller(8) is OVN’s agent on each hypervisor and
           software gateway. Northbound, it connects to the OVN
           Southbound Database to learn about OVN configuration and
           status and to populate the PN table and the Chassis column
           in the Binding table with the hypervisor’s status.
           Southbound, it connects to ovs-vswitchd(8) as an OpenFlow
           controller, for control over network traffic, and to the
           local ovsdb-server(1) to allow it to monitor and control
           Open vSwitch configuration.

       ·   ovs-vswitchd(8) and ovsdb-server(1) are conventional
           components of Open vSwitch.

                                         CMS
                                          |
                                          |
                              +-----------|-----------+
                              |           |           |
                              |     OVN/CMS Plugin    |
                              |           |           |
                              |           |           |
                              |   OVN Northbound DB   |
                              |           |           |
                              |           |           |
                              |       ovn-northd      |
                              |           |           |
                              +-----------|-----------+
                                          |
                                          |
                                +-------------------+
                                | OVN Southbound DB |
                                +-------------------+
                                          |
                                          |
                       +------------------+------------------+
                       |                  |                  |
         HV 1          |                  |   HV n           |
       +---------------|---------------+  .  +---------------|---------------+
       |               |               |  .  |               |               |
       |        ovn-controller         |  .  |        ovn-controller         |
       |         |          |          |  .  |         |          |          |
       |         |          |          |     |         |          |          |
       |  ovs-vswitchd   ovsdb-server  |     |  ovs-vswitchd   ovsdb-server  |
       |                               |     |                               |
       +-------------------------------+     +-------------------------------+

   Information Flow in OVN
       Configuration data in OVN flows from north to south. The CMS, through
       its OVN/CMS plugin, passes the logical network configuration to
       ovn-northd via the northbound database. In turn, ovn-northd compiles
       the configuration into a lower-level form and passes it to all of the
       chassis via the southbound database.

       Status information in OVN flows from south to north. OVN currently
       provides only a few forms of status information. First, ovn-northd
       populates the up column in the northbound Logical_Switch_Port table:
       if a logical port’s chassis column in the southbound Port_Binding
       table is nonempty, it sets up to true, otherwise to false. This
       allows the CMS to detect when a VM’s networking has come up.

       Second, OVN provides feedback to the CMS on the realization of its
       configuration, that is, whether the configuration provided by the CMS
       has taken effect. This feature requires the CMS to participate in a
       sequence number protocol, which works the following way (a brief
       command-line sketch follows the list):

       1.  When the CMS updates the configuration in the northbound
           database, as part of the same transaction, it increments the
           value of the nb_cfg column in the NB_Global table. (This is
           only necessary if the CMS wants to know when the
           configuration has been realized.)

       2.  When ovn-northd updates the southbound database based on a
           given snapshot of the northbound database, it copies nb_cfg
           from the northbound NB_Global table into the southbound
           SB_Global table, as part of the same transaction. (Thus, an
           observer monitoring both databases can determine when the
           southbound database is caught up with the northbound.)

       3.  After ovn-northd receives confirmation from the southbound
           database server that its changes have committed, it updates
           sb_cfg in the northbound NB_Global table to the nb_cfg
           version that was pushed down. (Thus, the CMS or another
           observer can determine when the southbound database is
           caught up without a connection to the southbound database.)

       4.  The ovn-controller process on each chassis receives the
           updated southbound database, with the updated nb_cfg. This
           process in turn updates the physical flows installed in the
           chassis’s Open vSwitch instances. When it receives
           confirmation from Open vSwitch that the physical flows have
           been updated, it updates nb_cfg in its own Chassis record in
           the southbound database.

       5.  ovn-northd monitors the nb_cfg column in all of the Chassis
           records in the southbound database. It keeps track of the
           minimum value among all the records and copies it into the
           hv_cfg column in the northbound NB_Global table. (Thus, the
           CMS or another observer can determine when all of the
           hypervisors have caught up to the northbound configuration.)

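       A CMS can use ovn-nbctl, which implements this protocol, to wait
       for a configuration change to be realized. A minimal sketch
       (--wait=hv waits until hv_cfg catches up with nb_cfg):

              # Wait until all hypervisors realize the configuration.
              ovn-nbctl --wait=hv sync

              # Alternatively, compare the sequence numbers directly.
              ovn-nbctl get NB_Global . nb_cfg sb_cfg hv_cfg
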
   Chassis Setup
       Each chassis in an OVN deployment must be configured with an Open
       vSwitch bridge dedicated for OVN’s use, called the integration
       bridge. System startup scripts may create this bridge prior to
       starting ovn-controller if desired. If this bridge does not exist
       when ovn-controller starts, it will be created automatically with the
       default configuration suggested below. The ports on the integration
       bridge include:

       ·   On any chassis, tunnel ports that OVN uses to maintain
           logical network connectivity. ovn-controller adds, updates,
           and removes these tunnel ports.

       ·   On a hypervisor, any VIFs that are to be attached to logical
           networks. The hypervisor itself, or the integration between
           Open vSwitch and the hypervisor (described in
           Documentation/topics/integration.rst), takes care of this.
           (This is not part of OVN or new to OVN; this is pre-existing
           integration work that has already been done on hypervisors
           that support OVS.)

       ·   On a gateway, the physical port used for logical network
           connectivity. System startup scripts add this port to the
           bridge prior to starting ovn-controller. This can be a patch
           port to another bridge, instead of a physical port, in more
           sophisticated setups.

       Other ports should not be attached to the integration bridge. In
       particular, physical ports attached to the underlay network (as
       opposed to gateway ports, which are physical ports attached to
       logical networks) must not be attached to the integration bridge.
       Underlay physical ports should instead be attached to a separate Open
       vSwitch bridge (they need not be attached to any bridge at all, in
       fact).

       The integration bridge should be configured as described below. The
       effect of each of these settings is documented in
       ovs-vswitchd.conf.db(5):

       fail-mode=secure
              Avoids switching packets between isolated logical
              networks before ovn-controller starts up. See Controller
              Failure Settings in ovs-vsctl(8) for more information.

       other-config:disable-in-band=true
              Suppresses in-band control flows for the integration
              bridge. It would be unusual for such flows to show up
              anyway, because OVN uses a local controller (over a Unix
              domain socket) instead of a remote controller. It’s
              possible, however, for some other bridge in the same
              system to have an in-band remote controller, and in that
              case this suppresses the flows that in-band control would
              ordinarily set up. See the discussion of in-band control
              in the Open vSwitch documentation for more information.

       The customary name for the integration bridge is br-int, but another
       name may be used.

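       If the bridge is created manually rather than by ovn-controller,
       a sketch of the corresponding commands (assuming the customary
       bridge name br-int):

              ovs-vsctl --may-exist add-br br-int \
                  -- set Bridge br-int fail-mode=secure \
                         other-config:disable-in-band=true
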
   Logical Networks
       Logical network concepts in OVN include logical switches and logical
       routers, the logical versions of Ethernet switches and IP routers,
       respectively. Like their physical cousins, logical switches and
       routers can be connected into sophisticated topologies. Logical
       switches and routers are ordinarily purely logical entities, that is,
       they are not associated or bound to any physical location, and they
       are implemented in a distributed manner at each hypervisor that
       participates in OVN.

       Logical switch ports (LSPs) are points of connectivity into and out
       of logical switches. There are many kinds of logical switch ports.
       The most ordinary kind represents VIFs, that is, attachment points
       for VMs or containers. A VIF logical port is associated with the
       physical location of its VM, which might change as the VM migrates.
       (A VIF logical port can be associated with a VM that is powered down
       or suspended. Such a logical port has no location and no
       connectivity.)

       Logical router ports (LRPs) are points of connectivity into and out
       of logical routers. An LRP connects a logical router either to a
       logical switch or to another logical router. Logical routers only
       connect to VMs, containers, and other network nodes indirectly,
       through logical switches.

       Logical switches and logical routers have distinct kinds of logical
       ports, so properly speaking one should usually talk about logical
       switch ports or logical router ports. However, an unqualified
       ``logical port’’ usually refers to a logical switch port.

       When a VM sends a packet to a VIF logical switch port, the Open
       vSwitch flow tables simulate the packet’s journey through that
       logical switch and any other logical routers and logical switches
       that it might encounter. This happens without transmitting the packet
       across any physical medium: the flow tables implement all of the
       switching and routing decisions and behavior. If the flow tables
       ultimately decide to output the packet at a logical port attached to
       another hypervisor (or another kind of transport node), then that is
       the time at which the packet is encapsulated for physical network
       transmission and sent.

       Logical Switch Port Types

       OVN supports a number of kinds of logical switch ports. VIF ports
       that connect to VMs or containers, described above, are the most
       ordinary kind of LSP. In the OVN northbound database, VIF ports have
       an empty string for their type. This section describes some of the
       additional port types.

       A router logical switch port connects a logical switch to a logical
       router, designating a particular LRP as its peer.

       A localnet logical switch port bridges a logical switch to a physical
       VLAN. A logical switch may have one or more localnet ports. Such a
       logical switch is used in two scenarios:

       ·   With one or more router logical switch ports, to attach L3
           gateway routers and distributed gateways to a physical
           network.

       ·   With one or more VIF logical switch ports, to attach VMs or
           containers directly to a physical network. In this case, the
           logical switch is not really logical, since it is bridged to
           the physical network rather than insulated from it, and
           therefore cannot have independent but overlapping IP address
           namespaces, etc. A deployment might nevertheless choose such
           a configuration to take advantage of the OVN control plane
           and features such as port security and ACLs.

       When a logical switch contains multiple localnet ports, the following
       is assumed:

       ·   Each chassis has a bridge mapping for only one of the
           localnet physical networks.

       ·   To facilitate interconnectivity between VIF ports of the
           switch that are located on different chassis with different
           physical network connectivity, the fabric implements L3
           routing between these adjacent physical network segments.

       Note: nothing said above implies that a chassis cannot be plugged
       into multiple physical networks, as long as they belong to different
       logical switches.
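       As an illustration, a localnet port might be created with
       commands like the following (a sketch; the switch name ls0, port
       name ln-physnet1, network name physnet1, and bridge br-eth1 are
       hypothetical):

              ovn-nbctl lsp-add ls0 ln-physnet1 \
                  -- lsp-set-type ln-physnet1 localnet \
                  -- lsp-set-addresses ln-physnet1 unknown \
                  -- lsp-set-options ln-physnet1 network_name=physnet1

              # On each chassis attached to this physical network, map
              # the network name to a local bridge.
              ovs-vsctl set Open_vSwitch . \
                  external-ids:ovn-bridge-mappings=physnet1:br-eth1
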
       A localport logical switch port is a special kind of VIF logical
       switch port. These ports are present in every chassis, not bound to
       any particular one. Traffic to such a port will never be forwarded
       through a tunnel, and traffic from such a port is expected to be
       destined only to the same chassis, typically in response to a request
       it received. OpenStack Neutron uses a localport port to serve
       metadata to VMs. A metadata proxy process is attached to this port on
       every host and all VMs within the same network will reach it at the
       same IP/MAC address without any traffic being sent over a tunnel. For
       further details, see the OpenStack documentation for networking-ovn.

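       For instance, a metadata port similar to the one Neutron uses
       might be created as follows (a sketch; the port name, MAC, and
       IP address are hypothetical):

              ovn-nbctl lsp-add ls0 lp-metadata \
                  -- lsp-set-type lp-metadata localport \
                  -- lsp-set-addresses lp-metadata \
                         "02:ac:10:ff:01:01 169.254.169.254"
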
       LSP types vtep and l2gateway are used for gateways. See Gateways,
       below, for more information.

       Implementation Details

       These concepts are details of how OVN is implemented internally. They
       might still be of interest to users and administrators.

       Logical datapaths are an implementation detail of logical networks in
       the OVN southbound database. ovn-northd translates each logical
       switch or router in the northbound database into a logical datapath
       in the southbound database Datapath_Binding table.

       For the most part, ovn-northd also translates each logical switch
       port in the OVN northbound database into a record in the southbound
       database Port_Binding table. The latter table corresponds roughly to
       the northbound Logical_Switch_Port table. It has multiple types of
       logical port bindings, of which many types correspond directly to
       northbound LSP types. LSP types handled this way include VIF (empty
       string), localnet, localport, vtep, and l2gateway.

       The Port_Binding table has some types of port binding that do not
       correspond directly to logical switch port types. The most common is
       patch port bindings, known as logical patch ports. These port
       bindings always occur in pairs, and a packet that enters on either
       side comes out on the other. ovn-northd connects logical switches and
       logical routers together using logical patch ports.

       Port bindings with types vtep, l2gateway, l3gateway, and
       chassisredirect are used for gateways. These are explained in
       Gateways, below.

   Gateways
       Gateways provide limited connectivity between logical networks and
       physical ones. They can also provide connectivity between different
       OVN deployments. This section focuses on the former; the latter is
       described in detail in the section OVN Deployments Interconnection.

       OVN supports multiple kinds of gateways.

       VTEP Gateways

       A ``VTEP gateway’’ connects an OVN logical network to a physical (or
       virtual) switch that implements the OVSDB VTEP schema that
       accompanies Open vSwitch. (The ``VTEP gateway’’ term is a misnomer,
       since a VTEP is just a VXLAN Tunnel Endpoint, but it is a well
       established name.) See Life Cycle of a VTEP gateway, below, for more
       information.

       The main intended use case for VTEP gateways is to attach physical
       servers to an OVN logical network using a physical top-of-rack switch
       that supports the OVSDB VTEP schema.

       L2 Gateways

       An L2 gateway simply attaches a designated physical L2 segment
       available on some chassis to a logical network. The physical network
       effectively becomes part of the logical network.

       To set up an L2 gateway, the CMS adds an l2gateway LSP to an
       appropriate logical switch, setting LSP options to name the chassis
       on which it should be bound. ovn-northd copies this configuration
       into a southbound Port_Binding record. On the designated chassis,
       ovn-controller forwards packets appropriately to and from the
       physical segment.

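       A sketch of the CMS-side commands (the names ls0, l2gw0,
       physnet1, and chassis-1 are hypothetical):

              ovn-nbctl lsp-add ls0 l2gw0 \
                  -- lsp-set-type l2gw0 l2gateway \
                  -- lsp-set-options l2gw0 network_name=physnet1 \
                         l2gateway-chassis=chassis-1
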
       L2 gateway ports have features in common with localnet ports.
       However, with a localnet port, the physical network becomes the
       transport between hypervisors. With an L2 gateway, packets are still
       transported between hypervisors over tunnels and the l2gateway port
       is only used for the packets that are on the physical network. The
       application for L2 gateways is similar to that for VTEP gateways,
       e.g. to add non-virtualized machines to a logical network, but L2
       gateways do not require special support from top-of-rack hardware
       switches.

       L3 Gateway Routers

       As described above under Logical Networks, ordinary OVN logical
       routers are distributed: they are not implemented in a single place
       but rather in every hypervisor chassis. This is a problem for
       stateful services such as SNAT and DNAT, which need to be
       implemented in a centralized manner.

       To allow for this kind of functionality, OVN supports L3 gateway
       routers, which are OVN logical routers that are implemented in a
       designated chassis. Gateway routers are typically used between
       distributed logical routers and physical networks. The distributed
       logical router and the logical switches behind it, to which VMs and
       containers attach, effectively reside on each hypervisor. The
       distributed router and the gateway router are connected by another
       logical switch, sometimes referred to as a ``join’’ logical switch.
       (OVN logical routers may be connected to one another directly,
       without an intervening switch, but the OVN implementation only
       supports gateway logical routers that are connected to logical
       switches. Using a join logical switch also reduces the number of IP
       addresses needed on the distributed router.) On the other side, the
       gateway router connects to another logical switch that has a
       localnet port connecting to the physical network.

       The following diagram shows a typical situation. One or more logical
       switches LS1, ..., LSn connect to distributed logical router LR1,
       which in turn connects through LSjoin to gateway logical router GLR,
       which also connects to logical switch LSlocal, which includes a
       localnet port to attach to the physical network.

                             LSlocal
                                |
                               GLR
                                |
                             LSjoin
                                |
                               LR1
                                |
                           +----+----+
                           |    |    |
                          LS1  ...  LSn

       To configure an L3 gateway router, the CMS sets options:chassis in
       the router’s northbound Logical_Router row to the chassis’s name. In
       response, ovn-northd uses a special l3gateway port binding (instead
       of a patch binding) in the southbound database to connect the
       logical router to its neighbors. In turn, ovn-controller tunnels
       packets to this port binding to the designated L3 gateway chassis,
       instead of processing them locally.

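       A sketch of this configuration step (the router name glr0 and
       chassis name chassis-1 are hypothetical):

              ovn-nbctl lr-add glr0 \
                  -- set Logical_Router glr0 options:chassis=chassis-1
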
       DNAT and SNAT rules may be associated with a gateway router, which
       provides a central location that can handle one-to-many SNAT (aka IP
       masquerading). Distributed gateway ports, described below, also
       support NAT.

       Distributed Gateway Ports

       A distributed gateway port is a logical router port that is
       specially configured to designate one distinguished chassis, called
       the gateway chassis, for centralized processing. A distributed
       gateway port should connect to a logical switch that has an LSP that
       connects externally, that is, either a localnet LSP or a connection
       to another OVN deployment (see OVN Deployments Interconnection).
       Packets that traverse the distributed gateway port are processed
       without involving the gateway chassis when they can be, but when
       needed they do take an extra hop through it.

       The following diagram illustrates the use of a distributed gateway
       port. A number of logical switches LS1, ..., LSn connect to
       distributed logical router LR1, which in turn connects through the
       distributed gateway port to logical switch LSlocal that includes a
       localnet port to attach to the physical network.

                             LSlocal
                                |
                               LR1
                                |
                           +----+----+
                           |    |    |
                          LS1  ...  LSn

       ovn-northd creates two southbound Port_Binding records to represent
       a distributed gateway port, instead of the usual one. One of these
       is a patch port binding named for the LRP, which is used for as much
       traffic as possible. The other one is a port binding with type
       chassisredirect, named cr-port, where port is the name of the LRP.
       The chassisredirect port binding has one specialized job: when a
       packet is output to it, the flow table causes it to be tunneled to
       the gateway chassis, at which point it is automatically output to
       the patch port binding. Thus, the flow table can output to this port
       binding in cases where a particular task has to happen on the
       gateway chassis. The chassisredirect port binding is not otherwise
       used (for example, it never receives packets).

       The CMS may configure distributed gateway ports three different
       ways. See Distributed Gateway Ports in the documentation for
       Logical_Router_Port in ovn-nb(5) for details.

       Distributed gateway ports support high availability. When more than
       one chassis is specified, OVN only uses one at a time as the gateway
       chassis. OVN uses BFD to monitor gateway connectivity, preferring
       the highest-priority gateway that is online.

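       As an illustration, one way the CMS can configure this (the port
       and chassis names are hypothetical; the numbers are priorities,
       higher preferred):

              ovn-nbctl lrp-set-gateway-chassis lr0-public chassis-1 20
              ovn-nbctl lrp-set-gateway-chassis lr0-public chassis-2 10
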
       Physical VLAN MTU Issues

       Consider the preceding diagram again:

                             LSlocal
                                |
                               LR1
                                |
                           +----+----+
                           |    |    |
                          LS1  ...  LSn

       Suppose that each logical switch LS1, ..., LSn is bridged to a
       physical VLAN-tagged network attached to a localnet port on LSlocal,
       over a distributed gateway port on LR1. If a packet originating on
       LSi is destined to the external network, OVN sends it to the gateway
       chassis over a tunnel. There, the packet traverses LR1’s logical
       router pipeline, possibly undergoes NAT, and eventually ends up at
       LSlocal’s localnet port. If all of the physical links in the network
       have the same MTU, then the packet’s transit across a tunnel causes
       an MTU problem: tunnel overhead prevents a packet that uses the full
       physical MTU from crossing the tunnel to the gateway chassis
       (without fragmentation).

       OVN offers two solutions to this problem, the
       reside-on-redirect-chassis and redirect-type options. Both solutions
       require each logical switch LS1, ..., LSn to include a localnet
       logical switch port LN1, ..., LNn respectively, that is present on
       each chassis. Both cause packets to be sent over the localnet ports
       instead of tunnels. They differ in which packets (some or all) are
       sent this way. The most prominent tradeoff between these options is
       that reside-on-redirect-chassis is easier to configure and that
       redirect-type performs better for east-west traffic.

       The first solution is the reside-on-redirect-chassis option for
       logical router ports. Setting this option on an LRP from (e.g.) LS1
       to LR1 disables forwarding from LS1 to LR1 except on the gateway
       chassis. On chassis other than the gateway chassis, this single
       change means that packets that would otherwise have been forwarded
       to LR1 are instead forwarded to LN1. The instance of LN1 on the
       gateway chassis then receives the packet and forwards it to LR1. The
       packet traverses the LR1 logical router pipeline, possibly undergoes
       NAT, and eventually ends up at LSlocal’s localnet port. The packet
       never traverses a tunnel, avoiding the MTU issue.

       This option has the further consequence of centralizing
       ``distributed’’ logical router LR1, since no packets are forwarded
       from LS1 to LR1 on any chassis other than the gateway chassis.
       Therefore, east-west traffic passes through the gateway chassis, not
       just north-south. (The naive ``fix’’ of allowing east-west traffic
       to flow directly between chassis over LN1 does not work because
       routing sets the Ethernet source address to LR1’s Ethernet address.
       Seeing this single Ethernet source address originate from all of the
       chassis will confuse the physical switch.)

       Do not set the reside-on-redirect-chassis option on a distributed
       gateway port. In the diagram above, it would be set on the LRPs
       connecting LS1, ..., LSn to LR1.

       The second solution is the redirect-type option for distributed
       gateway ports. Setting this option to bridged causes packets that
       are redirected to the gateway chassis to go over the localnet ports
       instead of being tunneled. This option does not change how OVN
       treats packets not redirected to the gateway chassis.

       The redirect-type option requires the administrator or the CMS to
       configure each participating chassis with a unique Ethernet address
       for the logical router by setting ovn-chassis-mac-mappings in the
       Open vSwitch database, for use by ovn-controller. This makes it
       more difficult to configure than reside-on-redirect-chassis.

       Set the redirect-type option on a distributed gateway port.

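       A sketch of how these options might be set (all port, network,
       and MAC values here are hypothetical):

              # Option 1: on the LRPs connecting LS1, ..., LSn to LR1.
              ovn-nbctl set Logical_Router_Port lrp-ls1 \
                  options:reside-on-redirect-chassis=true

              # Option 2: on the distributed gateway port, plus a
              # unique router MAC per chassis for ovn-controller.
              ovn-nbctl set Logical_Router_Port lr0-public \
                  options:redirect-type=bridged
              ovs-vsctl set Open_vSwitch . \
                  external-ids:ovn-chassis-mac-mappings="physnet1:aa:bb:cc:dd:ee:01"
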
   Life Cycle of a VIF
       Tables and their schemas presented in isolation are difficult to
       understand. Here’s an example.

       A VIF on a hypervisor is a virtual network interface attached either
       to a VM or a container running directly on that hypervisor. (This is
       different from the interface of a container running inside a VM.)

       The steps in this example refer often to details of the OVN and OVN
       Northbound database schemas. Please see ovn-sb(5) and ovn-nb(5),
       respectively, for the full story on these databases. A condensed
       command-line sketch follows the list.

       1.  A VIF’s life cycle begins when a CMS administrator creates a
           new VIF using the CMS user interface or API and adds it to a
           switch (one implemented by OVN as a logical switch). The CMS
           updates its own configuration. This includes associating a
           unique, persistent identifier vif-id and Ethernet address
           mac with the VIF.

       2.  The CMS plugin updates the OVN Northbound database to
           include the new VIF, by adding a row to the
           Logical_Switch_Port table. In the new row, name is vif-id,
           mac is mac, switch points to the OVN logical switch’s
           Logical_Switch record, and other columns are initialized
           appropriately.

       3.  ovn-northd receives the OVN Northbound database update. In
           turn, it makes the corresponding updates to the OVN
           Southbound database, by adding rows to the OVN Southbound
           database Logical_Flow table to reflect the new port, e.g.
           add a flow to recognize that packets destined to the new
           port’s MAC address should be delivered to it, and update the
           flow that delivers broadcast and multicast packets to
           include the new port. It also creates a record in the
           Binding table and populates all its columns except the
           column that identifies the chassis.

       4.  On every hypervisor, ovn-controller receives the
           Logical_Flow table updates that ovn-northd made in the
           previous step. As long as the VM that owns the VIF is
           powered off, ovn-controller cannot do much; it cannot, for
           example, arrange to send packets to or receive packets from
           the VIF, because the VIF does not actually exist anywhere.

       5.  Eventually, a user powers on the VM that owns the VIF. On
           the hypervisor where the VM is powered on, the integration
           between the hypervisor and Open vSwitch (described in
           Documentation/topics/integration.rst) adds the VIF to the
           OVN integration bridge and stores vif-id in
           external_ids:iface-id to indicate that the interface is an
           instantiation of the new VIF. (None of this code is new in
           OVN; this is pre-existing integration work that has already
           been done on hypervisors that support OVS.)

       6.  On the hypervisor where the VM is powered on, ovn-controller
           notices external_ids:iface-id in the new Interface. In
           response, in the OVN Southbound DB, it updates the Binding
           table’s chassis column for the row that links the logical
           port from external_ids:iface-id to the hypervisor.
           Afterward, ovn-controller updates the local hypervisor’s
           OpenFlow tables so that packets to and from the VIF are
           properly handled.

       7.  Some CMS systems, including OpenStack, fully start a VM only
           when its networking is ready. To support this, ovn-northd
           notices the chassis column updated for the row in the
           Binding table and pushes this upward by updating the up
           column in the OVN Northbound database’s Logical_Switch_Port
           table to indicate that the VIF is now up. The CMS, if it
           uses this feature, can then react by allowing the VM’s
           execution to proceed.

       8.  On every hypervisor but the one where the VIF resides,
           ovn-controller notices the completely populated row in the
           Binding table. This provides ovn-controller the physical
           location of the logical port, so each instance updates the
           OpenFlow tables of its switch (based on logical datapath
           flows in the OVN DB Logical_Flow table) so that packets to
           and from the VIF can be properly handled via tunnels.

       9.  Eventually, a user powers off the VM that owns the VIF. On
           the hypervisor where the VM was powered off, the VIF is
           deleted from the OVN integration bridge.

       10. On the hypervisor where the VM was powered off,
           ovn-controller notices that the VIF was deleted. In
           response, it removes the Chassis column content in the
           Binding table for the logical port.

       11. On every hypervisor, ovn-controller notices the empty
           Chassis column in the Binding table’s row for the logical
           port. This means that ovn-controller no longer knows the
           physical location of the logical port, so each instance
           updates its OpenFlow table to reflect that.

       12. Eventually, when the VIF (or its entire VM) is no longer
           needed by anyone, an administrator deletes the VIF using the
           CMS user interface or API. The CMS updates its own
           configuration.

       13. The CMS plugin removes the VIF from the OVN Northbound
           database, by deleting its row in the Logical_Switch_Port
           table.

       14. ovn-northd receives the OVN Northbound update and in turn
           updates the OVN Southbound database accordingly, by removing
           or updating the rows from the OVN Southbound database
           Logical_Flow table and Binding table that were related to
           the now-destroyed VIF.

       15. On every hypervisor, ovn-controller receives the
           Logical_Flow table updates that ovn-northd made in the
           previous step. ovn-controller updates OpenFlow tables to
           reflect the update, although there may not be much to do,
           since the VIF had already become unreachable when it was
           removed from the Binding table in a previous step.

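       The CMS- and hypervisor-facing portions of this life cycle
       reduce to a few commands. A condensed sketch (vif-id, the MAC
       address, ls0, and tap0 stand in for the identifiers above and
       are hypothetical):

              # Steps 1-2: the CMS plugin adds the VIF to the
              # northbound database.
              ovn-nbctl lsp-add ls0 vif-id \
                  -- lsp-set-addresses vif-id "00:00:00:00:00:01"

              # Step 5: the hypervisor integration attaches the
              # instantiated interface and marks it with the VIF
              # identifier.
              ovs-vsctl add-port br-int tap0 \
                  -- set Interface tap0 external-ids:iface-id=vif-id
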
   Life Cycle of a Container Interface Inside a VM
       OVN provides virtual network abstractions by converting information
       written in the OVN_NB database to OpenFlow flows in each hypervisor.
       Secure virtual networking for multi-tenants can only be provided if
       OVN controller is the only entity that can modify flows in Open
       vSwitch. When the Open vSwitch integration bridge resides in the
       hypervisor, it is a fair assumption to make that tenant workloads
       running inside VMs cannot make any changes to Open vSwitch flows.

       If the infrastructure provider trusts the applications inside the
       containers not to break out and modify the Open vSwitch flows, then
       containers can be run in hypervisors. This is also the case when
       containers are run inside the VMs and the Open vSwitch integration
       bridge with flows added by OVN controller resides in the same VM.
       For both of the above cases, the workflow is the same as explained
       with an example in the previous section ("Life Cycle of a VIF").

       This section talks about the life cycle of a container interface
       (CIF) when containers are created in the VMs and the Open vSwitch
       integration bridge resides inside the hypervisor. In this case, even
       if a container application breaks out, other tenants are not
       affected because the containers running inside the VMs cannot modify
       the flows in the Open vSwitch integration bridge.

       When multiple containers are created inside a VM, there are multiple
       CIFs associated with them. The network traffic associated with these
       CIFs needs to reach the Open vSwitch integration bridge running in
       the hypervisor for OVN to support virtual network abstractions. OVN
       should also be able to distinguish network traffic coming from
       different CIFs. There are two ways to distinguish network traffic of
       CIFs.

       One way is to provide one VIF for every CIF (1:1 model). This means
       that there could be a lot of network devices in the hypervisor. This
       would slow down OVS because of all the additional CPU cycles needed
       for the management of all the VIFs. It would also mean that the
       entity creating the containers in a VM should also be able to create
       the corresponding VIFs in the hypervisor.

       The second way is to provide a single VIF for all the CIFs (1:many
       model). OVN could then distinguish network traffic coming from
       different CIFs via a tag written in every packet. OVN uses this
       mechanism, with VLAN as the tagging mechanism. (A one-line example
       follows the list below.)

       1.  A CIF’s life cycle begins when a container is spawned inside
           a VM by either the same CMS that created the VM, a tenant
           that owns that VM, or even a container orchestration system
           different from the CMS that initially created the VM.
           Whoever the entity is, it will need to know the vif-id that
           is associated with the network interface of the VM through
           which the container interface’s network traffic is expected
           to go. The entity that creates the container interface will
           also need to choose an unused VLAN inside that VM.

       2.  The container spawning entity (either directly or through
           the CMS that manages the underlying infrastructure) updates
           the OVN Northbound database to include the new CIF, by
           adding a row to the Logical_Switch_Port table. In the new
           row, name is any unique identifier, parent_name is the
           vif-id of the VM through which the CIF’s network traffic is
           expected to go, and the tag is the VLAN tag that identifies
           the network traffic of that CIF.

       3.  ovn-northd receives the OVN Northbound database update. In
           turn, it makes the corresponding updates to the OVN
           Southbound database, by adding rows to the OVN Southbound
           database’s Logical_Flow table to reflect the new port and
           also by creating a new row in the Binding table and
           populating all its columns except the column that identifies
           the chassis.

       4.  On every hypervisor, ovn-controller subscribes to the
           changes in the Binding table. When a new row is created by
           ovn-northd that includes a value in the parent_port column
           of the Binding table, the ovn-controller in the hypervisor
           whose OVN integration bridge has that same value in vif-id
           in external_ids:iface-id updates the local hypervisor’s
           OpenFlow tables so that packets to and from the VIF with the
           particular VLAN tag are properly handled. Afterward it
           updates the chassis column of the Binding to reflect the
           physical location.

       5.  One can only start the application inside the container
           after the underlying network is ready. To support this,
           ovn-northd notices the updated chassis column in the Binding
           table and updates the up column in the OVN Northbound
           database’s Logical_Switch_Port table to indicate that the
           CIF is now up. The entity responsible for starting the
           container application queries this value and starts the
           application.

       6.  Eventually the entity that created and started the container
           stops it. The entity, through the CMS (or directly), deletes
           its row in the Logical_Switch_Port table.

       7.  ovn-northd receives the OVN Northbound update and in turn
           updates the OVN Southbound database accordingly, by removing
           or updating the rows from the OVN Southbound database
           Logical_Flow table that were related to the now-destroyed
           CIF. It also deletes the row in the Binding table for that
           CIF.

       8.  On every hypervisor, ovn-controller receives the
           Logical_Flow table updates that ovn-northd made in the
           previous step. ovn-controller updates OpenFlow tables to
           reflect the update.

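       The northbound side of this life cycle is compact. A one-line
       sketch of step 2 (the names are hypothetical; 42 is the chosen
       VLAN tag):

              # Create a CIF port with parent_name vif-id and tag 42.
              ovn-nbctl lsp-add ls0 cif0 vif-id 42
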
   Architectural Physical Life Cycle of a Packet
       This section describes how a packet travels from one virtual machine
       or container to another through OVN. This description focuses on the
       physical treatment of a packet; for a description of the logical
       life cycle of a packet, please refer to the Logical_Flow table in
       ovn-sb(5).

       This section mentions several data and metadata fields, for clarity
       summarized here:

       tunnel key
              When OVN encapsulates a packet in Geneve or another
              tunnel, it attaches extra data to it to allow the
              receiving OVN instance to process it correctly. This
              takes different forms depending on the particular
              encapsulation, but in each case we refer to it here as
              the ``tunnel key.’’ See Tunnel Encapsulations, below, for
              details.

       logical datapath field
              A field that denotes the logical datapath through which a
              packet is being processed. OVN uses the field that
              OpenFlow 1.1+ simply (and confusingly) calls ``metadata’’
              to store the logical datapath. (This field is passed
              across tunnels as part of the tunnel key.)

       logical input port field
              A field that denotes the logical port from which the
              packet entered the logical datapath. OVN stores this in
              Open vSwitch extension register number 14.

              Geneve and STT tunnels pass this field as part of the
              tunnel key. Ramp switch VXLAN tunnels do not explicitly
              carry a logical input port, but they are used to
              communicate with gateways that, from OVN’s perspective,
              consist of only a single logical port, so OVN can set the
              logical input port field to this one on ingress to the
              OVN logical pipeline. Regular VXLAN tunnels do not carry
              the input port field at all. This puts additional
              limitations on cluster capabilities, described in the
              Tunnel Encapsulations section.

       logical output port field
              A field that denotes the logical port from which the
              packet will leave the logical datapath. This is
              initialized to 0 at the beginning of the logical ingress
              pipeline. OVN stores this in Open vSwitch extension
              register number 15.

              Geneve, STT, and regular VXLAN tunnels pass this field as
              part of the tunnel key. Ramp switch VXLAN tunnels do not
              carry the logical output port field in the tunnel key, so
              when an OVN hypervisor receives a packet from a ramp
              switch VXLAN tunnel, it resubmits the packet to table 8
              to determine the output port(s); when such a packet
              reaches table 32, it is resubmitted to table 33 for local
              delivery based on the MLF_RCV_FROM_RAMP flag, which is
              set when the packet arrives from a ramp tunnel.

       conntrack zone field for logical ports
              A field that denotes the connection tracking zone for
              logical ports. The value only has local significance and
              is not meaningful between chassis. This is initialized to
              0 at the beginning of the logical ingress pipeline. OVN
              stores this in Open vSwitch extension register number 13.

       conntrack zone fields for routers
              Fields that denote the connection tracking zones for
              routers. These values only have local significance and
              are not meaningful between chassis. OVN stores the zone
              information for north-to-south traffic (for DNATing or
              ECMP symmetric replies) in Open vSwitch extension
              register number 11 and the zone information for
              south-to-north traffic (for SNATing) in Open vSwitch
              extension register number 12.

       logical flow flags
              The logical flags are intended to keep context between
              tables in order to decide which rules in subsequent
              tables are matched. These values only have local
              significance and are not meaningful between chassis. OVN
              stores the logical flags in Open vSwitch extension
              register number 10.

       VLAN ID
              The VLAN ID is used as an interface between OVN and
              containers nested inside a VM (see Life Cycle of a
              container interface inside a VM, above, for more
              information).

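       For quick reference, the preceding field assignments can be
       summarized as follows (the reg names denote the Open vSwitch
       extension registers mentioned above):

              OVS field   OVN use
              ---------   -------------------------------------
              metadata    logical datapath
              reg10       logical flow flags
              reg11       conntrack zone (routers, north-south)
              reg12       conntrack zone (routers, south-north)
              reg13       conntrack zone (logical ports)
              reg14       logical input port
              reg15       logical output port
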
       Initially, a VM or container on the ingress hypervisor sends a packet
       on a port attached to the OVN integration bridge. Then:

       1.  OpenFlow table 0 performs physical-to-logical translation.
           It matches the packet’s ingress port. Its actions annotate
           the packet with logical metadata, by setting the logical
           datapath field to identify the logical datapath that the
           packet is traversing and the logical input port field to
           identify the ingress port. Then it resubmits to table 8 to
           enter the logical ingress pipeline.

           Packets that originate from a container nested within a VM
           are treated in a slightly different way. The originating
           container can be distinguished based on the VIF-specific
           VLAN ID, so the physical-to-logical translation flows
           additionally match on VLAN ID and the actions strip the VLAN
           header. Following this step, OVN treats packets from
           containers just like any other packets.

           Table 0 also processes packets that arrive from other
           chassis. It distinguishes them from other packets by ingress
           port, which is a tunnel. As with packets just entering the
           OVN pipeline, the actions annotate these packets with
           logical datapath metadata. For tunnel types that support it,
           they are also annotated with logical ingress port metadata.
           In addition, the actions set the logical output port field,
           which is available because in OVN tunneling occurs after the
           logical output port is known. These pieces of information
           are obtained from the tunnel encapsulation metadata (see
           Tunnel Encapsulations for encoding details). Then the
           actions resubmit to table 33 to enter the logical egress
           pipeline.

       2.  OpenFlow tables 8 through 31 execute the logical ingress
           pipeline from the Logical_Flow table in the OVN Southbound
           database. These tables are expressed entirely in terms of
           logical concepts like logical ports and logical datapaths. A
           big part of ovn-controller’s job is to translate them into
           equivalent OpenFlow (in particular it translates the table
           numbers: Logical_Flow tables 0 through 23 become OpenFlow
           tables 8 through 31).

           Each logical flow maps to one or more OpenFlow flows. An
           actual packet ordinarily matches only one of these, although
           in some cases it can match more than one of these flows
           (which is not a problem because all of them have the same
           actions). ovn-controller uses the first 32 bits of the
           logical flow’s UUID as the cookie for its OpenFlow flow or
           flows. (This is not necessarily unique, since the first 32
           bits of a logical flow’s UUID are not necessarily unique.)

           Some logical flows can map to the Open vSwitch ``conjunctive
           match’’ extension (see ovs-fields(7)). Flows with a
           conjunction action use an OpenFlow cookie of 0, because they
           can correspond to multiple logical flows. The OpenFlow flow
           for a conjunctive match includes a match on conj_id.

           Some logical flows may not be represented in the OpenFlow
           tables on a given hypervisor, if they could not be used on
           that hypervisor. For example, if no VIF in a logical switch
           resides on a given hypervisor, and the logical switch is not
           otherwise reachable on that hypervisor (e.g. over a series
           of hops through logical switches and routers starting from a
           VIF on the hypervisor), then the logical flow may not be
           represented there.

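           The cookie correspondence can be used for debugging. A
           minimal sketch, assuming a logical flow whose UUID begins
           with 12345678 (a hypothetical value):

                  # List the logical flows in the southbound database.
                  ovn-sbctl lflow-list

                  # Find the OpenFlow flows generated from that logical
                  # flow: the first 32 bits of its UUID appear as the
                  # cookie.
                  ovs-ofctl dump-flows br-int | grep cookie=0x12345678
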
           Most OVN actions have fairly obvious implementations in
           OpenFlow (with OVS extensions), e.g. next; is implemented as
           resubmit, field = constant; as set_field. A few are worth
           describing in more detail:

           output:
                  Implemented by resubmitting the packet to table 32.
                  If the pipeline executes more than one output action,
                  then each one is separately resubmitted to table 32.
                  This can be used to send multiple copies of the
                  packet to multiple ports. (If the packet was not
                  modified between the output actions, and some of the
                  copies are destined to the same hypervisor, then
                  using a logical multicast output port would save
                  bandwidth between hypervisors.)

           get_arp(P, A);
           get_nd(P, A);
                  Implemented by storing arguments into OpenFlow
                  fields, then resubmitting to table 66, which
                  ovn-controller populates with flows generated from
                  the MAC_Binding table in the OVN Southbound database.
                  If there is a match in table 66, then its actions
                  store the bound MAC in the Ethernet destination
                  address field.

                  (The OpenFlow actions save and restore the OpenFlow
                  fields used for the arguments, so that the OVN
                  actions do not have to be aware of this temporary
                  use.)

           put_arp(P, A, E);
           put_nd(P, A, E);
                  Implemented by storing the arguments into OpenFlow
                  fields, then outputting a packet to ovn-controller,
                  which updates the MAC_Binding table.

                  (The OpenFlow actions save and restore the OpenFlow
                  fields used for the arguments, so that the OVN
                  actions do not have to be aware of this temporary
                  use.)

           R = lookup_arp(P, A, M);
           R = lookup_nd(P, A, M);
                  Implemented by storing arguments into OpenFlow
                  fields, then resubmitting to table 67, which
                  ovn-controller populates with flows generated from
                  the MAC_Binding table in the OVN Southbound database.
                  If there is a match in table 67, then its actions set
                  the logical flow flag MLF_LOOKUP_MAC.

                  (The OpenFlow actions save and restore the OpenFlow
                  fields used for the arguments, so that the OVN
                  actions do not have to be aware of this temporary
                  use.)

       3.  OpenFlow tables 32 through 47 implement the output action in
           the logical ingress pipeline. Specifically, table 32 handles
           packets to remote hypervisors, table 33 handles packets to
           the local hypervisor, and table 34 checks whether packets
           whose logical ingress and egress port are the same should be
           discarded.

           Logical patch ports are a special case. Logical patch ports
           do not have a physical location and effectively reside on
           every hypervisor. Thus, flow table 33, for output to ports
           on the local hypervisor, naturally implements output to
           unicast logical patch ports too. However, applying the same
           logic to a logical patch port that is part of a logical
           multicast group yields packet duplication, because each
           hypervisor that contains a logical port in the multicast
           group will also output the packet to the logical patch port.
           Thus, multicast groups implement output to logical patch
           ports in table 32.

           Each flow in table 32 matches on a logical output port for
           unicast or multicast logical ports that include a logical
           port on a remote hypervisor. Each flow’s actions implement
           sending a packet to the port it matches. For unicast logical
           output ports on remote hypervisors, the actions set the
           tunnel key to the correct value, then send the packet on the
           tunnel port to the correct hypervisor. (When the remote
           hypervisor receives the packet, table 0 there will recognize
           it as a tunneled packet and pass it along to table 33.) For
           multicast logical output ports, the actions send one copy of
           the packet to each remote hypervisor, in the same way as for
           unicast destinations. If a multicast group includes a
           logical port or ports on the local hypervisor, then its
           actions also resubmit to table 33. Table 32 also includes:

           ·   A higher-priority rule to match packets received from
               ramp switch tunnels, based on flag MLF_RCV_FROM_RAMP,
               and resubmit these packets to table 33 for local
               delivery. Packets received from ramp switch tunnels
               reach here because of the lack of a logical output port
               field in the tunnel key, and thus these packets needed
               to be submitted to table 8 to determine the output port.

           ·   A higher-priority rule to match packets received from
               ports of type localport, based on the logical input
               port, and resubmit these packets to table 33 for local
               delivery. Ports of type localport exist on every
               hypervisor and by definition their traffic should never
               go out through a tunnel.

           ·   A higher-priority rule to match packets that have the
               MLF_LOCAL_ONLY logical flow flag set, and whose
               destination is a multicast address. This flag indicates
               that the packet should not be delivered to remote
               hypervisors, even if the multicast destination includes
               ports on remote hypervisors. This flag is used when
               ovn-controller is the originator of the multicast
               packet. Since each ovn-controller instance is
               originating these packets, the packets only need to be
               delivered to local ports.

           ·   A fallback flow that resubmits to table 33 if there is
               no other match.

           Flows in table 33 resemble those in table 32 but for logical
           ports that reside locally rather than remotely. For unicast
           logical output ports on the local hypervisor, the actions
           just resubmit to table 34. For multicast output ports that
           include one or more logical ports on the local hypervisor,
           for each such logical port P, the actions change the logical
           output port to P, then resubmit to table 34.

           A special case is when a localnet port exists on the
           datapath: a remote port is then reached by switching through
           the localnet port. In this case, instead of adding a flow to
           table 32 to reach the remote port, a flow is added to table
           33 that switches the logical output port to the localnet
           port and resubmits to table 33, as if the packet were
           unicast to a logical port on the local hypervisor.

           Table 34 matches and drops packets for which the logical
           input and output ports are the same and the
           MLF_ALLOW_LOOPBACK flag is not set. It resubmits other
           packets to table 40.

       4.  OpenFlow tables 40 through 63 execute the logical egress
           pipeline from the Logical_Flow table in the OVN Southbound
           database. The egress pipeline can perform a final stage of
           validation before packet delivery. Eventually, it may
           execute an output action, which ovn-controller implements by
           resubmitting to table 64. A packet for which the pipeline
           never executes output is effectively dropped (although it
           may have been transmitted through a tunnel across a physical
           network).

           The egress pipeline cannot change the logical output port or
           cause further tunneling.

       5.  Table 64 bypasses OpenFlow loopback when MLF_ALLOW_LOOPBACK
           is set. Logical loopback was handled in table 34, but
           OpenFlow by default also prevents loopback to the OpenFlow
           ingress port. Thus, when MLF_ALLOW_LOOPBACK is set, OpenFlow
           table 64 saves the OpenFlow ingress port, sets it to zero,
           resubmits to table 65 for logical-to-physical
           transformation, and then restores the OpenFlow ingress port,
           effectively disabling OpenFlow loopback prevention. When
           MLF_ALLOW_LOOPBACK is unset, the table 64 flow simply
           resubmits to table 65.

1163 6. OpenFlow table 65 performs logical-to-physical translation,
1164 the opposite of table 0. It matches the packet’s logical
1165 egress port. Its actions output the packet to the port
1166 attached to the OVN integration bridge that represents that
1167 logical port. If the logical egress port is a container
1168              nested within a VM, then before sending the packet the actions
1169 push on a VLAN header with an appropriate VLAN ID.
1170
1171 Logical Routers and Logical Patch Ports
1172 Typically logical routers and logical patch ports do not have a physi‐
1173 cal location and effectively reside on every hypervisor. This is the
1174 case for logical patch ports between logical routers and logical
1175 switches behind those logical routers, to which VMs (and VIFs) attach.
1176
1177 Consider a packet sent from one virtual machine or container to another
1178 VM or container that resides on a different subnet. The packet will
1179 traverse tables 0 to 65 as described in the previous section Architec‐
1180 tural Physical Life Cycle of a Packet, using the logical datapath rep‐
1181 resenting the logical switch that the sender is attached to. At table
1182 32, the packet will use the fallback flow that resubmits locally to ta‐
1183 ble 33 on the same hypervisor. In this case, all of the processing from
1184 table 0 to table 65 occurs on the hypervisor where the sender resides.
1185
1186       When the packet reaches table 65, the logical egress port is a logi‐
1187       cal patch port. ovn-controller implements output to the logical patch
1188       port by cloning the packet and resubmitting it directly to the first
1189       OpenFlow flow table in the ingress pipeline, setting the logical in‐
1190       gress port to the peer logical patch port, and using the peer logical
1191       patch port’s logical datapath (that represents the logical router).
1192
1193 The packet re-enters the ingress pipeline in order to traverse tables 8
1194 to 65 again, this time using the logical datapath representing the log‐
1195 ical router. The processing continues as described in the previous sec‐
1196 tion Architectural Physical Life Cycle of a Packet. When the packet
1197       reaches table 65, the logical egress port will once again be a logical
1198 patch port. In the same manner as described above, this logical patch
1199 port will cause the packet to be resubmitted to OpenFlow tables 8 to
1200 65, this time using the logical datapath representing the logical
1201 switch that the destination VM or container is attached to.
1202
1203 The packet traverses tables 8 to 65 a third and final time. If the des‐
1204 tination VM or container resides on a remote hypervisor, then table 32
1205 will send the packet on a tunnel port from the sender’s hypervisor to
1206 the remote hypervisor. Finally table 65 will output the packet directly
1207 to the destination VM or container.
1208
1209 The following sections describe two exceptions, where logical routers
1210 and/or logical patch ports are associated with a physical location.
1211
1212 Gateway Routers
1213
1214 A gateway router is a logical router that is bound to a physical loca‐
1215 tion. This includes all of the logical patch ports of the logical
1216 router, as well as all of the peer logical patch ports on logical
1217 switches. In the OVN Southbound database, the Port_Binding entries for
1218 these logical patch ports use the type l3gateway rather than patch, in
1219 order to distinguish that these logical patch ports are bound to a
1220 chassis.
1221
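       As an illustrative sketch (the router name lr0 and chassis name
       chassis-1 are hypothetical), a logical router can be made a gateway
       router by binding it to a chassis in the northbound database:

              ovn-nbctl set Logical_Router lr0 options:chassis=chassis-1
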
1222 When a hypervisor processes a packet on a logical datapath representing
1223       a logical switch, and the logical egress port is an l3gateway port rep‐
1224 resenting connectivity to a gateway router, the packet will match a
1225 flow in table 32 that sends the packet on a tunnel port to the chassis
1226 where the gateway router resides. This processing in table 32 is done
1227 in the same manner as for VIFs.
1228
1229 Distributed Gateway Ports
1230
1231 This section provides additional details on distributed gateway ports,
1232 outlined earlier.
1233
1234 The primary design goal of distributed gateway ports is to allow as
1235 much traffic as possible to be handled locally on the hypervisor where
1236 a VM or container resides. Whenever possible, packets from the VM or
1237 container to the outside world should be processed completely on that
1238 VM’s or container’s hypervisor, eventually traversing a localnet port
1239 instance or a tunnel to the physical network or a different OVN deploy‐
1240 ment. Whenever possible, packets from the outside world to a VM or con‐
1241 tainer should be directed through the physical network directly to the
1242 VM’s or container’s hypervisor.
1243
1244 In order to allow for the distributed processing of packets described
1245 in the paragraph above, distributed gateway ports need to be logical
1246 patch ports that effectively reside on every hypervisor, rather than
1247 l3gateway ports that are bound to a particular chassis. However, the
1248 flows associated with distributed gateway ports often need to be asso‐
1249 ciated with physical locations, for the following reasons:
1250
1251 · The physical network that the localnet port is attached
1252 to typically uses L2 learning. Any Ethernet address used
1253 over the distributed gateway port must be restricted to a
1254 single physical location so that upstream L2 learning is
1255 not confused. Traffic sent out the distributed gateway
1256 port towards the localnet port with a specific Ethernet
1257 address must be sent out one specific instance of the
1258 distributed gateway port on one specific chassis. Traffic
1259 received from the localnet port (or from a VIF on the
1260 same logical switch as the localnet port) with a specific
1261 Ethernet address must be directed to the logical switch’s
1262 patch port instance on that specific chassis.
1263
1264 Due to the implications of L2 learning, the Ethernet
1265 address and IP address of the distributed gateway port
1266 need to be restricted to a single physical location. For
1267 this reason, the user must specify one chassis associated
1268 with the distributed gateway port. Note that traffic
1269 traversing the distributed gateway port using other Eth‐
1270 ernet addresses and IP addresses (e.g. one-to-one NAT) is
1271 not restricted to this chassis.
1272
1273 Replies to ARP and ND requests must be restricted to a
1274 single physical location, where the Ethernet address in
1275 the reply resides. This includes ARP and ND replies for
1276 the IP address of the distributed gateway port, which are
1277 restricted to the chassis that the user associated with
1278 the distributed gateway port.
1279
1280 · In order to support one-to-many SNAT (aka IP masquerad‐
1281 ing), where multiple logical IP addresses spread across
1282 multiple chassis are mapped to a single external IP
1283 address, it will be necessary to handle some of the logi‐
1284 cal router processing on a specific chassis in a central‐
1285 ized manner. Since the SNAT external IP address is typi‐
1286 cally the distributed gateway port IP address, and for
1287 simplicity, the same chassis associated with the distrib‐
1288 uted gateway port is used.
1289
1290 The details of flow restrictions to specific chassis are described in
1291 the ovn-northd documentation.
1292
1293 While most of the physical location dependent aspects of distributed
1294 gateway ports can be handled by restricting some flows to specific
1295 chassis, one additional mechanism is required. When a packet leaves the
1296 ingress pipeline and the logical egress port is the distributed gateway
1297 port, one of two different sets of actions is required at table 32:
1298
1299 · If the packet can be handled locally on the sender’s
1300 hypervisor (e.g. one-to-one NAT traffic), then the packet
1301 should just be resubmitted locally to table 33, in the
1302 normal manner for distributed logical patch ports.
1303
1304 · However, if the packet needs to be handled on the chassis
1305 associated with the distributed gateway port (e.g. one-
1306 to-many SNAT traffic or non-NAT traffic), then table 32
1307 must send the packet on a tunnel port to that chassis.
1308
1309 In order to trigger the second set of actions, the chassisredirect type
1310 of southbound Port_Binding has been added. Setting the logical egress
1311 port to the type chassisredirect logical port is simply a way to indi‐
1312 cate that although the packet is destined for the distributed gateway
1313 port, it needs to be redirected to a different chassis. At table 32,
1314 packets with this logical egress port are sent to a specific chassis,
1315 in the same way that table 32 directs packets whose logical egress port
1316 is a VIF or a type l3gateway port to different chassis. Once the packet
1317 arrives at that chassis, table 33 resets the logical egress port to the
1318 value representing the distributed gateway port. For each distributed
1319 gateway port, there is one type chassisredirect port, in addition to
1320 the distributed logical patch port representing the distributed gateway
1321 port.
1322
1323 High Availability for Distributed Gateway Ports
1324
1325 OVN allows you to specify a prioritized list of chassis for a distrib‐
1326 uted gateway port. This is done by associating multiple Gateway_Chassis
1327 rows with a Logical_Router_Port in the OVN_Northbound database.
1328
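       For example, the following sketch (the port name lrp0 and the chas‐
       sis names gw1 and gw2 are hypothetical) associates two prioritized
       Gateway_Chassis rows with a logical router port, making gw1 the
       preferred gateway:

              ovn-nbctl lrp-set-gateway-chassis lrp0 gw1 20
              ovn-nbctl lrp-set-gateway-chassis lrp0 gw2 10
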
1329 When multiple chassis have been specified for a gateway, all chassis
1330 that may send packets to that gateway will enable BFD on tunnels to all
1331 configured gateway chassis. The current master chassis for the gateway
1332 is the highest priority gateway chassis that is currently viewed as
1333 active based on BFD status.
1334
1335 For more information on L3 gateway high availability, please refer to
1336 http://docs.ovn.org/en/latest/topics/high-availability.
1337
1338 Restrictions of Distributed Gateway Ports
1339
1340 Distributed gateway ports are used to connect to an external network,
1341 which can be a physical network modeled by a logical switch with a
1342 localnet port, and can also be a logical switch that interconnects dif‐
1343 ferent OVN deployments (see OVN Deployments Interconnection). Usually
1344 there can be many logical routers connected to the same external logi‐
1345       cal switch, as shown in the diagram below.
1346
1347 +--LS-EXT-+
1348 | | |
1349 | | |
1350 LR1 ... LRn
1351
1352
1353       In this diagram, there are n logical routers connected to a logical
1354       switch LS-EXT, each with a distributed gateway port, so that traffic
1355       sent to the external world is redirected to the gateway chassis as‐
1356       signed to the distributed gateway port of the respective logical router.
1357
1358       In the logical topology, nothing prevents a user from adding a route
1359       between the logical routers via the connected distributed gateway
1360       ports on LS-EXT. However, the route works only if LS-EXT is a physical
1361       network (modeled by a logical switch with a localnet port). In that
1362       case the packet is delivered between the gateway chassis through the
1363       localnet port via the physical network. If LS-EXT is a regular logical
1364       switch (backed by tunneling only, as in the OVN interconnection use
1365       case), then the packet is dropped on the source gateway chassis. The
1366       limitation is due to the fact that distributed gateway ports are tied
1367       to a physical location; without a physical network connection, the
1368       packet would either have to be dropped or transferred over tunnels,
1369       which could cause bigger problems, such as broadcast packets being
1370       redirected repeatedly by different gateway chassis.
1371
1372       With this limitation in mind, if a user does want direct connectivity
1373       between the logical routers, it is better to create an internal logi‐
1374       cal switch connected to the logical routers with regular logical
1375       router ports. These ports are completely distributed, so packets do
1376       not have to leave a chassis unless necessary, which is more optimal
1377       than routing via the distributed gateway ports.
1378
1379 ARP request and ND NS packet processing
1380
1381       Due to the fact that ARP requests and ND NS packets are usually
1382       broadcast packets, for performance reasons, OVN deals with requests
1383       that target OVN owned IP addresses (i.e., IP addresses configured on
1384       the router ports, VIPs, NAT IPs) in a specific way and only forwards
1385       them to the logical router that owns the target IP address. This be‐
1386       havior is different from that of traditional switches and implies
1387       that other routers/hosts connected to the logical switch will not
1388       learn the MAC/IP binding from the request packet.
1389
1390 All other ARP and ND packets are flooded in the L2 broadcast domain and
1391 to all attached logical patch ports.
1392
1393 Multiple localnet logical switches connected to a Logical Router
1394 It is possible to have multiple logical switches each with a localnet
1395 port (representing physical networks) connected to a logical router, in
1396 which one localnet logical switch may provide the external connectivity
1397       via a distributed gateway port and the rest of the localnet logical
1398 switches use VLAN tagging in the physical network. It is expected that
1399 ovn-bridge-mappings is configured appropriately on the chassis for all
1400 these localnet networks.
1401
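       As a sketch, a chassis that attaches two physical networks physnet1
       and physnet2 (hypothetical names) to provider bridges br-phys1 and
       br-phys2 might be configured with:

              ovs-vsctl set open . external-ids:ovn-bridge-mappings=\
              physnet1:br-phys1,physnet2:br-phys2
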
1402 East West routing
1403
1404       East-West routing between these localnet VLAN tagged logical switches
1405       works almost the same way as for normal logical switches. When the VM
1406       sends such a packet, then:
1407
1408 1. It first enters the ingress pipeline, and then egress pipe‐
1409 line of the source localnet logical switch datapath. It then
1410 enters the ingress pipeline of the logical router datapath
1411 via the logical router port in the source chassis.
1412
1413              2. A routing decision is made.
1414
1415 3. From the router datapath, packet enters the ingress pipeline
1416 and then egress pipeline of the destination localnet logical
1417 switch datapath and goes out of the integration bridge to
1418                 the provider bridge (belonging to the destination logical
1419                 switch) via the localnet port. While sending the packet to
1420                 the provider bridge, the source MAC (the router port MAC)
1421                 is replaced with a chassis unique MAC.
1422
1423 This chassis unique MAC is configured as global ovs config
1424                 on each chassis (e.g. via "ovs-vsctl set open . external-ids:
1425 ovn-chassis-mac-mappings="phys:aa:bb:cc:dd:ee:$i$i""). For
1426 more details, see ovn-controller(8).
1427
1428                 If the above is not configured, then the source MAC would be
1429                 the router port MAC. This could create problems if we have
1430                 more than one chassis. This is because, since the router port
1431                 is distributed, the same (MAC,VLAN) tuple will be seen by the
1432                 physical network from other chassis as well, which could cause
1433                 these issues:
1434
1435                     ·  Continuous MAC moves in the top-of-rack (ToR)
1436                        switch.
1437
1438                     ·  The ToR dropping the traffic that is causing con‐
1439                        tinuous MAC moves.
1440
1441                     ·  The ToR blocking the ports from which MAC moves
1442                        are happening.
1442
1443 4. The destination chassis receives the packet via the localnet
1444 port and sends it to the integration bridge. Before entering
1445                 the integration bridge, the source MAC of the packet will be
1446                 replaced with the router port MAC again. The packet enters the
1447 ingress pipeline and then egress pipeline of the destination
1448 localnet logical switch and finally gets delivered to the
1449 destination VM port.
1450
1451 External traffic
1452
1453       The following happens when a VM sends external traffic (which
1454 requires NATting) and the chassis hosting the VM doesn’t have a dis‐
1455 tributed gateway port.
1456
1457 1. The packet first enters the ingress pipeline, and then
1458 egress pipeline of the source localnet logical switch data‐
1459 path. It then enters the ingress pipeline of the logical
1460 router datapath via the logical router port in the source
1461 chassis.
1462
1463              2. A routing decision is made. Since the gateway router or the
1464 distributed gateway port doesn’t reside in the source chas‐
1465 sis, the traffic is redirected to the gateway chassis via
1466 the tunnel port.
1467
1468 3. The gateway chassis receives the packet via the tunnel port
1469 and the packet enters the egress pipeline of the logical
1470 router datapath. NAT rules are applied here. The packet then
1471 enters the ingress pipeline and then egress pipeline of the
1472 localnet logical switch datapath which provides external
1473 connectivity and finally goes out via the localnet port of
1474 the logical switch which provides external connectivity.
1475
1476 Although this works, the VM traffic is tunnelled when sent from the
1477 compute chassis to the gateway chassis. In order for it to work prop‐
1478 erly, the MTU of the localnet logical switches must be lowered to
1479 account for the tunnel encapsulation.
1480
1481 Centralized routing for localnet VLAN tagged logical switches connected to
1482 a Logical Router
1483 To overcome the tunnel encapsulation problem described in the previous
1484 section, OVN supports the option of enabling centralized routing for
1485       localnet VLAN tagged logical switches. The CMS can configure the option
1486 options:reside-on-redirect-chassis to true for each Logical_Router_Port
1487 which connects to the localnet VLAN tagged logical switches. This
1488 causes the gateway chassis (hosting the distributed gateway port) to
1489 handle all the routing for these networks, making it centralized. It
1490 will reply to the ARP requests for the logical router port IPs.
1491
1492 If the logical router doesn’t have a distributed gateway port connect‐
1493 ing to the localnet logical switch which provides external connectiv‐
1494 ity, then this option is ignored by OVN.
1495
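       A minimal sketch of enabling this option (the port name lrp-vlan100
       is hypothetical):

              ovn-nbctl set Logical_Router_Port lrp-vlan100 \
                  options:reside-on-redirect-chassis=true
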
1496       The following happens when a VM sends east-west traffic which needs
1497       to be routed:
1498
1499 1. The packet first enters the ingress pipeline, and then
1500 egress pipeline of the source localnet logical switch data‐
1501 path and is sent out via a localnet port of the source
1502                 localnet logical switch (instead of being sent to the
1503                 router pipeline).
1504
1505 2. The gateway chassis receives the packet via a localnet port
1506 of the source localnet logical switch and sends it to the
1507 integration bridge. The packet then enters the ingress pipe‐
1508 line, and then egress pipeline of the source localnet logi‐
1509 cal switch datapath and enters the ingress pipeline of the
1510 logical router datapath.
1511
1512              3. A routing decision is made.
1513
1514 4. From the router datapath, packet enters the ingress pipeline
1515 and then egress pipeline of the destination localnet logical
1516 switch datapath. It then goes out of the integration bridge
1517                 to the provider bridge (belonging to the destination logi‐
1518 cal switch) via a localnet port.
1519
1520 5. The destination chassis receives the packet via a localnet
1521 port and sends it to the integration bridge. The packet
1522 enters the ingress pipeline and then egress pipeline of the
1523                 destination localnet logical switch and is finally delivered to
1524 the destination VM port.
1525
1526       The following happens when a VM sends external traffic which
1527 requires NATting:
1528
1529 1. The packet first enters the ingress pipeline, and then
1530 egress pipeline of the source localnet logical switch data‐
1531 path and is sent out via a localnet port of the source
1532                 localnet logical switch (instead of being sent to the
1533                 router pipeline).
1534
1535 2. The gateway chassis receives the packet via a localnet port
1536 of the source localnet logical switch and sends it to the
1537 integration bridge. The packet then enters the ingress pipe‐
1538 line, and then egress pipeline of the source localnet logi‐
1539 cal switch datapath and enters the ingress pipeline of the
1540 logical router datapath.
1541
1542              3. A routing decision is made and NAT rules are applied.
1543
1544 4. From the router datapath, packet enters the ingress pipeline
1545 and then egress pipeline of the localnet logical switch
1546 datapath which provides external connectivity. It then goes
1547 out of the integration bridge to the provider bridge
1548 (belonging to the logical switch which provides external
1549 connectivity) via a localnet port.
1550
1551 The following happens for the reverse external traffic.
1552
1553 1. The gateway chassis receives the packet from a localnet port
1554 of the logical switch which provides external connectivity.
1555 The packet then enters the ingress pipeline and then egress
1556 pipeline of the localnet logical switch (which provides
1557 external connectivity). The packet then enters the ingress
1558 pipeline of the logical router datapath.
1559
1560 2. The ingress pipeline of the logical router datapath applies
1561 the unNATting rules. The packet then enters the ingress
1562 pipeline and then egress pipeline of the source localnet
1563 logical switch. Since the source VM doesn’t reside in the
1564 gateway chassis, the packet is sent out via a localnet port
1565 of the source logical switch.
1566
1567 3. The source chassis receives the packet via a localnet port
1568 and sends it to the integration bridge. The packet enters
1569 the ingress pipeline and then egress pipeline of the source
1570 localnet logical switch and finally gets delivered to the
1571 source VM port.
1572
1573 As an alternative to reside-on-redirect-chassis, OVN supports VLAN-
1574 based redirection. Whereas reside-on-redirect-chassis centralizes all
1575 router functionality, VLAN-based redirection only changes how OVN redi‐
1576 rects packets to the gateway chassis. By setting options:redirect-type
1577 to bridged on a distributed gateway port, OVN redirects packets to the
1578 gateway chassis using the localnet port of the router’s peer logical
1579 switch, instead of a tunnel.
1580
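       A minimal sketch of enabling bridged redirection (the port name
       lrp-ext is hypothetical):

              ovn-nbctl set Logical_Router_Port lrp-ext \
                  options:redirect-type=bridged
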
1581       The following happens for bridged redirection:
1582
1583              1. On the compute chassis, the packet passes through the
1584                 logical router’s ingress pipeline.
1585
1586              2. If the logical outport is the gateway-chassis-attached
1587                 router port, then the packet is "redirected" to the gate‐
1588                 way chassis using the peer logical switch’s localnet port.
1589
1590              3. This redirected packet has the router port MAC (the one to
1591                 which the gateway chassis is attached) as its destination
1592                 MAC. Its VLAN id is that of the localnet port (on the peer
1593                 logical switch of the logical router port).
1594
1595              4. On the gateway chassis, the packet will enter the logical
1596                 router pipeline again and this time it will pass through
1597                 the egress pipeline as well.
1598
1599              5. Reverse traffic packet flows stay the same.
1600
1601       Some guidelines and expectations with bridged redirection:
1602
1603              1. Since the router port MAC is the destination MAC, it has to
1604                 be ensured that the physical network learns it ONLY from the
1605                 gateway chassis. This means that ovn-chassis-mac-mappings
1606                 should be configured on all the compute nodes, so that the
1607                 physical network never learns the router port MAC from them.
1608
1609              2. Since the packet enters the logical router ingress pipe‐
1610                 line twice (once on the compute chassis and again on the
1611                 gateway chassis), the TTL will be decremented twice.
1612
1613              3. The default redirection type continues to be overlay. The
1614                 user can switch the redirect-type between bridged and
1615                 overlay by changing the value of options:redirect-type.
1616
1617 Life Cycle of a VTEP gateway
1618 A gateway is a chassis that forwards traffic between the OVN-managed
1619 part of a logical network and a physical VLAN, extending a tunnel-based
1620 logical network into a physical network.
1621
1622       The steps below often refer to details of the OVN and VTEP database
1623       schemas. Please see ovn-sb(5), ovn-nb(5) and vtep(5) for the full
1624       story on these databases.
1625
1626 1. A VTEP gateway’s life cycle begins with the administrator
1627 registering the VTEP gateway as a Physical_Switch table
1628 entry in the VTEP database. The ovn-controller-vtep con‐
1629                 nected to this VTEP database will recognize the new VTEP
1630 gateway and create a new Chassis table entry for it in the
1631 OVN_Southbound database.
1632
1633 2. The administrator can then create a new Logical_Switch table
1634 entry, and bind a particular vlan on a VTEP gateway’s port
1635 to any VTEP logical switch. Once a VTEP logical switch is
1636 bound to a VTEP gateway, the ovn-controller-vtep will detect
1637 it and add its name to the vtep_logical_switches column of
1638                 the Chassis table in the OVN_Southbound database. Note that
1639                 the tunnel_key column of the VTEP logical switch is not
1640                 filled at creation. The ovn-controller-vtep will set the
1641                 column when the corresponding VTEP logical switch is bound
1642                 to an OVN logical network.
1643
1644 3. Now, the administrator can use the CMS to add a VTEP logical
1645 switch to the OVN logical network. To do that, the CMS must
1646 first create a new Logical_Switch_Port table entry in the
1647 OVN_Northbound database. Then, the type column of this entry
1648 must be set to "vtep". Next, the vtep-logical-switch and
1649 vtep-physical-switch keys in the options column must also be
1650 specified, since multiple VTEP gateways can attach to the
1651                 same VTEP logical switch. Next, the addresses column of this
1652                 logical port must be set to "unknown". This adds a priority-0
1653                 entry in the "ls_in_l2_lkup" stage of the logical switch
1654                 ingress pipeline, so that traffic with a MAC address unknown
1655                 to OVN goes through the Logical_Switch_Port to the physical
1656                 network. (A sketch of these steps appears after this list.)
1657
1658 4. The newly created logical port in the OVN_Northbound data‐
1659 base and its configuration will be passed down to the
1660 OVN_Southbound database as a new Port_Binding table entry.
1661 The ovn-controller-vtep will recognize the change and bind
1662 the logical port to the corresponding VTEP gateway chassis.
1663                 Binding the same VTEP logical switch to different OVN log‐
1664                 ical networks is not allowed, and a warning
1665 will be generated in the log.
1666
1667              5. Besides binding to the VTEP gateway chassis, the ovn-con‐
1668 troller-vtep will update the tunnel_key column of the VTEP
1669 logical switch to the corresponding Datapath_Binding table
1670 entry’s tunnel_key for the bound OVN logical network.
1671
1672 6. Next, the ovn-controller-vtep will keep reacting to the con‐
1673                 figuration change in the Port_Binding in the OVN_Southbound
1674 database, and updating the Ucast_Macs_Remote table in the
1675 VTEP database. This allows the VTEP gateway to understand
1676 where to forward the unicast traffic coming from the
1677 extended external network.
1678
1679 7. Eventually, the VTEP gateway’s life cycle ends when the
1680 administrator unregisters the VTEP gateway from the VTEP
1681 database. The ovn-controller-vtep will recognize the event
1682 and remove all related configurations (Chassis table entry
1683 and port bindings) in the OVN_Southbound database.
1684
1685 8. When the ovn-controller-vtep is terminated, all related con‐
1686 figurations in the OVN_Southbound database and the VTEP
1687 database will be cleaned, including Chassis table entries
1688 for all registered VTEP gateways and their port bindings,
1689 and all Ucast_Macs_Remote table entries and the Logi‐
1690 cal_Switch tunnel keys.
1691
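       The sketch below illustrates steps 2 and 3 with hypothetical names
       (VTEP physical switch br-vtep with port p0, VTEP logical switch
       vls0, and OVN logical switch sw0):

              # Step 2: bind VLAN 100 on port p0 to VTEP logical switch vls0.
              vtep-ctl add-ls vls0
              vtep-ctl bind-ls br-vtep p0 100 vls0

              # Step 3: attach the VTEP logical switch to an OVN network.
              ovn-nbctl lsp-add sw0 sw0-vtep
              ovn-nbctl lsp-set-type sw0-vtep vtep
              ovn-nbctl lsp-set-options sw0-vtep vtep-physical-switch=br-vtep \
                  vtep-logical-switch=vls0
              ovn-nbctl lsp-set-addresses sw0-vtep unknown
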
1692 OVN Deployments Interconnection
1693 It is not uncommon for an operator to deploy multiple OVN clusters, for
1694 two main reasons. Firstly, an operator may prefer to deploy one OVN
1695       cluster for each availability zone, e.g. in different physical regions,
1696       to avoid a single point of failure. Secondly, there is always an upper
1697       limit to how far a single OVN control plane can scale.
1698
1699       Although the control planes of the different availability zones (AZs)
1700 are independent from each other, the workloads from different AZs may
1701 need to communicate across the zones. The OVN interconnection feature
1702 provides a native way to interconnect different AZs by L3 routing
1703 through transit overlay networks between logical routers of different
1704 AZs.
1705
1706 A global OVN Interconnection Northbound database is introduced for the
1707 operator (probably through CMS systems) to configure transit logical
1708 switches that connect logical routers from different AZs. A transit
1709       switch is similar to a regular logical switch, but it is used for
1710       interconnection purposes only. Typically, each transit switch can be
1711       used to connect all logical routers that belong to the same tenant
1712       across all AZs.
1713
1714       A dedicated daemon process, ovn-ic, the OVN interconnection control‐
1715       ler, in each AZ consumes this data and populates the corresponding
1716       logical switches in that AZ’s own northbound database, so that logical
1717       routers can be connected to the transit switch by creating patch port
1718       pairs in their northbound databases. Any router ports connected to the
1719 transit switches are considered interconnection ports, which will be
1720 exchanged between AZs.
1721
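       A sketch of this workflow with hypothetical names (transit switch
       ts1, logical router lr1 in one AZ, and gateway chassis gw1):

              # In the global interconnection northbound database:
              ovn-ic-nbctl ts-add ts1

              # In the AZ's own northbound database, connect a router to the
              # transit switch and pin its port to a gateway chassis:
              ovn-nbctl lrp-add lr1 lrp-lr1-ts1 aa:aa:aa:aa:aa:01 169.254.100.1/24
              ovn-nbctl lsp-add ts1 lsp-ts1-lr1 -- \
                  lsp-set-type lsp-ts1-lr1 router -- \
                  lsp-set-addresses lsp-ts1-lr1 router -- \
                  lsp-set-options lsp-ts1-lr1 router-port=lrp-lr1-ts1
              ovn-nbctl lrp-set-gateway-chassis lrp-lr1-ts1 gw1
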
1722 Physically, when workloads from different AZs communicate, packets need
1723 to go through multiple hops: source chassis, source gateway, destina‐
1724 tion gateway and destination chassis. All these hops are connected
1725 through tunnels so that the packets never leave overlay networks. A
1726 distributed gateway port is required to connect the logical router to a
1727 transit switch, with a gateway chassis specified, so that the traffic
1728 can be forwarded through the gateway chassis.
1729
1730 A global OVN Interconnection Southbound database is introduced for
1731 exchanging control plane information between the AZs. The data in this
1732       database is populated and consumed by the ovn-ic of each AZ. The main
1733 information in this database includes:
1734
1735 · Datapath bindings for transit switches, which mainly con‐
1736 tains the tunnel keys generated for each transit switch.
1737 Separate key ranges are reserved for transit switches so
1738 that they will never conflict with any tunnel keys
1739 locally assigned for datapaths within each AZ.
1740
1741              ·      Availability zones, which are registered by ovn-ic from
1742 each AZ.
1743
1744              ·      Gateways. Each AZ specifies chassis that are supposed
1745                     to work as interconnection gateways, and the ovn-ic
1746                     will populate this information to the interconnection
1747                     southbound DB. The ovn-ic from all the other AZs will
1748                     learn the gateways and populate them to their own
1749                     southbound DBs as chassis.
1750
1751 · Port bindings for logical switch ports created on the
1752                     transit switch. Each AZ maintains its logical router to
1753 transit switch connections independently, but ovn-ic
1754 automatically populates local port bindings on transit
1755 switches to the global interconnection southbound DB, and
1756 learns remote port bindings from other AZs back to its
1757 own northbound and southbound DBs, so that logical flows
1758 can be produced and then translated to OVS flows locally,
1759 which finally enables data plane communication.
1760
1761 · Routes that are advertised between different AZs. If
1762 enabled, routes are automatically exchanged by ovn-ic.
1763 Both static routes and directly connected subnets are
1764                     advertised. Options in the options column of the
1765                     NB_Global table of the OVN_NB database control route
1766                     advertisement behavior, such as enabling/disabling the
1767                     advertising/learning of routes, whether default routes
1768                     are advertised/learned, and blacklisted CIDRs. See
1769                     ovn-nb(5) for more details and the sketch below.
1770
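       For example, route advertisement and learning might be enabled in an
       AZ as follows (a sketch; see ovn-nb(5) for the authoritative option
       names):

              ovn-nbctl set NB_Global . \
                  options:ic-route-adv=true options:ic-route-learn=true
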
1771 The tunnel keys for transit switch datapaths and related port bindings
1772 must be agreed across all AZs. This is ensured by generating and stor‐
1773 ing the keys in the global interconnection southbound database. Any
1774       ovn-ic from any AZ can allocate the key, but race conditions are
1775       solved by enforcing a unique index for the column in the database.
1776
1777       Once each AZ’s NB and SB databases are populated with interconnection
1778       switches and ports, and the tunnel keys are agreed upon, data plane
1779       communication between the AZs is established.
1780
1781 When VXLAN tunneling is enabled in an OVN cluster, due to the limited
1782       range available for VNIs, the interconnection feature is not supported.
1783
1784 A day in the life of a packet crossing AZs
1785
1786 1. An IP packet is sent out from a VIF on a hypervisor (HV1) of
1787 AZ1, with destination IP belonging to a VIF in AZ2.
1788
1789 2. In HV1’s OVS flow tables, the packet goes through logical
1790 switch and logical router pipelines, and in a logical router
1791 pipeline, the routing stage finds out the next hop for the
1792 destination IP, which belongs to a remote logical router
1793 port in AZ2, and the output port, which is a chassis-redi‐
1794 rect port located on an interconnection gateway (GW1 in
1795                 AZ1), so HV1 sends the packet to GW1 through a tunnel.
1796
1797              3. On GW1, the packet continues with the logical router pipe‐
1798                 line and switches to the transit switch’s pipeline through
1799                 the peer port of the chassis redirect port. In the transit
1800                 switch’s pipeline it outputs to the remote logical port
1801                 which is located on a gateway (GW2) in AZ2, so GW1 sends
1802                 the packet to GW2 through a tunnel.
1803
1804              4. On GW2, the packet continues with the transit switch pipe‐
1805                 line and switches to the logical router pipeline through
1806                 the peer port, which is a chassis redirect port that is
1807                 located on GW2. The logical router pipeline then forwards
1808                 the packet to relevant logical pipelines according to the
1809                 destination IP address, and figures out the MAC and loca‐
1810                 tion of the destination VIF port - a hypervisor (HV2). GW2
1811                 then sends the packet to HV2 through a tunnel.
1812
1813 5. On HV2, the packet is delivered to the final destination VIF
1814 port by the logical switch egress pipeline, just the same
1815 way as for intra-AZ communications.
1816
1817 Native OVN services for external logical ports
1818       To provide OVN native services (like DHCP/IPv6 RA/DNS lookup) to
1819       cloud resources which are external, OVN supports external logical
1820       ports.
1821
1822 Below are some of the use cases where external ports can be used.
1823
1824              ·      VMs connected to SR-IOV NICs - Traffic from these VMs
1825                     bypasses the kernel stack, so the local ovn-controller
1826                     does not bind these ports and cannot serve native services.
1827
1828              ·      When the CMS supports provisioning baremetal servers.
1829
1830       OVN will provide the native services if the CMS has done the below
1831       configuration in the OVN Northbound Database (sketched after the list).
1832
1833 · A row is created in Logical_Switch_Port, configuring the
1834 addresses column and setting the type to external.
1835
1836 · ha_chassis_group column is configured.
1837
1838              ·      Each HA chassis that belongs to the HA chassis group has
1839                     ovn-bridge-mappings configured and has proper L2 connec‐
1840                     tivity so that it can receive the DHCP and other related
1841                     request packets from these external resources.
1842
1843 · The Logical_Switch of this port has a localnet port.
1844
1845 · Native OVN services are enabled by configuring the DHCP
1846 and other options like the way it is done for the normal
1847 logical ports.
1848
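       A sketch of this configuration with hypothetical names (logical
       switch sw0, external port sw0-ext, HA chassis group hagrp1, and
       chassis chassis-1 and chassis-2):

              ovn-nbctl ha-chassis-group-add hagrp1
              ovn-nbctl ha-chassis-group-add-chassis hagrp1 chassis-1 30
              ovn-nbctl ha-chassis-group-add-chassis hagrp1 chassis-2 20
              ovn-nbctl lsp-add sw0 sw0-ext
              ovn-nbctl lsp-set-type sw0-ext external
              ovn-nbctl lsp-set-addresses sw0-ext "aa:aa:aa:aa:aa:10 10.0.0.10"
              # Reference the HA chassis group from the port by UUID.
              ovn-nbctl set Logical_Switch_Port sw0-ext ha_chassis_group=$( \
                  ovn-nbctl --bare --columns _uuid find HA_Chassis_Group \
                  name=hagrp1)
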
1849       It is recommended to use the same HA chassis group for all the exter‐
1850       nal ports of a logical switch. Otherwise, the physical switch might
1851       see MAC flapping when different chassis provide the native services.
1852       For example, when supporting native DHCPv4 service, the DHCPv4 server
1853       MAC (configured in the options:server_mac column in table DHCP_Op‐
1854       tions) originating from different ports can cause MAC flapping. The
1855       MAC of the logical router IP(s) can also flap if the same HA chassis
1856       group is not set for all the external ports of a logical switch.
1857
1859 Role-Based Access Controls for the Southbound DB
1860 In order to provide additional security against the possibility of an
1861 OVN chassis becoming compromised in such a way as to allow rogue soft‐
1862 ware to make arbitrary modifications to the southbound database state
1863 and thus disrupt the OVN network, role-based access controls (see
1864 ovsdb-server(1) for additional details) are provided for the southbound
1865 database.
1866
1867 The implementation of role-based access controls (RBAC) requires the
1868 addition of two tables to an OVSDB schema: the RBAC_Role table, which
1869       is indexed by role name and maps the names of the various tables
1870       that may be modifiable for a given role to individual rows in a permis‐
1871       sions table containing detailed permission information for that role,
1872       and the permission table itself, which consists of rows containing the
1873       following information:
1874
1875 Table Name
1876 The name of the associated table. This column exists pri‐
1877 marily as an aid for humans reading the contents of this
1878 table.
1879
1880 Auth Criteria
1881 A set of strings containing the names of columns (or col‐
1882 umn:key pairs for columns containing string:string maps).
1883 The contents of at least one of the columns or column:key
1884 values in a row to be modified, inserted, or deleted must
1885 be equal to the ID of the client attempting to act on the
1886 row in order for the authorization check to pass. If the
1887              authorization criteria are empty, authorization checking
1888 is disabled and all clients for the role will be treated
1889 as authorized.
1890
1891 Insert/Delete
1892 Row insertion/deletion permission; boolean value indicat‐
1893 ing whether insertion and deletion of rows is allowed for
1894 the associated table. If true, insertion and deletion of
1895 rows is allowed for authorized clients.
1896
1897 Updatable Columns
1898 A set of strings containing the names of columns or col‐
1899 umn:key pairs that may be updated or mutated by autho‐
1900 rized clients. Modifications to columns within a row are
1901 only permitted when the authorization check for the
1902 client passes and all columns to be modified are included
1903 in this set of modifiable columns.
1904
1905 RBAC configuration for the OVN southbound database is maintained by
1906 ovn-northd. With RBAC enabled, modifications are only permitted for the
1907 Chassis, Encap, Port_Binding, and MAC_Binding tables, and are
1908 restricted as follows:
1909
1910 Chassis
1911 Authorization: client ID must match the chassis name.
1912
1913 Insert/Delete: authorized row insertion and deletion are
1914 permitted.
1915
1916 Update: The columns nb_cfg, external_ids, encaps, and
1917 vtep_logical_switches may be modified when authorized.
1918
1919 Encap Authorization: client ID must match the chassis name.
1920
1921 Insert/Delete: row insertion and row deletion are permit‐
1922 ted.
1923
1924 Update: The columns type, options, and ip can be modi‐
1925 fied.
1926
1927 Port_Binding
1928 Authorization: disabled (all clients are considered
1929              authorized). A future enhancement may add columns (or keys
1930 to external_ids) in order to control which chassis are
1931 allowed to bind each port.
1932
1933 Insert/Delete: row insertion/deletion are not permitted
1934              (ovn-northd maintains rows in this table).
1935
1936 Update: Only modifications to the chassis column are per‐
1937 mitted.
1938
1939 MAC_Binding
1940 Authorization: disabled (all clients are considered to be
1941 authorized).
1942
1943 Insert/Delete: row insertion/deletion are permitted.
1944
1945 Update: The columns logical_port, ip, mac, and datapath
1946 may be modified by ovn-controller.
1947
1948 Enabling RBAC for ovn-controller connections to the southbound database
1949 requires the following steps:
1950
1951 1. Creating SSL certificates for each chassis with the certifi‐
1952 cate CN field set to the chassis name (e.g. for a chassis
1953 with external-ids:system-id=chassis-1, via the command
1954 "ovs-pki -u req+sign chassis-1 switch").
1955
1956 2. Configuring each ovn-controller to use SSL when connecting
1957 to the southbound database (e.g. via "ovs-vsctl set open .
1958 external-ids:ovn-remote=ssl:x.x.x.x:6642").
1959
1960 3. Configuring a southbound database SSL remote with "ovn-con‐
1961 troller" role (e.g. via "ovn-sbctl set-connection
1962 role=ovn-controller pssl:6642").
1963
1964 Encrypt Tunnel Traffic with IPsec
1965       OVN tunnel traffic goes through physical routers and switches. These
1966       physical devices could be untrusted (devices in a public network) or
1967       might be compromised. Encrypting the tunnel traffic can prevent the
1968       traffic data from being monitored and manipulated.
1969
1970 The tunnel traffic is encrypted with IPsec. The CMS sets the ipsec col‐
1971       umn in the northbound NB_Global table to enable or disable IPsec en‐
1972       cryption. If ipsec is true, all OVN tunnels will be encrypted. If
1973       ipsec is false, no OVN tunnels will be encrypted.
1974
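       A minimal sketch of enabling IPsec for all OVN tunnels:

              ovn-nbctl set NB_Global . ipsec=true
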
1975       When the CMS updates the ipsec column in the northbound NB_Global table,
1976 ovn-northd copies the value to the ipsec column in the southbound
1977 SB_Global table. ovn-controller in each chassis monitors the southbound
1978 database and sets the options of the OVS tunnel interface accordingly.
1979 OVS tunnel interface options are monitored by the ovs-monitor-ipsec
1980       daemon, which configures the IKE daemon to set up IPsec connections.
1981
1982       Chassis authenticate each other using certificates. The authentication
1983       succeeds if the other end of the tunnel presents a certificate signed
1984       by a trusted CA and the common name (CN) matches the expected chassis
1985       name. The SSL certificates used in role-based access controls (RBAC)
1986       can be reused for IPsec, or ovs-pki can be used to create different
1987       certificates. The certificate must be x.509 version 3, with the CN
1988       and subjectAltName fields set to the chassis name.
1989
1990 The CA certificate, chassis certificate and private key are required to
1991 be installed in each chassis before enabling IPsec. Please see
1992 ovs-vswitchd.conf.db(5) for setting up CA based IPsec authentication.
1993
1995 Tunnel Encapsulations
1996 In general, OVN annotates logical network packets that it sends from
1997 one hypervisor to another with the following three pieces of metadata,
1998 which are encoded in an encapsulation-specific fashion:
1999
2000 · 24-bit logical datapath identifier, from the tunnel_key
2001 column in the OVN Southbound Datapath_Binding table.
2002
2003 · 15-bit logical ingress port identifier. ID 0 is reserved
2004 for internal use within OVN. IDs 1 through 32767, inclu‐
2005 sive, may be assigned to logical ports (see the tun‐
2006 nel_key column in the OVN Southbound Port_Binding table).
2007
2008 · 16-bit logical egress port identifier. IDs 0 through
2009 32767 have the same meaning as for logical ingress ports.
2010 IDs 32768 through 65535, inclusive, may be assigned to
2011 logical multicast groups (see the tunnel_key column in
2012 the OVN Southbound Multicast_Group table).
2013
2014 When VXLAN is enabled on any hypervisor in a cluster, datapath and
2015       egress port identifier ranges are reduced to 12 bits. This is done
2016       because only STT and Geneve provide a large space for metadata (over
2017       32 bits per packet). To accommodate VXLAN, the 24 available bits are
2018       split as follows:
2019
2020 · 12-bit logical datapath identifier, derived from the tun‐
2021 nel_key column in the OVN Southbound Datapath_Binding ta‐
2022 ble.
2023
2024              ·      12-bit logical egress port identifier. This reduced
2025                     range covers both logical ports and logical multicast
2026                     groups (see the tunnel_key columns in the OVN South‐
2027                     bound Port_Binding and Multicast_Group tables), con‐
2028                     sistent with the port limits described below.
2029
2030 · No logical ingress port identifier.
2031
2032 The limited space available for metadata when VXLAN tunnels are enabled
2033       in a cluster puts the following functional limitations onto features
2034 available to users:
2035
2036 · The maximum number of networks is reduced to 4096.
2037
2038 · The maximum number of ports per network is reduced to
2039 4096. (Including multicast group ports.)
2040
2041 · ACLs matching against logical ingress port identifiers
2042 are not supported.
2043
2044 · OVN interconnection feature is not supported.
2045
2046       In addition to the functional limitations described above, the follow‐
2047       ing should be considered before enabling VXLAN in your cluster:
2048
2049              ·      STT and Geneve use randomized UDP or TCP source ports,
2050                     which allows efficient distribution among multiple
2051                     paths in environments that use ECMP in their underlay.
2052
2053 · NICs are available to offload STT and Geneve encapsula‐
2054 tion and decapsulation.
2055
2056 Due to its flexibility, the preferred encapsulation between hypervisors
2057 is Geneve. For Geneve encapsulation, OVN transmits the logical datapath
2058 identifier in the Geneve VNI. OVN transmits the logical ingress and
2059 logical egress ports in a TLV with class 0x0102, type 0x80, and a
2060 32-bit value encoded as follows, from MSB to LSB:
2061
2062 1 15 16
2063 +---+------------+-----------+
2064 |rsv|ingress port|egress port|
2065 +---+------------+-----------+
2066 0
2067
2068
2069 Environments whose NICs lack Geneve offload may prefer STT encapsula‐
2070 tion for performance reasons. For STT encapsulation, OVN encodes all
2071 three pieces of logical metadata in the STT 64-bit tunnel ID as fol‐
2072 lows, from MSB to LSB:
2073
2074 9 15 16 24
2075 +--------+------------+-----------+--------+
2076 |reserved|ingress port|egress port|datapath|
2077 +--------+------------+-----------+--------+
2078 0
2079
2080
2081 For connecting to gateways, in addition to Geneve and STT, OVN supports
2082 VXLAN, because only VXLAN support is common on top-of-rack (ToR)
2083 switches. Currently, gateways have a feature set that matches the capa‐
2084 bilities as defined by the VTEP schema, so fewer bits of metadata are
2085 necessary. In the future, gateways that do not support encapsulations
2086 with large amounts of metadata may continue to have a reduced feature
2087 set.
2088
2089
2090
2091OVN 20.12.0 OVN Architecture ovn-architecture(7)