NM-CLOUD-SETUP(8)      Automatic Network Configuration in Cloud     NM-CLOUD-SETUP(8)



NAME
       nm-cloud-setup - Overview of Automatic Network Configuration in Cloud

OVERVIEW
       When running a virtual machine in a public cloud environment, it is
       desirable to automatically configure the network of that VM. In simple
       setups, the VM only has one network interface and the public cloud
       supports automatic configuration via DHCP, DHCPv6 or IPv6 autoconf.
       However, the virtual machine might have multiple network interfaces, or
       multiple IP addresses and IP subnets on one interface, which cannot be
       configured via DHCP. Also, the administrator may reconfigure the
       network while the machine is running. NetworkManager's nm-cloud-setup
       is a tool that automatically picks up such configuration in cloud
       environments and updates the network configuration of the host.

       Multiple cloud providers are supported. See the section called
       “SUPPORTED CLOUD PROVIDERS”.

DETAILS
       The goal of nm-cloud-setup is to be configuration-less and work
       automatically. All you need is to opt in to the desired cloud providers
       (see the section called “ENVIRONMENT VARIABLES”) and run
       /usr/libexec/nm-cloud-setup.

       Usually this is done by enabling the nm-cloud-setup.service systemd
       service and letting it run periodically. For that there is both a
       nm-cloud-setup.timer systemd timer and a NetworkManager dispatcher
       script.

       nm-cloud-setup configures the network by fetching the configuration
       from the well-known metadata server of the cloud provider. That means
       it already needs the network configured to the point where it can
       reach the metadata server. Commonly that means that a simple
       connection profile is activated, possibly using DHCP to get the
       primary IP address. NetworkManager creates such a profile for ethernet
       devices automatically, unless configured otherwise via the
       "no-auto-default" setting in NetworkManager.conf. One possible
       alternative is to create such an initial profile with nmcli device
       connect "$DEVICE" or nmcli connection add type ethernet ....

       By setting the user-data org.freedesktop.nm-cloud-setup.skip=yes on
       the profile, nm-cloud-setup will skip the device.

       nm-cloud-setup modifies the runtime configuration, akin to nmcli
       device modify. With this approach, the configuration is not persisted
       and is only preserved until the device disconnects.

   /usr/libexec/nm-cloud-setup
       The binary /usr/libexec/nm-cloud-setup does most of the work. It
       supports no command line arguments but can be configured via
       environment variables. See the section called “ENVIRONMENT VARIABLES”
       for the supported environment variables.

       By default, all cloud providers are disabled unless you opt in by
       enabling one or several providers. If cloud providers are enabled, the
       program tries to fetch the host's configuration from a metadata server
       of the cloud via HTTP. If no configuration can be fetched, no cloud
       provider is detected and the program quits. If host configuration is
       obtained, the corresponding cloud provider is successfully detected
       and the network of the host will be configured.

       It is intended to re-run nm-cloud-setup every time the configuration
       may have changed. The tool is idempotent, so it should be OK to run it
       more often than necessary. You could run /usr/libexec/nm-cloud-setup
       directly. However, it may be preferable to restart the nm-cloud-setup
       systemd service instead, or use the timer or dispatcher script to run
       it periodically (see below).

   nm-cloud-setup.service systemd unit
       Usually /usr/libexec/nm-cloud-setup is not run directly, but only by
       systemctl restart nm-cloud-setup.service. This ensures that the tool
       only runs once at any time. It also allows integration with the
       nm-cloud-setup systemd timer, and enabling/disabling the service via
       systemd.

       As you need to set environment variables to configure the
       nm-cloud-setup binary, you can do so via systemd override files. Try
       systemctl edit nm-cloud-setup.service.

   nm-cloud-setup.timer systemd timer
       /usr/libexec/nm-cloud-setup is intended to run whenever an update is
       necessary. For example, during boot or when changing the network
       configuration of the virtual machine via the cloud provider.

       One way to do this is by enabling the nm-cloud-setup.timer systemd
       timer with systemctl enable --now nm-cloud-setup.timer.

   /usr/lib/NetworkManager/dispatcher.d/90-nm-cloud-setup.sh
       There is also a NetworkManager dispatcher script that will run, for
       example, when an interface is activated by NetworkManager. Together
       with the nm-cloud-setup.timer systemd timer, this script helps to
       automatically pick up changes to the network.

       The dispatcher script will do nothing unless the systemd service is
       enabled. To use the dispatcher script, you should therefore run
       systemctl enable nm-cloud-setup.service once.

ENVIRONMENT VARIABLES
       The following environment variables are used to configure
       /usr/libexec/nm-cloud-setup. You may want to configure them with a
       drop-in for the systemd service, for example by calling systemctl edit
       nm-cloud-setup.service and configuring [Service] Environment=, as
       described in the systemd.exec(5) manual.

       •   NM_CLOUD_SETUP_LOG: control the logging verbosity. Set it to one
           of TRACE, DEBUG, INFO, WARN, ERR or OFF. The program will print
           messages on stdout, and the default level is WARN. When run as a
           systemd service, the log will be collected by journald and can be
           seen with journalctl.
       •   NM_CLOUD_SETUP_AZURE: boolean, whether Microsoft Azure support is
           enabled. Defaults to no.

       •   NM_CLOUD_SETUP_EC2: boolean, whether Amazon EC2 (AWS) support is
           enabled. Defaults to no.

       •   NM_CLOUD_SETUP_GCP: boolean, whether Google GCP support is
           enabled. Defaults to no.

       •   NM_CLOUD_SETUP_ALIYUN: boolean, whether Alibaba Cloud (Aliyun)
           support is enabled. Defaults to no.
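       The per-provider opt-in can be sketched as follows. This is a minimal
       illustration only, not the tool's actual code, and the exact set of
       boolean spellings accepted by nm-cloud-setup is an assumption here:

```python
import os

def provider_enabled(name, default=False):
    # Interpret common boolean spellings (assumed set; check the tool's
    # source for the authoritative parsing rules).
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "yes", "true", "on")

os.environ["NM_CLOUD_SETUP_EC2"] = "yes"
ec2_enabled = provider_enabled("NM_CLOUD_SETUP_EC2")
gcp_enabled = provider_enabled("NM_CLOUD_SETUP_GCP")  # unset, so defaults to no
```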

DEBUGGING
       Enable debug logging by setting the NM_CLOUD_SETUP_LOG environment
       variable to TRACE.

       In the common case where nm-cloud-setup is running as a systemd
       service, this can be done via systemctl edit nm-cloud-setup.service by
       adding Environment=NM_CLOUD_SETUP_LOG=TRACE to the [Service] section.
       Afterwards, the log can be found in syslog via journalctl. You may
       also want to enable debug logging in NetworkManager as described in
       the DEBUGGING section in the NetworkManager(8) manual. When sharing
       logs, it's best to share complete logs and not preemptively filter for
       NetworkManager or nm-cloud-setup logs.
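       Such a drop-in override might look like this (the drop-in file name
       chosen here is arbitrary):

```ini
# /etc/systemd/system/nm-cloud-setup.service.d/90-debug.conf
[Service]
Environment=NM_CLOUD_SETUP_LOG=TRACE
```

       After editing, run systemctl daemon-reload and restart the service for
       the change to take effect.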

EXAMPLE SETUP FOR CONFIGURING AND PREDEPLOYING NM-CLOUD-SETUP
       As detailed before, nm-cloud-setup needs to be explicitly enabled. As
       it runs as a systemd service and timer, that essentially means
       enabling and configuring those. This can be done by dropping the
       correct files and symlinks to disk.

       The following example enables nm-cloud-setup for Amazon EC2 cloud:

           dnf install -y NetworkManager-cloud-setup

           mkdir -p /etc/systemd/system/nm-cloud-setup.service.d
           cat > /etc/systemd/system/nm-cloud-setup.service.d/10-enable-ec2.conf << EOF
           [Service]
           Environment=NM_CLOUD_SETUP_EC2=yes
           EOF

           # systemctl enable nm-cloud-setup.service
           mkdir -p /etc/systemd/system/NetworkManager.service.wants/
           ln -s /usr/lib/systemd/system/nm-cloud-setup.service /etc/systemd/system/NetworkManager.service.wants/nm-cloud-setup.service

           # systemctl enable nm-cloud-setup.timer
           mkdir -p /etc/systemd/system/timers.target.wants/
           ln -s /usr/lib/systemd/system/nm-cloud-setup.timer /etc/systemd/system/timers.target.wants/nm-cloud-setup.timer

           # systemctl daemon-reload


SUPPORTED CLOUD PROVIDERS
   Amazon EC2 (AWS)
       For AWS, the tool tries to fetch configuration from
       http://169.254.169.254/. Currently, it only configures IPv4 and does
       nothing about IPv6. It will do the following.

       •   First fetch http://169.254.169.254/latest/meta-data/ to determine
           whether the expected API is present. This determines whether the
           EC2 environment is detected and whether to proceed to configure
           the host using EC2 metadata.

       •   Fetch
           http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/
           to get the list of available interfaces. Interfaces are identified
           by their MAC address.

       •   Then, for each interface, fetch
           http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/$MAC/subnet-ipv4-cidr-block
           and
           http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/$MAC/local-ipv4s.
           Thereby we get a list of local IPv4 addresses and one CIDR subnet
           block.

       •   Then nm-cloud-setup iterates over all interfaces for which it
           could fetch IP configuration. If no ethernet device for the
           respective MAC address is found, it is skipped. Also, if the
           device is currently not activated in NetworkManager, or if the
           currently activated profile has the user-data
           org.freedesktop.nm-cloud-setup.skip=yes, it is skipped.

           If only one interface with one address is configured, the tool
           does nothing and leaves the automatic configuration that was
           obtained via DHCP.

           Otherwise, the tool will change the runtime configuration of the
           device.

           •   Add static IPv4 addresses for all the configured addresses
               from local-ipv4s, with prefix length according to
               subnet-ipv4-cidr-block. For example, we might have here 2 IP
               addresses like "172.16.5.3/24,172.16.5.4/24".

           •   Choose a route table 30400 + the index of the interface and
               add a default route 0.0.0.0/0. The gateway is the first IP
               address in the CIDR subnet block. For example, we might get a
               route "0.0.0.0/0 172.16.5.1 10 table=30400".

               Also choose a route table 30200 + the interface index. This
               contains direct routes to the subnets of this interface.

           •   Finally, add a policy routing rule for each address. For
               example, "priority 30200 from 172.16.5.3/32 table 30200,
               priority 30200 from 172.16.5.4/32 table 30200" and "priority
               30400 from 172.16.5.3/32 table 30400, priority 30400 from
               172.16.5.4/32 table 30400". The 30200+ rules select the table
               to reach the subnet directly, while the 30400+ rules use the
               default route. Also add a rule "priority 30350 table main
               suppress_prefixlength 0". This has a priority between the two
               previous rules and causes a lookup of routes in the main table
               while ignoring the default route. The purpose of this is so
               that other specific routes in the main table are honored over
               the default route in table 30400+.

       With the above example, this roughly corresponds for interface eth0 to
       nmcli device modify "eth0" ipv4.addresses
       "172.16.5.3/24,172.16.5.4/24" ipv4.routes "172.16.5.0/24 0.0.0.0 10
       table=30200, 0.0.0.0/0 172.16.5.1 10 table=30400"
       ipv4.routing-rules "priority 30200 from 172.16.5.3/32 table 30200,
       priority 30200 from 172.16.5.4/32 table 30200, priority 30350 table
       main suppress_prefixlength 0, priority 30400 from 172.16.5.3/32
       table 30400, priority 30400 from 172.16.5.4/32 table 30400". Note
       that this replaces the previous addresses, routes and rules with the
       new information. But also note that this only changes the runtime
       configuration of the device. The connection profile on disk is not
       affected.
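       The table and rule derivation described above can be sketched as
       follows. This is an illustration of the scheme, not the actual
       nm-cloud-setup implementation; function and variable names are made
       up:

```python
import ipaddress

def ec2_policy_routing(iface_index, subnet_cidr, addresses, route_metric=10):
    """Derive the routes and policy rules for one interface (sketch)."""
    subnet = ipaddress.ip_network(subnet_cidr)
    gateway = str(subnet.network_address + 1)  # first IP in the CIDR block
    table_subnet = 30200 + iface_index   # direct routes to the subnet
    table_default = 30400 + iface_index  # default route via the gateway
    routes = [
        f"{subnet} 0.0.0.0 {route_metric} table={table_subnet}",
        f"0.0.0.0/0 {gateway} {route_metric} table={table_default}",
    ]
    rules = [f"priority {table_subnet} from {a}/32 table {table_subnet}"
             for a in addresses]
    # Looked up between the two rule groups: honors specific routes in the
    # main table while ignoring the main table's default route.
    rules.append("priority 30350 table main suppress_prefixlength 0")
    rules += [f"priority {table_default} from {a}/32 table {table_default}"
              for a in addresses]
    return routes, rules

routes, rules = ec2_policy_routing(0, "172.16.5.0/24",
                                   ["172.16.5.3", "172.16.5.4"])
```

       For interface index 0 this reproduces the tables 30200/30400 and the
       rule strings shown in the example above.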

   Google Cloud Platform (GCP)
       For GCP, the metadata is fetched from URIs starting with
       http://metadata.google.internal/computeMetadata/v1/ with an HTTP
       header "Metadata-Flavor: Google". Currently, the tool only configures
       IPv4 and does nothing about IPv6. It will do the following.

       •   First fetch
           http://metadata.google.internal/computeMetadata/v1/instance/id to
           detect whether the tool runs on Google Cloud Platform. Only if the
           platform is detected will it continue fetching the configuration.

       •   Fetch
           http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/
           to get the list of available interface indexes. These indexes can
           be used for further lookups.

       •   Then, for each interface, fetch
           http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IFACE_INDEX/mac
           to get the corresponding MAC address of the found interfaces. The
           MAC address is used to identify the device later on.

       •   Then, for each interface with a MAC address, fetch
           http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IFACE_INDEX/forwarded-ips/
           and then all the found IP addresses at
           http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IFACE_INDEX/forwarded-ips/$FIPS_INDEX.

       •   At this point, we have a list of all interfaces (by MAC address)
           and their configured IPv4 addresses.

           For each device, we look up the currently applied connection in
           NetworkManager. That implies that the device is currently
           activated in NetworkManager. If no such device was found in
           NetworkManager, or if the profile has the user-data
           org.freedesktop.nm-cloud-setup.skip=yes, we skip the device. Now,
           for each found IP address, we add a static route "$FIPS_ADDR/32
           0.0.0.0 100 type=local" and reapply the change.

           The effect is not unlike calling nmcli device modify "$DEVICE"
           ipv4.routes "$FIPS_ADDR/32 0.0.0.0 100 type=local [,...]" for all
           relevant devices and all found addresses.
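       The header requirement above can be illustrated with a small request
       builder. This is a sketch only; no request is actually sent here, and
       the helper name is made up:

```python
import urllib.request

GCP_METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1/"

def gcp_metadata_request(path):
    # Every request to the GCP metadata server must carry the
    # "Metadata-Flavor: Google" header, or the server rejects it.
    return urllib.request.Request(
        GCP_METADATA_BASE + path,
        headers={"Metadata-Flavor": "Google"},
    )

req = gcp_metadata_request("instance/network-interfaces/")
```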

   Microsoft Azure
       For Azure, the metadata is fetched from URIs starting with
       http://169.254.169.254/metadata/instance with a URL parameter
       "?format=text&api-version=2017-04-02" and an HTTP header
       "Metadata:true". Currently, the tool only configures IPv4 and does
       nothing about IPv6. It will do the following.

       •   First fetch
           http://169.254.169.254/metadata/instance?format=text&api-version=2017-04-02
           to detect whether the tool runs on Azure Cloud. Only if the
           platform is detected will it continue fetching the configuration.

       •   Fetch
           http://169.254.169.254/metadata/instance/network/interface/?format=text&api-version=2017-04-02
           to get the list of available interface indexes. These indexes can
           be used for further lookups.

       •   Then, for each interface, fetch
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/macAddress?format=text&api-version=2017-04-02
           to get the corresponding MAC address of the found interfaces. The
           MAC address is used to identify the device later on.

       •   Then, for each interface with a MAC address, fetch
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/ipAddress/?format=text&api-version=2017-04-02
           to get the list of (indexes of) IP addresses on that interface.

       •   Then, for each IP address index, fetch the address at
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/ipAddress/$ADDR_INDEX/privateIpAddress?format=text&api-version=2017-04-02.
           Also fetch the size of the subnet and prefix for the interface
           from
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/subnet/0/address/?format=text&api-version=2017-04-02
           and
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/subnet/0/prefix/?format=text&api-version=2017-04-02.

       •   At this point, we have a list of all interfaces (by MAC address)
           and their configured IPv4 addresses.

       Then the tool configures the system as it does for the AWS
       environment. That is, using source-based policy routing with the
       tables/rules 30200/30400.
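       Since the subnet address and prefix are fetched as two separate text
       values, they need to be joined into one network. A minimal sketch
       (the address values here are illustrative):

```python
import ipaddress

def azure_subnet(address, prefix):
    # Join the separately fetched subnet address and prefix length
    # into a single IPv4 network.
    return ipaddress.ip_network(f"{address}/{prefix}")

subnet = azure_subnet("172.16.5.0", "24")
```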

   Alibaba Cloud (Aliyun)
       For Aliyun, the tool tries to fetch configuration from
       http://100.100.100.200/. Currently, it only configures IPv4 and does
       nothing about IPv6. It will do the following.

       •   First fetch http://100.100.100.200/2016-01-01/meta-data/ to
           determine whether the expected API is present. This determines
           whether the Aliyun environment is detected and whether to proceed
           to configure the host using Aliyun metadata.

       •   Fetch
           http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/
           to get the list of available interfaces. Interfaces are identified
           by their MAC address.

       •   Then, for each interface, fetch
           http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/vpc-cidr-block,
           http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/private-ipv4s,
           http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/netmask
           and
           http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/gateway.
           Thereby we get a list of private IPv4 addresses, the VPC CIDR
           block, the netmask and the gateway.

       •   Then nm-cloud-setup iterates over all interfaces for which it
           could fetch IP configuration. If no ethernet device for the
           respective MAC address is found, it is skipped. Also, if the
           device is currently not activated in NetworkManager, or if the
           currently activated profile has the user-data
           org.freedesktop.nm-cloud-setup.skip=yes, it is skipped. Also, if
           there is only one interface with one IP address, the tool does
           nothing.

           Then the tool configures the system as it does for the AWS
           environment. That is, using source-based policy routing with the
           tables/rules 30200/30400. One difference to AWS is that the
           gateway is also fetched via metadata instead of using the first
           IP address in the subnet.
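       Another small difference: the netmask key suggests a dotted-quad
       value (e.g. "255.255.255.0") rather than a prefix length, which is an
       assumption here. Converting it can be sketched as:

```python
import ipaddress

def netmask_to_prefix(netmask):
    # e.g. "255.255.255.0" -> 24
    return ipaddress.ip_network(f"0.0.0.0/{netmask}").prefixlen

prefix = netmask_to_prefix("255.255.255.0")
```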

SEE ALSO
       NetworkManager(8), nmcli(1)



NetworkManager 1.44.2                                           NM-CLOUD-SETUP(8)