NVMETCLI(8)                                                        NVMETCLI(8)


NAME
nvmetcli - Configure NVMe-over-Fabrics Target.

SYNOPSIS
nvmetcli
nvmetcli clear
nvmetcli restore [filename.json]

DESCRIPTION
nvmetcli is a program used for viewing, editing, saving, and starting a
Linux kernel NVMe Target configuration for an NVMe-over-Fabrics network.
It allows an administrator to export local storage resources (such as
NVMe devices, files, and volumes) and expose them to remote systems
based on the NVMe-over-Fabrics specification from
http://www.nvmexpress.org.

nvmetcli is run as root and has two modes:

1. An interactive configuration shell

2. A command-line mode that takes a command as an argument

The term NQN used throughout this man page refers to the NVMe Qualified
Name format, which an NVMe endpoint (device, subsystem, etc.) must
follow to guarantee a unique name under the NVMe standard. Any name can
be used in a network setup, but if it does not follow the NQN format it
may not be unique on an NVMe-over-Fabrics network.

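As an illustration, an NQN combines a date, a reverse domain name, and
an arbitrary identifier; the value below is only a made-up example:

nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
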
Note that some of the fields set for an NVMe Target port in interactive
mode are defined in the "Discovery Log Page" section of the
NVMe-over-Fabrics specification. Each NVMe Target has a discovery
controller mechanism that an NVMe Host can use to determine the NVM
subsystems it can access. nvmetcli adds a record to the discovery
controller for each new subsystem entry and for each port entry that
the newly created subsystem entry binds to (see the OPTIONS and
EXAMPLES sections). Each NVMe Host only sees the discovery entries of
subsystems that list it in /subsystems/[NQN NAME]/allowed_hosts and of
the port through which it is connected to the NVMe Target. An NVMe Host
can retrieve these discovery logs via the nvme-cli tool
(https://github.com/linux-nvme/nvme-cli).

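For illustration, once a subsystem and port are configured (see
EXAMPLES), a Host can list the discovery log entries exposed to it
using nvme-cli rather than nvmetcli; the transport, address, and
service ID below are placeholders that must match the target port
configuration:

# nvme discover -t rdma -a 192.168.6.68 -s 4420
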
OPTIONS
Interactive Configuration Shell

To start the interactive configuration shell, type nvmetcli on the
command-line. nvmetcli interacts with the Linux kernel NVMe Target
configfs subsystem starting at the base nvmetcli directories /ports,
/subsystems, and /hosts. Configuration changes entered by the
administrator are applied immediately to the kernel target
configuration. The following commands can be used while in the
interactive configuration shell mode:

┌───────────────────────────┬────────────────────────────┐
│cd                         │ Navigates around the       │
│                           │ configuration tree.        │
├───────────────────────────┼────────────────────────────┤
│ls                         │ Lists the contents of the  │
│                           │ current tree node.         │
├───────────────────────────┼────────────────────────────┤
│create [NQN name]/[#]      │ Creates a new object with  │
│                           │ the specified name or      │
│                           │ number. If no name or      │
│                           │ number is given, one is    │
│                           │ generated automatically.   │
├───────────────────────────┼────────────────────────────┤
│delete [NQN name]/[#]      │ Deletes the object with    │
│                           │ the specified name or      │
│                           │ number.                    │
├───────────────────────────┼────────────────────────────┤
│set attr                   │ Used under                 │
│allow_any_host=[0/1]       │ /subsystems/[NQN name] to  │
│                           │ specify whether any NVMe   │
│                           │ Host may connect to the    │
│                           │ subsystem.                 │
├───────────────────────────┼────────────────────────────┤
│set device path=[device    │ Used under                 │
│path]                      │ /subsystems/[NQN           │
│                           │ name]/namespaces/[#] to    │
│                           │ set the (storage) device   │
│                           │ to be used.                │
├───────────────────────────┼────────────────────────────┤
│set device nguid=[string]  │ Used under                 │
│                           │ /subsystems/[NQN           │
│                           │ name]/namespaces/[#] to    │
│                           │ set the unique identifier  │
│                           │ (NGUID) of the device      │
│                           │ backing the namespace.     │
├───────────────────────────┼────────────────────────────┤
│enable/disable             │ Used under                 │
│                           │ /subsystems/[NQN           │
│                           │ name]/namespaces/[#] to    │
│                           │ enable or disable the      │
│                           │ namespace.                 │
├───────────────────────────┼────────────────────────────┤
│set addr [discovery log    │ Used under /ports/[#] to   │
│page field]=[string]       │ set the transport          │
│                           │ attributes (discovery log  │
│                           │ page fields) of the port.  │
│                           │ See EXAMPLES for more      │
│                           │ information.               │
├───────────────────────────┼────────────────────────────┤
│saveconfig [filename.json] │ Saves the NVMe Target      │
│                           │ configuration in JSON      │
│                           │ format. If no filename is  │
│                           │ given, the configuration   │
│                           │ is saved to                │
│                           │ /etc/nvmet/config.json.    │
│                           │ The file can be edited     │
│                           │ directly with a preferred  │
│                           │ text editor.               │
├───────────────────────────┼────────────────────────────┤
│exit                       │ Quits the interactive      │
│                           │ configuration shell.       │
└───────────────────────────┴────────────────────────────┘

Command Line Mode

Typing nvmetcli [cmd] on the command-line executes the given command
without entering the interactive configuration shell.

┌────────────────────────┬───────────────────────────┐
│restore [filename.json] │ Loads a saved NVMe Target │
│                        │ configuration. If no      │
│                        │ filename is given,        │
│                        │ /etc/nvmet/config.json is │
│                        │ used.                     │
├────────────────────────┼───────────────────────────┤
│clear                   │ Clears the current NVMe   │
│                        │ Target configuration.     │
└────────────────────────┴───────────────────────────┘

EXAMPLES
Make sure to run nvmetcli as root, that the nvmet module and all
modules your devices and transports depend on are loaded, and that
configfs is mounted on /sys/kernel/config using:

mount -t configfs none /sys/kernel/config

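For example, assuming an RDMA setup, the core target module and the
matching transport module can be loaded with modprobe before starting
nvmetcli (use nvmet-fc or nvme-loop instead of nvmet-rdma for other
transports):

modprobe nvmet
modprobe nvmet-rdma
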
The following section walks through a configuration example.

· To get started with the interactive mode and the nvmetcli command
prompt, type (as root):

# ./nvmetcli
...>

· Create a subsystem. If you do not specify a name, an NQN will be
generated, which is probably the best choice. We don't do that here
because the generated name would be random:

> cd /subsystems
...> create testnqn

· Add access for a specific NVMe Host by its NQN:

...> cd /hosts
...> create hostnqn
...> cd /subsystems/testnqn
...> set attr allow_any_host=0
...> cd /subsystems/testnqn/allowed_hosts/
...> create hostnqn

· Remove a Host's access to the subsystem by deleting its NQN:

...> cd /subsystems/testnqn/allowed_hosts/
...> delete hostnqn

· Alternatively, the following allows any Host to connect to the
subsystem. Only use this in tightly controlled environments:

...> cd /subsystems/testnqn/
...> set attr allow_any_host=1

· Create a new namespace. If you do not specify a namespace ID, the
first unused one will be used:

...> cd /subsystems/testnqn/namespaces
...> create 1
...> cd 1
...> set device path=/dev/nvme0n1
...> enable

Note that in the above setup the device_nguid attribute does not have
to be set for correct NVMe Target functionality, but it is advised to
set it so that a namespace can be matched to the exact device across
clear and restore operations, as illustrated below.

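For example, still assuming the namespace created above, a (made-up)
NGUID could be assigned to it like this:

...> cd /subsystems/testnqn/namespaces/1
...> set device nguid=ef90689c-6c46-d44c-89c1-4067801309a8
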
· Create a loopback port that can be used with the nvme-loop module on
the same physical machine...

...> cd /ports/
...> create 1
...> cd 1/
...> set addr trtype=loop
...> cd subsystems/
...> create testnqn

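For reference, an NVMe Host on the same machine could then connect to
this loopback port using nvme-cli (not part of nvmetcli), provided the
nvme-loop module is loaded:

# nvme connect -t loop -n testnqn
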
· or create an RDMA (IB, RoCE, iWARP) port using IPv4 addressing. 4420
is the IANA-assigned default port for NVMe over Fabrics using RDMA:

...> cd /ports/
...> create 2
...> cd 2/
...> set addr trtype=rdma
...> set addr adrfam=ipv4
...> set addr traddr=192.168.6.68
...> set addr trsvcid=4420
...> cd subsystems/
...> create testnqn

· or create an FC port. traddr is the WWNN/WWPN of the FC port:

...> cd /ports/
...> create 3
...> cd 3/
...> set addr trtype=fc
...> set addr adrfam=fc
...> set addr traddr=nn-0x1000000044001123:pn-0x2000000055001123
...> set addr trsvcid=none
...> cd subsystems/
...> create testnqn

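· Whichever transport is used, the resulting kernel configuration can
also be inspected directly in configfs, which is where nvmetcli
applies its changes. For example:

# ls /sys/kernel/config/nvmet/
hosts  ports  subsystems
# ls /sys/kernel/config/nvmet/ports/2/subsystems/
testnqn
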
· Saving the NVMe Target configuration:

./nvmetcli
...> saveconfig test.json

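· The saved file mirrors the configfs layout used above. As a rough,
abridged illustration only (the exact contents depend on the nvmetcli
version and on your configuration), test.json looks something like:

{
  "hosts": [ { "nqn": "hostnqn" } ],
  "ports": [
    {
      "addr": { "adrfam": "ipv4", "traddr": "192.168.6.68",
                "trsvcid": "4420", "trtype": "rdma" },
      "portid": 2,
      "subsystems": [ "testnqn" ]
    }
  ],
  "subsystems": [
    {
      "allowed_hosts": [ "hostnqn" ],
      "attr": { "allow_any_host": "0" },
      "namespaces": [
        { "device": { "path": "/dev/nvme0n1" }, "enable": 1, "nsid": 1 }
      ],
      "nqn": "testnqn"
    }
  ]
}
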
· Loading an NVMe Target configuration:

./nvmetcli restore test.json

· Clearing the current NVMe Target configuration:

./nvmetcli clear

ADDITIONAL INFORMATION
nvmetcli can start and stop the NVMe Target configuration on boot and
shutdown through systemd via a .service file. The nvmetcli package
comes with nvmet.service which, when installed (for example in
/lib/systemd/system), automatically restores the saved default NVMe
Target configuration from /etc/nvmet/config.json.

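The exact unit file shipped with the package or your distribution may
differ; a minimal sketch of such a service (paths are illustrative)
looks like:

[Unit]
Description=Restore NVMe target configuration
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/nvmetcli restore /etc/nvmet/config.json
ExecStop=/usr/sbin/nvmetcli clear

[Install]
WantedBy=multi-user.target
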
To explicitly enable the service, type:

systemctl enable nvmet

To explicitly disable the service, type:

systemctl disable nvmet

See also systemctl(1).

AUTHORS
This man page was written by Jay Freyensee[1]. nvmetcli was originally
written by Christoph Hellwig[2].

REPORTING BUGS
Please send patches and bug reports to
linux-nvme@lists.infradead.org[3] for review and acceptance.

LICENSE
nvmetcli is licensed under the Apache License, Version 2.0. Software
distributed under this license is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied.

NOTES
1. Jay Freyensee
   mailto:james.p.freyensee@intel.com

2. Christoph Hellwig
   mailto:hch@infradead.org

3. linux-nvme@lists.infradead.org
   mailto:linux-nvme@lists.infradead.org



                                  07/25/2019                       NVMETCLI(8)