NVME-CONNECT(1)                  NVMe Manual                 NVME-CONNECT(1)

NAME
       nvme-connect - Connect to a Fabrics controller.

SYNOPSIS
       nvme connect
               [--transport=<trtype>     | -t <trtype>]
               [--nqn=<subnqn>           | -n <subnqn>]
               [--traddr=<traddr>        | -a <traddr>]
               [--trsvcid=<trsvcid>      | -s <trsvcid>]
               [--host-traddr=<traddr>   | -w <traddr>]
               [--hostnqn=<hostnqn>      | -q <hostnqn>]
               [--nr-io-queues=<#>       | -i <#>]
               [--queue-size=<#>         | -Q <#>]
               [--keep-alive-tmo=<#>     | -k <#>]
               [--reconnect-delay=<#>    | -c <#>]
               [--ctrl-loss-tmo=<#>      | -l <#>]

DESCRIPTION
       Create a transport connection to a remote system (specified by
       --traddr and --trsvcid) and create an NVMe over Fabrics controller
       for the NVMe subsystem specified by the --nqn option.

OPTIONS
       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ RDMA network (RoCE, iWARP, │
           │      │ InfiniBand, basic RDMA,    │
           │      │ etc.)                      │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host.                │
           └──────┴────────────────────────────┘

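As a quick illustration of the loop transport, a target configured on the local host can be connected to without any network addressing. This is a sketch: it assumes a target subsystem named "testnqn" (a placeholder) has already been set up through the kernel nvmet configfs interface and that the nvme-loop module is loaded.

```shell
# Sketch: connect to a local loopback target. "testnqn" is a
# placeholder for a subsystem name configured on this host via nvmet;
# no --traddr or --trsvcid is needed for the loop transport.
nvme connect --transport=loop --nqn=testnqn
```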
       -n <subnqn>, --nqn=<subnqn>
           This field specifies the name of the NVMe subsystem to connect
           to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Controller. For
           transports using IP addressing (e.g., rdma) this should be an
           IP-based address (e.g., IPv4).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports
           using IP addressing (e.g., rdma) this field is the port number.
           By default, the IP port number for the RDMA transport is 4420.

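For instance, when a target listens on a non-default port, the service id must be passed explicitly. A sketch, reusing the address and subsystem NQN from the EXAMPLES section below (the port 4421 is an illustrative value):

```shell
# Sketch: connect to an RDMA target listening on port 4421 rather than
# the default 4420. Address and NQN follow the example in this page.
nvme connect --transport=rdma --traddr=192.168.1.3 --trsvcid=4421 \
    --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
```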
       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller.

       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default Host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the
           autogenerated NQN value from the NVMe Host kernel module is used
           next. The Host NQN uniquely identifies the NVMe Host.

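A sketch of overriding the Host NQN for a single connection, leaving /etc/nvme/hostnqn untouched (the host NQN value below is a placeholder):

```shell
# Sketch: identify this host with an explicit NQN for one connection
# only. "nqn.2014-08.org.example:host1" is a placeholder value.
nvme connect --transport=rdma --traddr=192.168.1.3 \
    --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 \
    --hostnqn=nqn.2014-08.org.example:host1
```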
       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the
           driver.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues
           created by the driver.

       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep-alive timeout (in seconds).

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before a reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in
           seconds).

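The queue and timeout options above can be combined on one command line. A sketch with illustrative values (the address and NQN reuse the example from this page; the numeric values are assumptions, not recommended defaults):

```shell
# Sketch: 8 I/O queues of depth 64, a 5-second keep-alive, reconnect
# attempts every 10 seconds, and removal of the controller after 600
# seconds without a connection. All numbers are illustrative.
nvme connect --transport=rdma --traddr=192.168.1.3 \
    --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 \
    --nr-io-queues=8 --queue-size=64 --keep-alive-tmo=5 \
    --reconnect-delay=10 --ctrl-loss-tmo=600
```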
EXAMPLES
       ·   Connect to a subsystem named
           nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 on the IPv4
           address 192.168.1.3. Port 4420 is used by default:

           # nvme connect --transport=rdma --traddr=192.168.1.3 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
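Once connected, the new controller and its namespaces can be checked, and the connection torn down, with companion nvme-cli commands (a sketch; device names vary by system):

```shell
# Sketch: confirm the new controller's namespaces appeared, then
# disconnect by subsystem NQN when finished.
nvme list
nvme disconnect --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
```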

SEE ALSO
       nvme-discover(1), nvme-connect-all(1)

AUTHORS
       This was co-written by Jay Freyensee[1] and Christoph Hellwig[2] for
       Keith Busch[3].

NVME
       Part of the nvme-user suite

NOTES
        1. Jay Freyensee
           mailto:james.p.freyensee@intel.com

        2. Christoph Hellwig
           mailto:hch@lst.de

        3. Keith Busch
           mailto:keith.busch@intel.com


NVMe                             06/05/2018                  NVME-CONNECT(1)