NVME-WDC-SMART-AD(1)             NVMe Manual            NVME-WDC-SMART-AD(1)



NAME
       nvme-wdc-smart-add-log - Send NVMe WDC smart-add-log Vendor Unique
       Command, return result

SYNOPSIS
       nvme wdc smart-add-log <device> [--interval=<NUM>, -i <NUM>]
                       [--output-format=<normal|json> -o <normal|json>]

DESCRIPTION
       For the NVMe device given, send a Vendor Unique WDC smart-add-log
       command and provide the additional smart log. The --interval option
       will return performance statistics from the specified reporting
       interval.

       The <device> parameter is mandatory and must be the NVMe character
       device (ex: /dev/nvme0).

       This will only work on WDC devices supporting this feature. Results
       for any other device are undefined.

       On success it returns 0, error code otherwise.

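       For example, to retrieve the statistics accumulated over the previous
       hour (interval 13, as listed in the table below), an invocation such
       as the following may be used:

           # nvme wdc smart-add-log /dev/nvme0 --interval=13
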
OPTIONS
       -i <NUM>, --interval=<NUM>
           Return the statistics from the specified interval; defaults to
           14.

       -o <format>, --output-format=<format>
           Set the reporting format to normal or json. Only one output
           format can be used at a time. Default is normal.

       Valid interval values and their descriptions:

       ┌──────┬────────────────────────────┐
       │Value │ Description                │
       ├──────┼────────────────────────────┤
       │1     │ Most recent five (5)       │
       │      │ minute accumulated set.    │
       ├──────┼────────────────────────────┤
       │2-12  │ Previous five (5) minute   │
       │      │ accumulated sets.          │
       ├──────┼────────────────────────────┤
       │13    │ The accumulated total of   │
       │      │ sets 1 through 12 that     │
       │      │ contain the previous hour  │
       │      │ of accumulated statistics. │
       ├──────┼────────────────────────────┤
       │14    │ The statistical set        │
       │      │ accumulated since          │
       │      │ power-up.                  │
       ├──────┼────────────────────────────┤
       │15    │ The statistical set        │
       │      │ accumulated during the     │
       │      │ entire lifetime of the     │
       │      │ device.                    │
       └──────┴────────────────────────────┘

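       For example, to display the lifetime statistical set (interval 15) in
       JSON format, one might run:

           # nvme wdc smart-add-log /dev/nvme0 -i 15 -o json
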
CA Log Page Data Output Explanation
       ┌───────────────────────────┬────────────────────────────┐
       │Field                      │ Description                │
       ├───────────────────────────┼────────────────────────────┤
       │Physical NAND bytes        │ The number of bytes        │
       │written.                   │ written to NAND. 16 bytes  │
       │                           │ - hi/lo                    │
       ├───────────────────────────┼────────────────────────────┤
       │Physical NAND bytes read   │ The number of bytes read   │
       │                           │ from NAND. 16 bytes -      │
       │                           │ hi/lo                      │
       ├───────────────────────────┼────────────────────────────┤
       │Bad NAND Block Count       │ Raw and normalized count   │
       │                           │ of the number of NAND      │
       │                           │ blocks that have been      │
       │                           │ retired after the drive's  │
       │                           │ manufacturing tests (i.e.  │
       │                           │ grown bad blocks). 2       │
       │                           │ bytes normalized, 6 bytes  │
       │                           │ raw count                  │
       ├───────────────────────────┼────────────────────────────┤
       │Uncorrectable Read Error   │ Total count of NAND reads  │
       │Count                      │ that were not correctable  │
       │                           │ by read retries, all       │
       │                           │ levels of ECC, or XOR (as  │
       │                           │ applicable). 8 bytes       │
       ├───────────────────────────┼────────────────────────────┤
       │Soft ECC Error Count       │ Total count of NAND reads  │
       │                           │ that were not correctable  │
       │                           │ by read retries, or        │
       │                           │ first-level ECC. 8 bytes   │
       ├───────────────────────────┼────────────────────────────┤
       │SSD End to End Detection   │ A count of the detected    │
       │Count                      │ errors by the SSD end to   │
       │                           │ end error correction which │
       │                           │ includes DRAM, SRAM, or    │
       │                           │ other storage element      │
       │                           │ ECC/CRC protection         │
       │                           │ mechanism (not NAND ECC).  │
       │                           │ 4 bytes                    │
       ├───────────────────────────┼────────────────────────────┤
       │SSD End to End Correction  │ A count of the corrected   │
       │Count                      │ errors by the SSD end to   │
       │                           │ end error correction which │
       │                           │ includes DRAM, SRAM, or    │
       │                           │ other storage element      │
       │                           │ ECC/CRC protection         │
       │                           │ mechanism (not NAND ECC).  │
       │                           │ 4 bytes                    │
       ├───────────────────────────┼────────────────────────────┤
       │System Data % Used         │ A normalized cumulative    │
       │                           │ count of the number of     │
       │                           │ erase cycles per block     │
       │                           │ since leaving the factory  │
       │                           │ for the system (FW and     │
       │                           │ metadata) area. Starts at  │
       │                           │ 0 and increments. 100      │
       │                           │ indicates that the         │
       │                           │ estimated endurance has    │
       │                           │ been consumed.             │
       ├───────────────────────────┼────────────────────────────┤
       │User Data Max Erase Count  │ The maximum erase count    │
       │                           │ across all NAND blocks in  │
       │                           │ the drive. 4 bytes         │
       ├───────────────────────────┼────────────────────────────┤
       │User Data Min Erase Count  │ The minimum erase count    │
       │                           │ across all NAND blocks in  │
       │                           │ the drive. 4 bytes         │
       ├───────────────────────────┼────────────────────────────┤
       │Refresh Count              │ A count of the number of   │
       │                           │ blocks that have been      │
       │                           │ re-allocated due to        │
       │                           │ background operations      │
       │                           │ only. 8 bytes              │
       ├───────────────────────────┼────────────────────────────┤
       │Program Fail Count         │ Raw and normalized count   │
       │                           │ of total program failures. │
       │                           │ Normalized count starts at │
       │                           │ 100 and shows the percent  │
       │                           │ of remaining allowable     │
       │                           │ failures. 2 bytes          │
       │                           │ normalized, 6 bytes raw    │
       │                           │ count                      │
       ├───────────────────────────┼────────────────────────────┤
       │User Data Erase Fail Count │ Raw and normalized count   │
       │                           │ of total erase failures in │
       │                           │ the user area. Normalized  │
       │                           │ count starts at 100 and    │
       │                           │ shows the percent of       │
       │                           │ remaining allowable        │
       │                           │ failures. 2 bytes          │
       │                           │ normalized, 6 bytes raw    │
       │                           │ count                      │
       ├───────────────────────────┼────────────────────────────┤
       │System Area Erase Fail     │ Raw and normalized count   │
       │Count                      │ of total erase failures in │
       │                           │ the system area.           │
       │                           │ Normalized count starts at │
       │                           │ 100 and shows the percent  │
       │                           │ of remaining allowable     │
       │                           │ failures. 2 bytes          │
       │                           │ normalized, 6 bytes raw    │
       │                           │ count                      │
       ├───────────────────────────┼────────────────────────────┤
       │Thermal Throttling Status  │ The current status of      │
       │                           │ thermal throttling         │
       │                           │ (enabled or disabled). 2   │
       │                           │ bytes                      │
       ├───────────────────────────┼────────────────────────────┤
       │Thermal Throttling Count   │ A count of the number of   │
       │                           │ thermal throttling events. │
       │                           │ 2 bytes                    │
       ├───────────────────────────┼────────────────────────────┤
       │PCIe Correctable Error     │ Summation counter of all   │
       │Count                      │ PCIe correctable errors    │
       │                           │ (Bad TLP, Bad DLLP,        │
       │                           │ Receiver error, Replay     │
       │                           │ timeouts, Replay           │
       │                           │ rollovers). 8 bytes        │
       └───────────────────────────┴────────────────────────────┘

C1 Log Page Data Output Explanation
       ┌───────────────────────────┬────────────────────────────┐
       │Field                      │ Description                │
       ├───────────────────────────┼────────────────────────────┤
       │Host Read Commands         │ Number of host read        │
       │                           │ commands received during   │
       │                           │ the reporting period.      │
       ├───────────────────────────┼────────────────────────────┤
       │Host Read Blocks           │ Number of 512-byte blocks  │
       │                           │ requested during the       │
       │                           │ reporting period.          │
       ├───────────────────────────┼────────────────────────────┤
       │Average Read Size          │ Average Read size is       │
       │                           │ calculated using (Host     │
       │                           │ Read Blocks/Host Read      │
       │                           │ Commands).                 │
       ├───────────────────────────┼────────────────────────────┤
       │Host Read Cache Hit        │ Number of host read        │
       │Commands                   │ commands that were         │
       │                           │ serviced entirely from the │
       │                           │ on-board read cache during │
       │                           │ the reporting period. No   │
       │                           │ access to the NAND flash   │
       │                           │ memory was required. This  │
       │                           │ count is only updated if   │
       │                           │ the entire command was     │
       │                           │ serviced from the cache    │
       │                           │ memory.                    │
       ├───────────────────────────┼────────────────────────────┤
       │Host Read Cache Hit        │ Percentage of host read    │
       │Percentage                 │ commands satisfied from    │
       │                           │ the cache.                 │
       ├───────────────────────────┼────────────────────────────┤
       │Host Read Cache Hit Blocks │ Number of 512-byte blocks  │
       │                           │ of data that have been     │
       │                           │ returned for Host Read     │
       │                           │ Cache Hit Commands during  │
       │                           │ the reporting period. This │
       │                           │ count is only updated with │
       │                           │ the blocks returned for    │
       │                           │ host read commands that    │
       │                           │ were serviced entirely     │
       │                           │ from cache memory.         │
       ├───────────────────────────┼────────────────────────────┤
       │Average Read Cache Hit     │ Average size of read       │
       │Size                       │ commands satisfied from    │
       │                           │ the cache.                 │
       ├───────────────────────────┼────────────────────────────┤
       │Host Read Commands Stalled │ Number of host read        │
       │                           │ commands that were stalled │
       │                           │ due to a lack of resources │
       │                           │ within the SSD during the  │
       │                           │ reporting period (NAND     │
       │                           │ flash command queue full,  │
       │                           │ low cache page count,      │
       │                           │ cache page contention,     │
       │                           │ etc.). Commands are not    │
       │                           │ considered stalled if the  │
       │                           │ only reason for the delay  │
       │                           │ was waiting for the data   │
       │                           │ to be physically read from │
       │                           │ the NAND flash. It is      │
       │                           │ normal to expect this      │
       │                           │ count to equal zero on     │
       │                           │ heavily utilized systems.  │
       ├───────────────────────────┼────────────────────────────┤
       │Host Read Commands Stalled │ Percentage of read         │
       │Percentage                 │ commands that were         │
       │                           │ stalled. If the figure is  │
       │                           │ consistently high, then    │
       │                           │ consideration should be    │
       │                           │ given to spreading the     │
       │                           │ data across multiple SSDs. │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Commands        │ Number of host write       │
       │                           │ commands received during   │
       │                           │ the reporting period.      │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Blocks          │ Number of 512-byte blocks  │
       │                           │ written during the         │
       │                           │ reporting period.          │
       ├───────────────────────────┼────────────────────────────┤
       │Average Write Size         │ Average Write size is      │
       │                           │ calculated using (Host     │
       │                           │ Write Blocks/Host Write    │
       │                           │ Commands).                 │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Odd Start       │ Number of host write       │
       │Commands                   │ commands that started on a │
       │                           │ non-aligned boundary       │
       │                           │ during the reporting       │
       │                           │ period. The size of the    │
       │                           │ boundary alignment is      │
       │                           │ normally 4K; therefore     │
       │                           │ this returns the number of │
       │                           │ commands that started on a │
       │                           │ non-4K aligned boundary.   │
       │                           │ The SSD requires slightly  │
       │                           │ more time to process       │
       │                           │ non-aligned write commands │
       │                           │ than it does to process    │
       │                           │ aligned write commands.    │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Odd Start       │ Percentage of host write   │
       │Commands Percentage        │ commands that started on a │
       │                           │ non-aligned boundary. If   │
       │                           │ this figure is equal to or │
       │                           │ near 100%, and the NAND    │
       │                           │ Read Before Write value is │
       │                           │ also high, then the user   │
       │                           │ should investigate the     │
       │                           │ possibility of offsetting  │
       │                           │ the file system. For       │
       │                           │ Microsoft Windows systems, │
       │                           │ the user can use Diskpart. │
       │                           │ For Unix-based operating   │
       │                           │ systems, there is normally │
       │                           │ a method whereby file      │
       │                           │ system partitions can be   │
       │                           │ placed where required.     │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Odd End         │ Number of host write       │
       │Commands                   │ commands that ended on a   │
       │                           │ non-aligned boundary       │
       │                           │ during the reporting       │
       │                           │ period. The size of the    │
       │                           │ boundary alignment is      │
       │                           │ normally 4K; therefore     │
       │                           │ this returns the number of │
       │                           │ commands that ended on a   │
       │                           │ non-4K aligned boundary.   │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Odd End         │ Percentage of host write   │
       │Commands Percentage        │ commands that ended on a   │
       │                           │ non-aligned boundary.      │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Commands        │ Number of host write       │
       │Stalled                    │ commands that were stalled │
       │                           │ due to a lack of resources │
       │                           │ within the SSD during the  │
       │                           │ reporting period. The most │
       │                           │ likely cause is that the   │
       │                           │ write data was being       │
       │                           │ received faster than it    │
       │                           │ could be saved to the NAND │
       │                           │ flash memory. If there was │
       │                           │ a large volume of read     │
       │                           │ commands being processed   │
       │                           │ simultaneously, then other │
       │                           │ causes might include the   │
       │                           │ NAND flash command queue   │
       │                           │ being full, low cache page │
       │                           │ count, or cache page       │
       │                           │ contention, etc. It is     │
       │                           │ normal to expect this      │
       │                           │ count to be non-zero on    │
       │                           │ heavily utilized systems.  │
       ├───────────────────────────┼────────────────────────────┤
       │Host Write Commands        │ Percentage of write        │
       │Stalled Percentage         │ commands that were         │
       │                           │ stalled. If the figure is  │
       │                           │ consistently high, then    │
       │                           │ consideration should be    │
       │                           │ given to spreading the     │
       │                           │ data across multiple SSDs. │
       ├───────────────────────────┼────────────────────────────┤
       │NAND Read Commands         │ Number of read commands    │
       │                           │ issued to the NAND devices │
       │                           │ during the reporting       │
       │                           │ period. This figure will   │
       │                           │ normally be much higher    │
       │                           │ than the host read         │
       │                           │ commands figure, as the    │
       │                           │ data needed to satisfy a   │
       │                           │ single host read command   │
       │                           │ may be spread across       │
       │                           │ several NAND flash         │
       │                           │ devices.                   │
       ├───────────────────────────┼────────────────────────────┤
       │NAND Read Blocks           │ Number of 512-byte blocks  │
       │                           │ requested from NAND flash  │
       │                           │ devices during the         │
       │                           │ reporting period. This     │
       │                           │ figure would normally be   │
       │                           │ about the same as the host │
       │                           │ read blocks figure.        │
       ├───────────────────────────┼────────────────────────────┤
       │Average NAND Read Size     │ Average size of NAND read  │
       │                           │ commands.                  │
       ├───────────────────────────┼────────────────────────────┤
       │NAND Write Commands        │ Number of write commands   │
       │                           │ issued to the NAND devices │
       │                           │ during the reporting       │
       │                           │ period. There is no real   │
       │                           │ correlation between the    │
       │                           │ number of host write       │
       │                           │ commands issued and the    │
       │                           │ number of NAND Write       │
       │                           │ Commands.                  │
       ├───────────────────────────┼────────────────────────────┤
       │NAND Write Blocks          │ Number of 512-byte blocks  │
       │                           │ written to the NAND flash  │
       │                           │ devices during the         │
       │                           │ reporting period. This     │
       │                           │ figure would normally be   │
       │                           │ about the same as the host │
       │                           │ write blocks figure.       │
       ├───────────────────────────┼────────────────────────────┤
       │Average NAND Write Size    │ Average size of NAND write │
       │                           │ commands. This figure      │
       │                           │ should never be greater    │
       │                           │ than 128K, as this is the  │
       │                           │ maximum size write that is │
       │                           │ ever issued to a NAND      │
       │                           │ device.                    │
       ├───────────────────────────┼────────────────────────────┤
       │NAND Read Before Write     │ This is the number of read │
       │                           │ before write operations    │
       │                           │ that were required to      │
       │                           │ process non-aligned host   │
       │                           │ write commands during the  │
       │                           │ reporting period. See Host │
       │                           │ Write Odd Start Commands   │
       │                           │ and Host Write Odd End     │
       │                           │ Commands. NAND Read Before │
       │                           │ Write operations have a    │
       │                           │ detrimental effect on the  │
       │                           │ overall performance of the │
       │                           │ device.                    │
       └───────────────────────────┴────────────────────────────┘

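       As a hypothetical worked example of the Average Read Size calculation
       above: if an interval reported 1,000,000 Host Read Blocks and 125,000
       Host Read Commands, the Average Read Size would be 1,000,000 /
       125,000 = 8 blocks, i.e. 8 x 512 bytes = 4 KiB per command. Average
       Write Size is derived the same way from the write counters.
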
EXAMPLES
       ·   Has the program issue WDC smart-add-log Vendor Unique Command
           with the default interval (14) :

               # nvme wdc smart-add-log /dev/nvme0

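       ·   Has the program issue WDC smart-add-log Vendor Unique Command
           with interval 1 and JSON output, using the options documented
           above :

               # nvme wdc smart-add-log /dev/nvme0 -i 1 -o json
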
NVME
       Part of the nvme-user suite.



NVMe                             01/08/2019             NVME-WDC-SMART-AD(1)