USB_SUBMIT_URB(9) USB Core APIs USB_SUBMIT_URB(9)


NAME
usb_submit_urb - issue an asynchronous transfer request for an endpoint

SYNOPSIS
int usb_submit_urb(struct urb * urb, gfp_t mem_flags);

ARGUMENTS
urb
    pointer to the urb describing the request

mem_flags
    the type of memory to allocate, see kmalloc for a list of valid
    options for this.
DESCRIPTION
This submits a transfer request, and transfers control of the URB
describing that request to the USB subsystem. Request completion will
be indicated later, asynchronously, by calling the completion handler.
The three types of completion are success, error, and unlink (a
software-induced fault, also called “request cancellation”).

URBs may be submitted in interrupt context.

The caller must have correctly initialized the URB before submitting
it. Functions such as usb_fill_bulk_urb and usb_fill_control_urb are
available to ensure that most fields are correctly initialized, for
the particular kind of transfer, although they will not initialize any
transfer flags.

If the submission is successful, the complete callback from the URB
will be called exactly once, when the USB core and Host Controller
Driver (HCD) are finished with the URB. When the completion function
is called, control of the URB is returned to the device driver which
issued the request. The completion handler may then immediately free
or reuse that URB.
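
As an illustration only (the driver-specific names my_complete, my_send,
udev and ep are hypothetical, not part of this interface), a bulk OUT
request might be filled and submitted roughly as follows:

    #include <linux/usb.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    /* Completion handler: called exactly once by usbcore/the HCD when the
     * transfer finishes (success, error, or unlink). The URB belongs to
     * the driver again at this point, so it may be freed or reused. */
    static void my_complete(struct urb *urb)
    {
            if (urb->status)
                    dev_err(&urb->dev->dev, "bulk urb failed: %d\n",
                            urb->status);
            kfree(urb->transfer_buffer);
            usb_free_urb(urb);
    }

    static int my_send(struct usb_device *udev, unsigned int ep,
                       const void *data, size_t len)
    {
            struct urb *urb;
            void *buf;
            int retval;

            urb = usb_alloc_urb(0, GFP_KERNEL);
            buf = kmemdup(data, len, GFP_KERNEL);
            if (!urb || !buf) {
                    usb_free_urb(urb);
                    kfree(buf);
                    return -ENOMEM;
            }

            /* usb_fill_bulk_urb initializes most fields for a bulk
             * transfer; transfer_flags are left untouched, as noted above. */
            usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, ep),
                              buf, len, my_complete, NULL);

            retval = usb_submit_urb(urb, GFP_KERNEL);
            if (retval) {           /* on failure the URB is still ours */
                    usb_free_urb(urb);
                    kfree(buf);
            }
            return retval;
    }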

With few exceptions, USB device drivers should never access URB fields
provided by usbcore or the HCD until the URB's completion handler has
been called. The exceptions relate to periodic transfer scheduling.
For both interrupt and isochronous urbs, as part of successful URB
submission urb->interval is modified to reflect the actual transfer
period used (normally some power of two units). And for isochronous
urbs, urb->start_frame is modified to reflect when the URB's transfers
were scheduled to start.
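
For example (a sketch only; the URB is assumed to have been prepared
with usb_fill_int_urb by surrounding driver code), the polling interval
actually granted can be read back after submission:

    urb->interval = 10;     /* requested period, in (micro)frames */
    retval = usb_submit_urb(urb, GFP_KERNEL);
    if (retval == 0)
            /* urb->interval now reflects the period the host controller
             * will actually use, normally rounded to a power of two. */
            dev_dbg(&urb->dev->dev, "granted interval %d\n", urb->interval);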

Not all isochronous transfer scheduling policies will work, but most
host controller drivers should easily handle ISO queues going from now
until 10-200 msec into the future. Drivers should try to keep at least
one or two msec of data in the queue; many controllers require that new
transfers start at least 1 msec in the future when they are added. If
the driver is unable to keep up and the queue empties out, the behavior
for new submissions is governed by the URB_ISO_ASAP flag. If the flag
is set, or if the queue is idle, then the URB is always assigned to the
first available (and not yet expired) slot in the endpoint's schedule.
If the flag is not set and the queue is active, then the URB is always
assigned to the next slot in the schedule following the end of the
endpoint's previous URB, even if that slot is in the past. When a
packet is assigned in this way to a slot that has already expired, the
packet is not transmitted and the corresponding
usb_iso_packet_descriptor's status field will return -EXDEV. If this
would happen to all the packets in the URB, submission fails with a
-EXDEV error code.
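
For instance (a sketch with hypothetical names my_submit_iso and
my_iso_complete; iso_urb is assumed to be an already filled isochronous
URB), a driver that cannot guarantee its queue never empties might set
URB_ISO_ASAP and check per-packet status in its completion handler:

    #include <linux/usb.h>
    #include <linux/errno.h>

    static int my_submit_iso(struct urb *iso_urb)
    {
            /* If the queue has gone idle, assign this URB to the first
             * available (not yet expired) slot rather than continuing
             * right after the endpoint's previous URB. */
            iso_urb->transfer_flags |= URB_ISO_ASAP;
            return usb_submit_urb(iso_urb, GFP_ATOMIC);
    }

    static void my_iso_complete(struct urb *urb)
    {
            int i, expired = 0;

            for (i = 0; i < urb->number_of_packets; i++)
                    if (urb->iso_frame_desc[i].status == -EXDEV)
                            expired++;  /* slot had expired: not transmitted */

            if (expired)
                    dev_dbg(&urb->dev->dev, "%d packets missed their slots\n",
                            expired);
    }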

For control endpoints, the synchronous usb_control_msg call is often
used (in non-interrupt context) instead of this call. That is often
used through convenience wrappers, for the requests that are
standardized in the USB 2.0 specification. For bulk endpoints, a
synchronous usb_bulk_msg call is available.
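
For comparison (a sketch; my_sync_read is a hypothetical helper, and
udev, ep, buf and len are assumed to come from the caller), the
synchronous bulk interface blocks until the transfer finishes or times
out, so it must not be used in interrupt context:

    #include <linux/usb.h>

    static int my_sync_read(struct usb_device *udev, unsigned int ep,
                            void *buf, int len)
    {
            int actual = 0;
            int retval;

            /* Blocks for up to 5 seconds; on success "actual" holds the
             * number of bytes actually transferred. */
            retval = usb_bulk_msg(udev, usb_rcvbulkpipe(udev, ep),
                                  buf, len, &actual, 5000);
            return retval ? retval : actual;
    }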

RETURN
0 on successful submissions. A negative error number otherwise.

Request Queuing
URBs may be submitted to endpoints before previous ones complete, to
minimize the impact of interrupt latencies and system overhead on data
throughput. With that queuing policy, an endpoint's queue would never
be empty. This is required for continuous isochronous data streams, and
may also be required for some kinds of interrupt transfers. Such
queuing also maximizes bandwidth utilization by letting USB controllers
start work on later requests before driver software has finished the
completion processing for earlier (successful) requests.
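
A minimal sketch of that policy (NUM_URBS, my_urbs and my_start_streaming
are hypothetical; each URB is assumed to be filled for the same endpoint):
submitting several URBs up front keeps the endpoint's queue from draining
while earlier completions are still being processed.

    #include <linux/usb.h>

    #define NUM_URBS 4

    static int my_start_streaming(struct urb *my_urbs[NUM_URBS])
    {
            int i, retval;

            /* Queue all URBs at once; the HCD works through them in order
             * while completions for earlier ones are handled. */
            for (i = 0; i < NUM_URBS; i++) {
                    retval = usb_submit_urb(my_urbs[i], GFP_KERNEL);
                    if (retval)
                            return retval;
            }
            return 0;
    }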

As of Linux 2.6, all USB endpoint transfer queues support depths
greater than one. This was previously an HCD-specific behavior, except
for ISO transfers. Non-isochronous endpoint queues are inactive during
cleanup after faults (transfer errors or cancellation).

Reserved Bandwidth Transfers
Periodic transfers (interrupt or isochronous) are performed repeatedly,
using the interval specified in the urb. Submitting the first urb to
the endpoint reserves the bandwidth necessary to make those transfers.
If the USB subsystem can't allocate sufficient bandwidth to perform the
periodic request, submitting such a periodic request should fail.

For devices under xHCI, the bandwidth is reserved at configuration
time, or when the alt setting is selected. If there is not enough bus
bandwidth, the configuration/alt setting request will fail. Therefore,
submissions to periodic endpoints on devices under xHCI should never
fail due to bandwidth constraints.

Device drivers must explicitly request that repetition, by ensuring
that some URB is always on the endpoint's queue (except possibly for
short periods during completion callbacks). When there is no longer an
urb queued, the endpoint's bandwidth reservation is canceled. This
means drivers can use their completion handlers to ensure they keep
the bandwidth they need, by reinitializing and resubmitting the
just-completed urb until the driver no longer needs that periodic
bandwidth.
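
A common pattern for keeping the reservation (sketched here with the
hypothetical handler name my_int_complete; error handling is abbreviated)
is to resubmit the just-completed URB from its own completion handler:

    #include <linux/usb.h>

    static void my_int_complete(struct urb *urb)
    {
            int retval;

            if (urb->status != 0)
                    /* e.g. unlink or device gone; stop resubmitting, which
                     * eventually drops the bandwidth reservation. */
                    return;

            /* ... process urb->transfer_buffer here ... */

            /* Resubmitting immediately keeps a URB on the endpoint's queue,
             * so the periodic bandwidth reservation is not canceled.
             * GFP_ATOMIC, because completion handlers run in atomic context. */
            retval = usb_submit_urb(urb, GFP_ATOMIC);
            if (retval)
                    dev_err(&urb->dev->dev, "resubmit failed: %d\n", retval);
    }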

Memory Flags
The general rules for how to decide which mem_flags to use are the same
as for kmalloc. There are four different possible values: GFP_KERNEL,
GFP_NOFS, GFP_NOIO and GFP_ATOMIC.

GFP_NOFS is not ever used, as it has not been implemented yet.

GFP_ATOMIC is used when (a) you are inside a completion handler, an
interrupt, bottom half, tasklet or timer, or (b) you are holding a
spinlock or rwlock (does not apply to semaphores), or (c)
current->state != TASK_RUNNING, which is the case only after you have
changed it yourself.

GFP_NOIO is used in the block io path and error handling of storage
devices.

All other situations use GFP_KERNEL.

Some more specific rules for mem_flags can be inferred, such as (1)
start_xmit, timeout, and receive methods of network drivers must use
GFP_ATOMIC (they are called with a spinlock held); (2) queuecommand
methods of scsi drivers must use GFP_ATOMIC (also called with a
spinlock held); (3) if you use a kernel thread with a network driver
you must use GFP_NOIO, unless (b) or (c) apply; (4) after you have done
a down you can use GFP_KERNEL, unless (b) or (c) apply or you are in a
storage driver's block io path; (5) USB probe and disconnect can use
GFP_KERNEL unless (b) or (c) apply; and (6) changing firmware on a
running storage or net device uses GFP_NOIO, unless (b) or (c) apply.
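
Putting the common cases together (a sketch only; the surrounding driver
code and the urb/retval variables are hypothetical): probe and disconnect
run in process context with no spinlocks held, so GFP_KERNEL is normally
appropriate there, while completion handlers run in atomic context and
must use GFP_ATOMIC.

    /* In probe/disconnect (process context, no spinlocks held): */
    retval = usb_submit_urb(urb, GFP_KERNEL);

    /* In a completion handler, interrupt, tasklet, timer, or while a
     * spinlock is held: */
    retval = usb_submit_urb(urb, GFP_ATOMIC);

    /* In a storage driver's block io path or error handling: */
    retval = usb_submit_urb(urb, GFP_NOIO);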

Kernel Hackers Manual 3.10 June 2019 USB_SUBMIT_URB(9)