EPOLL(7)                   Linux Programmer's Manual                  EPOLL(7)

NAME
       epoll - I/O event notification facility

SYNOPSIS
       #include <sys/epoll.h>
DESCRIPTION
       The epoll API performs a similar task to poll(2): monitoring multiple
       file descriptors to see if I/O is possible on any of them.  The epoll
       API can be used either as an edge-triggered or a level-triggered
       interface and scales well to large numbers of watched file
       descriptors.

       The central concept of the epoll API is the epoll instance, an
       in-kernel data structure which, from a user-space perspective, can be
       considered as a container for two lists:

       * The interest list (sometimes also called the epoll set): the set of
         file descriptors that the process has registered an interest in
         monitoring.

       * The ready list: the set of file descriptors that are "ready" for
         I/O.  The ready list is a subset of (or, more precisely, a set of
         references to) the file descriptors in the interest list.  It is
         dynamically populated by the kernel as a result of I/O activity on
         those file descriptors.
       The following system calls are provided to create and manage an epoll
       instance:

       * epoll_create(2) creates a new epoll instance and returns a file
         descriptor referring to that instance.  (The more recent
         epoll_create1(2) extends the functionality of epoll_create(2).)

       * Interest in particular file descriptors is then registered via
         epoll_ctl(2), which adds items to the interest list of the epoll
         instance.

       * epoll_wait(2) waits for I/O events, blocking the calling thread if
         no events are currently available.  (This system call can be
         thought of as fetching items from the ready list of the epoll
         instance.)
   Level-triggered and edge-triggered
       The epoll event distribution interface is able to behave both as
       edge-triggered (ET) and as level-triggered (LT).  The difference
       between the two mechanisms can be described as follows.  Suppose that
       this scenario happens:
       1. The file descriptor that represents the read side of a pipe (rfd)
          is registered on the epoll instance.

       2. A pipe writer writes 2 kB of data on the write side of the pipe.

       3. A call to epoll_wait(2) is done that will return rfd as a ready
          file descriptor.

       4. The pipe reader reads 1 kB of data from rfd.

       5. A call to epoll_wait(2) is done.
       If the rfd file descriptor has been added to the epoll interface
       using the EPOLLET (edge-triggered) flag, the call to epoll_wait(2)
       done in step 5 will probably hang despite the available data still
       being present in the file input buffer; meanwhile the remote peer
       might be expecting a response based on the data it already sent.  The
       reason for this is that edge-triggered mode delivers events only when
       changes occur on the monitored file descriptor.  So, in step 5 the
       caller might end up waiting for some data that is already present
       inside the input buffer.  In the above example, an event on rfd will
       be generated because of the write done in step 2 and the event is
       consumed in step 3.  Since the read operation done in step 4 does not
       consume the whole buffer data, the call to epoll_wait(2) done in
       step 5 might block indefinitely.
       An application that employs the EPOLLET flag should use nonblocking
       file descriptors to avoid having a blocking read or write starve a
       task that is handling multiple file descriptors.  The suggested way
       to use epoll as an edge-triggered (EPOLLET) interface is as follows:

       a) with nonblocking file descriptors; and

       b) by waiting for an event only after read(2) or write(2) return
          EAGAIN.
       By contrast, when used as a level-triggered interface (the default,
       when EPOLLET is not specified), epoll is simply a faster poll(2), and
       can be used wherever the latter is used since it shares the same
       semantics.
       Since even with edge-triggered epoll, multiple events can be
       generated upon receipt of multiple chunks of data, the caller has the
       option to specify the EPOLLONESHOT flag, to tell epoll to disable the
       associated file descriptor after the receipt of an event with
       epoll_wait(2).  When the EPOLLONESHOT flag is specified, it is the
       caller's responsibility to rearm the file descriptor using
       epoll_ctl(2) with EPOLL_CTL_MOD.
       If multiple threads (or processes, if child processes have inherited
       the epoll file descriptor across fork(2)) are blocked in
       epoll_wait(2) waiting on the same epoll file descriptor and a file
       descriptor in the interest list that is marked for edge-triggered
       (EPOLLET) notification becomes ready, just one of the threads (or
       processes) is awoken from epoll_wait(2).  This provides a useful
       optimization for avoiding "thundering herd" wake-ups in some
       scenarios.
   Interaction with autosleep
       If the system is in autosleep mode via /sys/power/autosleep and an
       event happens which wakes the device from sleep, the device driver
       will keep the device awake only until that event is queued.  To keep
       the device awake until the event has been processed, it is necessary
       to use the epoll_ctl(2) EPOLLWAKEUP flag.

       When the EPOLLWAKEUP flag is set in the events field for a struct
       epoll_event, the system will be kept awake from the moment the event
       is queued, through the epoll_wait(2) call which returns the event,
       until the subsequent epoll_wait(2) call.  If the event should keep
       the system awake beyond that time, then a separate wake_lock should
       be taken before the second epoll_wait(2) call.
   /proc interfaces
       The following interfaces can be used to limit the amount of kernel
       memory consumed by epoll:

       /proc/sys/fs/epoll/max_user_watches (since Linux 2.6.28)
              This specifies a limit on the total number of file descriptors
              that a user can register across all epoll instances on the
              system.  The limit is per real user ID.  Each registered file
              descriptor costs roughly 90 bytes on a 32-bit kernel, and
              roughly 160 bytes on a 64-bit kernel.  Currently, the default
              value for max_user_watches is 1/25 (4%) of the available low
              memory, divided by the registration cost in bytes.
   Example for suggested usage
       While the usage of epoll when employed as a level-triggered interface
       does have the same semantics as poll(2), the edge-triggered usage
       requires more clarification to avoid stalls in the application event
       loop.  In this example, listener is a nonblocking socket on which
       listen(2) has been called.  The function do_use_fd() uses the new
       ready file descriptor until EAGAIN is returned by either read(2) or
       write(2).  An event-driven state machine application should, after
       having received EAGAIN, record its current state so that at the next
       call to do_use_fd() it will continue to read(2) or write(2) from
       where it stopped before.
           #define MAX_EVENTS 10
           struct epoll_event ev, events[MAX_EVENTS];
           int listen_sock, conn_sock, nfds, epollfd;
           struct sockaddr_storage addr;
           socklen_t addrlen;
           int n;

           /* Code to set up listening socket, 'listen_sock',
              (socket(), bind(), listen()) omitted */

           epollfd = epoll_create1(0);
           if (epollfd == -1) {
               perror("epoll_create1");
               exit(EXIT_FAILURE);
           }

           ev.events = EPOLLIN;
           ev.data.fd = listen_sock;
           if (epoll_ctl(epollfd, EPOLL_CTL_ADD, listen_sock, &ev) == -1) {
               perror("epoll_ctl: listen_sock");
               exit(EXIT_FAILURE);
           }

           for (;;) {
               nfds = epoll_wait(epollfd, events, MAX_EVENTS, -1);
               if (nfds == -1) {
                   perror("epoll_wait");
                   exit(EXIT_FAILURE);
               }

               for (n = 0; n < nfds; ++n) {
                   if (events[n].data.fd == listen_sock) {
                       addrlen = sizeof(addr);
                       conn_sock = accept(listen_sock,
                                          (struct sockaddr *) &addr, &addrlen);
                       if (conn_sock == -1) {
                           perror("accept");
                           exit(EXIT_FAILURE);
                       }
                       setnonblocking(conn_sock);
                       ev.events = EPOLLIN | EPOLLET;
                       ev.data.fd = conn_sock;
                       if (epoll_ctl(epollfd, EPOLL_CTL_ADD, conn_sock,
                                     &ev) == -1) {
                           perror("epoll_ctl: conn_sock");
                           exit(EXIT_FAILURE);
                       }
                   } else {
                       do_use_fd(events[n].data.fd);
                   }
               }
           }
       When used as an edge-triggered interface, for performance reasons, it
       is possible to add the file descriptor inside the epoll interface
       (EPOLL_CTL_ADD) once by specifying (EPOLLIN|EPOLLOUT).  This allows
       you to avoid continuously switching between EPOLLIN and EPOLLOUT by
       calling epoll_ctl(2) with EPOLL_CTL_MOD.
   Questions and answers
       0.  What is the key used to distinguish the file descriptors
           registered in an interest list?

           The key is the combination of the file descriptor number and the
           open file description (also known as an "open file handle", the
           kernel's internal representation of an open file).

       1.  What happens if you register the same file descriptor on an
           epoll instance twice?

           You will probably get EEXIST.  However, it is possible to add a
           duplicate (dup(2), dup2(2), fcntl(2) F_DUPFD) file descriptor to
           the same epoll instance.  This can be a useful technique for
           filtering events, if the duplicate file descriptors are
           registered with different event masks.
       2.  Can two epoll instances wait for the same file descriptor?  If
           so, are events reported to both epoll file descriptors?

           Yes, and events would be reported to both.  However, careful
           programming may be needed to do this correctly.

       3.  Is the epoll file descriptor itself poll/epoll/selectable?

           Yes.  If an epoll file descriptor has events waiting, then it
           will be reported as readable.

       4.  What happens if one attempts to put an epoll file descriptor
           into its own file descriptor set?

           The epoll_ctl(2) call fails (EINVAL).  However, you can add an
           epoll file descriptor inside another epoll file descriptor set.
       5.  Can I send an epoll file descriptor over a UNIX domain socket to
           another process?

           Yes, but it does not make sense to do this, since the receiving
           process would not have copies of the file descriptors in the
           interest list.

       6.  Will closing a file descriptor cause it to be removed from all
           epoll interest lists?

           Yes, but be aware of the following point.  A file descriptor is
           a reference to an open file description (see open(2)).  Whenever
           a file descriptor is duplicated via dup(2), dup2(2), fcntl(2)
           F_DUPFD, or fork(2), a new file descriptor referring to the same
           open file description is created.  An open file description
           continues to exist until all file descriptors referring to it
           have been closed.
           A file descriptor is removed from an interest list only after
           all the file descriptors referring to the underlying open file
           description have been closed.  This means that even after a file
           descriptor that is part of an interest list has been closed,
           events may be reported for that file descriptor if other file
           descriptors referring to the same underlying file description
           remain open.  To prevent this happening, the file descriptor
           must be explicitly removed from the interest list (using
           epoll_ctl(2) EPOLL_CTL_DEL) before it is duplicated.
           Alternatively, the application must ensure that all file
           descriptors are closed (which may be difficult if file
           descriptors were duplicated behind the scenes by library
           functions that used dup(2) or fork(2)).
       7.  If more than one event occurs between epoll_wait(2) calls, are
           they combined or reported separately?

           They will be combined.

       8.  Does an operation on a file descriptor affect the already
           collected but not yet reported events?

           You can do two operations on an existing file descriptor.
           Removal (EPOLL_CTL_DEL) would be meaningless for this case.
           Modification (EPOLL_CTL_MOD) will reread available I/O.
       9.  Do I need to continuously read/write a file descriptor until
           EAGAIN when using the EPOLLET flag (edge-triggered behavior)?

           Receiving an event from epoll_wait(2) should suggest to you that
           such file descriptor is ready for the requested I/O operation.
           You must consider it ready until the next (nonblocking)
           read/write yields EAGAIN.  When and how you will use the file
           descriptor is entirely up to you.

           For packet/token-oriented files (e.g., datagram socket, terminal
           in canonical mode), the only way to detect the end of the
           read/write I/O space is to continue to read/write until EAGAIN.

           For stream-oriented files (e.g., pipe, FIFO, stream socket), the
           condition that the read/write I/O space is exhausted can also be
           detected by checking the amount of data read from / written to
           the target file descriptor.  For example, if you call read(2) by
           asking to read a certain amount of data and read(2) returns a
           lower number of bytes, you can be sure of having exhausted the
           read I/O space for the file descriptor.  The same is true when
           writing using write(2).  (Avoid this latter technique if you
           cannot guarantee that the monitored file descriptor always
           refers to a stream-oriented file.)
   Possible pitfalls and ways to avoid them
       o Starvation (edge-triggered)

       If there is a large amount of I/O space, it is possible that by
       trying to drain it the other files will not get processed, causing
       starvation.  (This problem is not specific to epoll.)

       The solution is to maintain a ready list and mark the file
       descriptor as ready in its associated data structure, thereby
       allowing the application to remember which files need to be
       processed but still round robin amongst all the ready files.  This
       also supports ignoring subsequent events you receive for file
       descriptors that are already ready.
       o If using an event cache...

       If you use an event cache or store all the file descriptors returned
       from epoll_wait(2), then make sure to provide a way to mark its
       closure dynamically (i.e., caused by a previous event's processing).
       Suppose you receive 100 events from epoll_wait(2), and in event #47
       a condition causes event #13 to be closed.  If you remove the
       structure and close(2) the file descriptor for event #13, then your
       event cache might still say there are events waiting for that file
       descriptor, causing confusion.

       One solution for this is to call, during the processing of event 47,
       epoll_ctl(EPOLL_CTL_DEL) to delete file descriptor 13 and close(2)
       it, then mark its associated data structure as removed and link it
       to a cleanup list.  If you find another event for file descriptor 13
       in your batch processing, you will discover the file descriptor had
       been previously removed and there will be no confusion.
VERSIONS
       The epoll API was introduced in Linux kernel 2.5.44.  Support was
       added to glibc in version 2.3.2.

CONFORMING TO
       The epoll API is Linux-specific.  Some other systems provide similar
       mechanisms, for example, FreeBSD has kqueue, and Solaris has
       /dev/poll.

NOTES
       The set of file descriptors that is being monitored via an epoll
       file descriptor can be viewed via the entry for the epoll file
       descriptor in the process's /proc/[pid]/fdinfo directory.  See
       proc(5) for further details.

       The kcmp(2) KCMP_EPOLL_TFD operation can be used to test whether a
       file descriptor is present in an epoll instance.

SEE ALSO
       epoll_create(2), epoll_create1(2), epoll_ctl(2), epoll_wait(2),
       poll(2), select(2)
COLOPHON
       This page is part of release 5.02 of the Linux man-pages project.  A
       description of the project, information about reporting bugs, and
       the latest version of this page, can be found at
       https://www.kernel.org/doc/man-pages/.

Linux                             2019-03-06                          EPOLL(7)