epoll(7)               Miscellaneous Information Manual               epoll(7)

NAME
       epoll - I/O event notification facility

SYNOPSIS
       #include <sys/epoll.h>

DESCRIPTION
       The epoll API performs a similar task to poll(2): monitoring multiple
       file descriptors to see if I/O is possible on any of them.  The epoll
       API can be used either as an edge-triggered or a level-triggered
       interface and scales well to large numbers of watched file
       descriptors.

       The central concept of the epoll API is the epoll instance, an
       in-kernel data structure which, from a user-space perspective, can be
       considered as a container for two lists:

       •  The interest list (sometimes also called the epoll set): the set
          of file descriptors that the process has registered an interest
          in monitoring.

       •  The ready list: the set of file descriptors that are "ready" for
          I/O.  The ready list is a subset of (or, more precisely, a set of
          references to) the file descriptors in the interest list.  The
          ready list is dynamically populated by the kernel as a result of
          I/O activity on those file descriptors.

       The following system calls are provided to create and manage an
       epoll instance:

       •  epoll_create(2) creates a new epoll instance and returns a file
          descriptor referring to that instance.  (The more recent
          epoll_create1(2) extends the functionality of epoll_create(2).)

       •  Interest in particular file descriptors is then registered via
          epoll_ctl(2), which adds items to the interest list of the epoll
          instance.

       •  epoll_wait(2) waits for I/O events, blocking the calling thread
          if no events are currently available.  (This system call can be
          thought of as fetching items from the ready list of the epoll
          instance.)

   Level-triggered and edge-triggered
       The epoll event distribution interface is able to behave both as
       edge-triggered (ET) and as level-triggered (LT).  The difference
       between the two mechanisms can be described as follows.  Suppose
       that this scenario happens:

       (1)  The file descriptor that represents the read side of a pipe
            (rfd) is registered on the epoll instance.

       (2)  A pipe writer writes 2 kB of data on the write side of the
            pipe.

       (3)  A call to epoll_wait(2) is done that will return rfd as a ready
            file descriptor.

       (4)  The pipe reader reads 1 kB of data from rfd.

       (5)  A call to epoll_wait(2) is done.

       If the rfd file descriptor has been added to the epoll interface
       using the EPOLLET (edge-triggered) flag, the call to epoll_wait(2)
       done in step 5 will probably hang despite the available data still
       present in the file input buffer; meanwhile the remote peer might be
       expecting a response based on the data it already sent.  The reason
       for this is that edge-triggered mode delivers events only when
       changes occur on the monitored file descriptor.  So, in step 5 the
       caller might end up waiting for some data that is already present
       inside the input buffer.  In the above example, an event on rfd will
       be generated because of the write done in step 2 and the event is
       consumed in step 3.  Since the read operation done in step 4 does
       not consume the whole buffer data, the call to epoll_wait(2) done in
       step 5 might block indefinitely.

       An application that employs the EPOLLET flag should use nonblocking
       file descriptors to avoid having a blocking read or write starve a
       task that is handling multiple file descriptors.  The suggested way
       to use epoll as an edge-triggered (EPOLLET) interface is as follows:

       (1)  with nonblocking file descriptors; and

       (2)  by waiting for an event only after read(2) or write(2) return
            EAGAIN.

       By contrast, when used as a level-triggered interface (the default,
       when EPOLLET is not specified), epoll is simply a faster poll(2),
       and can be used wherever the latter is used since it shares the same
       semantics.

       Since even with edge-triggered epoll, multiple events can be
       generated upon receipt of multiple chunks of data, the caller has
       the option to specify the EPOLLONESHOT flag, to tell epoll to
       disable the associated file descriptor after the receipt of an event
       with epoll_wait(2).  When the EPOLLONESHOT flag is specified, it is
       the caller's responsibility to rearm the file descriptor using
       epoll_ctl(2) with EPOLL_CTL_MOD.

       If multiple threads (or processes, if child processes have inherited
       the epoll file descriptor across fork(2)) are blocked in
       epoll_wait(2) waiting on the same epoll file descriptor and a file
       descriptor in the interest list that is marked for edge-triggered
       (EPOLLET) notification becomes ready, just one of the threads (or
       processes) is awoken from epoll_wait(2).  This provides a useful
       optimization for avoiding "thundering herd" wake-ups in some
       scenarios.

   Interaction with autosleep
       If the system is in autosleep mode via /sys/power/autosleep and an
       event happens which wakes the device from sleep, the device driver
       will keep the device awake only until that event is queued.  To keep
       the device awake until the event has been processed, it is necessary
       to use the epoll_ctl(2) EPOLLWAKEUP flag.

       When the EPOLLWAKEUP flag is set in the events field for a struct
       epoll_event, the system will be kept awake from the moment the event
       is queued, through the epoll_wait(2) call which returns the event,
       until the subsequent epoll_wait(2) call.  If the event should keep
       the system awake beyond that time, then a separate wake_lock should
       be taken before the second epoll_wait(2) call.

   /proc interfaces
       The following interfaces can be used to limit the amount of kernel
       memory consumed by epoll:

       /proc/sys/fs/epoll/max_user_watches (since Linux 2.6.28)
              This specifies a limit on the total number of file
              descriptors that a user can register across all epoll
              instances on the system.  The limit is per real user ID.
              Each registered file descriptor costs roughly 90 bytes on a
              32-bit kernel, and roughly 160 bytes on a 64-bit kernel.
              Currently, the default value for max_user_watches is 1/25
              (4%) of the available low memory, divided by the registration
              cost in bytes.

   Example for suggested usage
       While the usage of epoll when employed as a level-triggered
       interface does have the same semantics as poll(2), the
       edge-triggered usage requires more clarification to avoid stalls in
       the application event loop.  In this example, listen_sock is a
       nonblocking socket on which listen(2) has been called.  The function
       do_use_fd() uses the new ready file descriptor until EAGAIN is
       returned by either read(2) or write(2).  An event-driven state
       machine application should, after having received EAGAIN, record its
       current state so that at the next call to do_use_fd() it will
       continue to read(2) or write(2) from where it stopped before.


           #define MAX_EVENTS 10
           struct epoll_event ev, events[MAX_EVENTS];
           int listen_sock, conn_sock, nfds, epollfd;
           struct sockaddr_storage addr;
           socklen_t addrlen;

           /* Code to set up listening socket, 'listen_sock',
              (socket(), bind(), listen()) omitted. */

           epollfd = epoll_create1(0);
           if (epollfd == -1) {
               perror("epoll_create1");
               exit(EXIT_FAILURE);
           }

           ev.events = EPOLLIN;
           ev.data.fd = listen_sock;
           if (epoll_ctl(epollfd, EPOLL_CTL_ADD, listen_sock, &ev) == -1) {
               perror("epoll_ctl: listen_sock");
               exit(EXIT_FAILURE);
           }

           for (;;) {
               nfds = epoll_wait(epollfd, events, MAX_EVENTS, -1);
               if (nfds == -1) {
                   perror("epoll_wait");
                   exit(EXIT_FAILURE);
               }

               for (int n = 0; n < nfds; ++n) {
                   if (events[n].data.fd == listen_sock) {
                       addrlen = sizeof(addr);
                       conn_sock = accept(listen_sock,
                                          (struct sockaddr *) &addr,
                                          &addrlen);
                       if (conn_sock == -1) {
                           perror("accept");
                           exit(EXIT_FAILURE);
                       }
                       setnonblocking(conn_sock);
                       ev.events = EPOLLIN | EPOLLET;
                       ev.data.fd = conn_sock;
                       if (epoll_ctl(epollfd, EPOLL_CTL_ADD, conn_sock,
                                     &ev) == -1) {
                           perror("epoll_ctl: conn_sock");
                           exit(EXIT_FAILURE);
                       }
                   } else {
                       do_use_fd(events[n].data.fd);
                   }
               }
           }

       When used as an edge-triggered interface, for performance reasons,
       it is possible to add the file descriptor inside the epoll interface
       (EPOLL_CTL_ADD) once by specifying (EPOLLIN|EPOLLOUT).  This allows
       you to avoid continuously switching between EPOLLIN and EPOLLOUT by
       calling epoll_ctl(2) with EPOLL_CTL_MOD.

   Questions and answers
       •  What is the key used to distinguish the file descriptors
          registered in an interest list?

          The key is the combination of the file descriptor number and the
          open file description (also known as an "open file handle", the
          kernel's internal representation of an open file).

       •  What happens if you register the same file descriptor on an
          epoll instance twice?

          You will probably get EEXIST.  However, it is possible to add a
          duplicate (dup(2), dup2(2), fcntl(2) F_DUPFD) file descriptor to
          the same epoll instance.  This can be a useful technique for
          filtering events, if the duplicate file descriptors are
          registered with different events masks.

       •  Can two epoll instances wait for the same file descriptor?  If
          so, are events reported to both epoll file descriptors?

          Yes, and events would be reported to both.  However, careful
          programming may be needed to do this correctly.

       •  Is the epoll file descriptor itself poll/epoll/selectable?

          Yes.  If an epoll file descriptor has events waiting, then it
          will indicate as being readable.

       •  What happens if one attempts to put an epoll file descriptor
          into its own file descriptor set?

          The epoll_ctl(2) call fails (EINVAL).  However, you can add an
          epoll file descriptor inside another epoll file descriptor set.

       •  Can I send an epoll file descriptor over a UNIX domain socket to
          another process?

          Yes, but it does not make sense to do this, since the receiving
          process would not have copies of the file descriptors in the
          interest list.

       •  Will closing a file descriptor cause it to be removed from all
          epoll interest lists?

          Yes, but be aware of the following point.  A file descriptor is
          a reference to an open file description (see open(2)).  Whenever
          a file descriptor is duplicated via dup(2), dup2(2), fcntl(2)
          F_DUPFD, or fork(2), a new file descriptor referring to the same
          open file description is created.  An open file description
          continues to exist until all file descriptors referring to it
          have been closed.

          A file descriptor is removed from an interest list only after
          all the file descriptors referring to the underlying open file
          description have been closed.  This means that even after a file
          descriptor that is part of an interest list has been closed,
          events may be reported for that file descriptor if other file
          descriptors referring to the same underlying file description
          remain open.  To prevent this happening, the file descriptor
          must be explicitly removed from the interest list (using
          epoll_ctl(2) EPOLL_CTL_DEL) before it is duplicated.
          Alternatively, the application must ensure that all file
          descriptors are closed (which may be difficult if file
          descriptors were duplicated behind the scenes by library
          functions that used dup(2) or fork(2)).

       •  If more than one event occurs between epoll_wait(2) calls, are
          they combined or reported separately?

          They will be combined.

       •  Does an operation on a file descriptor affect the already
          collected but not yet reported events?

          You can do two operations on an existing file descriptor.
          Remove would be meaningless for this case.  Modify will reread
          available I/O.

       •  Do I need to continuously read/write a file descriptor until
          EAGAIN when using the EPOLLET flag (edge-triggered behavior)?

          Receiving an event from epoll_wait(2) should suggest to you that
          such file descriptor is ready for the requested I/O operation.
          You must consider it ready until the next (nonblocking)
          read/write yields EAGAIN.  When and how you will use the file
          descriptor is entirely up to you.

          For packet/token-oriented files (e.g., datagram socket, terminal
          in canonical mode), the only way to detect the end of the
          read/write I/O space is to continue to read/write until EAGAIN.

          For stream-oriented files (e.g., pipe, FIFO, stream socket), the
          condition that the read/write I/O space is exhausted can also be
          detected by checking the amount of data read from / written to
          the target file descriptor.  For example, if you call read(2) by
          asking to read a certain amount of data and read(2) returns a
          lower number of bytes, you can be sure of having exhausted the
          read I/O space for the file descriptor.  The same is true when
          writing using write(2).  (Avoid this latter technique if you
          cannot guarantee that the monitored file descriptor always
          refers to a stream-oriented file.)

   Possible pitfalls and ways to avoid them
       •  Starvation (edge-triggered)

          If there is a large amount of I/O space, it is possible that by
          trying to drain it the other files will not get processed,
          causing starvation.  (This problem is not specific to epoll.)

          The solution is to maintain a ready list and mark the file
          descriptor as ready in its associated data structure, thereby
          allowing the application to remember which files need to be
          processed but still round robin amongst all the ready files.
          This also supports ignoring subsequent events you receive for
          file descriptors that are already ready.

       •  If using an event cache...

          If you use an event cache or store all the file descriptors
          returned from epoll_wait(2), then make sure to provide a way to
          mark its closure dynamically (i.e., caused by a previous event's
          processing).  Suppose you receive 100 events from epoll_wait(2),
          and in event #47 a condition causes event #13 to be closed.  If
          you remove the structure and close(2) the file descriptor for
          event #13, then your event cache might still say there are
          events waiting for that file descriptor, causing confusion.

          One solution is to call, during the processing of event 47,
          epoll_ctl(EPOLL_CTL_DEL) to delete file descriptor 13 and
          close(2) it, then mark its associated data structure as removed
          and link it to a cleanup list.  If you find another event for
          file descriptor 13 in your batch processing, you will discover
          the file descriptor had been previously removed and there will
          be no confusion.

VERSIONS
       Some other systems provide similar mechanisms; for example, FreeBSD
       has kqueue, and Solaris has /dev/poll.

STANDARDS
       Linux.

HISTORY
       Linux 2.5.44.  glibc 2.3.2.

NOTES
       The set of file descriptors that is being monitored via an epoll
       file descriptor can be viewed via the entry for the epoll file
       descriptor in the process's /proc/pid/fdinfo directory.  See
       proc(5) for further details.

       The kcmp(2) KCMP_EPOLL_TFD operation can be used to test whether a
       file descriptor is present in an epoll instance.

SEE ALSO
       epoll_create(2), epoll_create1(2), epoll_ctl(2), epoll_wait(2),
       poll(2), select(2)

Linux man-pages 6.04               2023-03-30                       epoll(7)