GET_USER_PAGES(9)         Memory Management in Linux        GET_USER_PAGES(9)

NAME
       get_user_pages - pin user pages in memory

SYNOPSIS
       int get_user_pages(struct task_struct * tsk, struct mm_struct * mm,
                          unsigned long start, int nr_pages, int write,
                          int force, struct page ** pages,
                          struct vm_area_struct ** vmas);

ARGUMENTS
       tsk
              the task_struct to use for page fault accounting, or NULL if
              faults are not to be recorded.

       mm
              mm_struct of the target mm.

       start
              starting user address.

       nr_pages
              number of pages from start to pin.

       write
              whether the pages will be written to by the caller.

       force
              whether to force write access even if the user mapping is
              read-only.  This will result in the page being COWed even in
              MAP_SHARED mappings.  You do not want this.

       pages
              array that receives pointers to the pages pinned.  Should be
              at least nr_pages long.  May be NULL if the caller only
              intends to ensure the pages are faulted in.

       vmas
              array of pointers to the vmas corresponding to each page.
              May be NULL if the caller does not require them.

DESCRIPTION
       Returns the number of pages pinned.  This may be fewer than the
       number requested.  If nr_pages is 0 or negative, returns 0.  If no
       pages were pinned, returns -errno.  Each page returned must be
       released with a call to put_page when the caller is finished with
       it.  vmas will only remain valid while mmap_sem is held.

       Must be called with mmap_sem held for read or write.

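       As an illustration only (this sketch is not taken from the kernel
       source; the buffer size, variable names such as uaddr, and the
       error handling are invented for the example), a caller in process
       context might pin a range of pages for reading and release them as
       follows:

              struct page *pages[16];
              int i, nr;

              down_read(&current->mm->mmap_sem);
              nr = get_user_pages(current, current->mm,
                                  uaddr & PAGE_MASK,     /* page-aligned */
                                  16, 0 /* write */, 0 /* force */,
                                  pages, NULL);
              up_read(&current->mm->mmap_sem);

              if (nr < 0)
                      return nr;             /* nothing pinned: -errno */

              /* ... read pages[0] .. pages[nr - 1] ... */

              for (i = 0; i < nr; i++)
                      put_page(pages[i]);
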
       get_user_pages walks a process's page tables and takes a reference
       to each struct page that each user address corresponds to at a
       given instant.  That is, it takes the page that would be accessed
       if a user thread accesses the given user virtual address at that
       instant.

       This does not guarantee that the page exists in the user mappings
       when get_user_pages returns, and there may even be a completely
       different page there in some cases (e.g. if mmapped pagecache has
       been invalidated and subsequently re-faulted).  However, it does
       guarantee that the page won't be freed completely.  And mostly
       callers simply care that the page contains data that was valid *at
       some point in time*.  Typically, an IO or similar operation cannot
       guarantee anything stronger anyway because locks can't be held over
       the syscall boundary.

       If write=0, the page must not be written to.  If the page is
       written to, set_page_dirty (or set_page_dirty_lock, as appropriate)
       must be called after the page is finished with, and before put_page
       is called.

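       As a hedged sketch (again not taken from the kernel source; nr and
       pages are the same invented names as in the sketch above), a caller
       that pinned the pages with write=1 and then modified their contents
       would release them in that order:

              for (i = 0; i < nr; i++) {
                      /* record the modification before dropping the pin */
                      set_page_dirty_lock(pages[i]);
                      put_page(pages[i]);
              }
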
       get_user_pages is typically used for fewer-copy IO operations, to
       get a handle on the memory by some means other than accesses via
       the user virtual addresses.  The pages may be submitted for DMA to
       devices, or accessed via their kernel linear mapping (with the kmap
       APIs).  Care should be taken to use the correct cache flushing
       APIs.

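       As a rough sketch of such an access (assuming the data flows from
       user space into the kernel and that flush_dcache_page is the
       appropriate flushing call for that direction on the target
       architecture; the function name and destination buffer are invented
       for the example):

              /* copy one pinned page into a kernel buffer */
              static void copy_pinned_page(struct page *page, void *dst)
              {
                      void *kaddr = kmap(page);

                      /* user space may have written through an aliasing
                         mapping; flush before the kernel reads */
                      flush_dcache_page(page);
                      memcpy(dst, kaddr, PAGE_SIZE);
                      kunmap(page);
              }
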
       See also get_user_pages_fast, for performance-critical
       applications.

Kernel Hackers Manual 2.6.          June 2019               GET_USER_PAGES(9)