FSS(7)                   Device and Network Interfaces                  FSS(7)

NAME

       FSS - Fair share scheduler

DESCRIPTION

       The fair share scheduler (FSS) guarantees application performance by
       explicitly allocating shares of CPU resources to projects. A share
       indicates a project's entitlement to available CPU resources. Because
       shares are meaningful only in comparison with other projects' shares,
       the absolute quantity of shares is not important. Any number that is
       in proportion with the desired CPU entitlement can be used.

       The goals of the FSS scheduler differ from those of the traditional
       time-sharing scheduling class (TS). In addition to scheduling
       individual LWPs, the FSS scheduler schedules projects against each
       other, making it impossible for any project to acquire more CPU
       cycles simply by running more processes concurrently.

       FSS calculates a project's entitlement independently for each
       processor set that contains processes belonging to the project. If a
       project is running on more than one processor set, it can have a
       different entitlement on each set. A project's entitlement is defined
       as the ratio between the number of shares given to the project and
       the sum of shares of all active projects running on the same
       processor set. An active project is one that has at least one running
       or runnable process. Entitlements are recomputed whenever any project
       becomes active or inactive, or whenever the number of shares is
       changed.

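       The entitlement arithmetic above can be sketched in a few lines (an
       illustration only, not part of the FSS implementation; the project
       names and share counts are hypothetical):

```python
# Sketch of FSS entitlement arithmetic: a project's entitlement on a
# processor set is its shares divided by the total shares of all active
# projects on that set. Project names and share counts are hypothetical.
def entitlements(shares_by_project):
    """Map each active project to its CPU entitlement (a fraction)."""
    total = sum(shares_by_project.values())
    return {name: shares / total for name, shares in shares_by_project.items()}

# Three active projects competing on one processor set:
active = {"x-files": 10, "web": 5, "batch": 5}
print(entitlements(active))  # x-files is entitled to 10/20 = 50%
```

       Note that only active projects enter the denominator, which is why
       entitlements must be recomputed whenever a project becomes active or
       inactive.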
       Processor sets represent virtual machines in the FSS scheduling class
       and processes are scheduled independently in each processor set. That
       is, processes compete with each other only if they are running on the
       same processor set. When a processor set is destroyed, all processes
       that were bound to it are moved to the default processor set, which
       always exists. Empty processor sets (that is, sets without processors
       in them) have no impact on the FSS scheduler behavior.

       If a processor set contains a mix of TS/IA and FSS processes, the
       fairness of the FSS scheduling class can be compromised because these
       classes use the same range of priorities. Fairness is most
       significantly affected if processes running in the TS scheduling
       class are CPU-intensive and are bound to processors within the
       processor set. As a result, you should avoid having processes from
       the TS/IA and FSS classes share the same processor set. RT and FSS
       processes use disjoint priority ranges and therefore can share
       processor sets.

       As projects execute, their CPU usage is accumulated over time. The
       FSS scheduler periodically decays the CPU usage of every project by
       multiplying it by a decay factor, ensuring that more recent CPU
       usage has greater weight when taken into account for scheduling. The
       FSS scheduler continually adjusts the priorities of all processes to
       make each project's relative CPU usage converge with its entitlement.

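       The decay described above behaves like an exponentially weighted
       history of CPU usage. The following sketch illustrates the principle
       only; the decay factor and per-tick usage values are hypothetical,
       not the values used by the kernel:

```python
# Illustration of periodic usage decay: on each tick the accumulated
# total is multiplied by a factor < 1 before new usage is added, so
# recent usage dominates. The 0.8 factor is hypothetical.
DECAY = 0.8

def decayed_usage(per_tick_usage):
    """Fold a sequence of per-tick CPU usage samples into a decayed total."""
    total = 0.0
    for used in per_tick_usage:
        total = total * DECAY + used
    return total

# The same burst of usage counts for less the longer ago it happened:
old_burst = decayed_usage([100, 0, 0, 0, 0])
recent_burst = decayed_usage([0, 0, 0, 0, 100])
print(old_burst < recent_burst)  # True
```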
       While FSS is designed to fairly allocate cycles over the long term,
       it is possible that projects will not receive their allocated shares'
       worth of CPU cycles due to uneven demand. This makes one-shot,
       instantaneous analysis of FSS performance data unreliable.

       Note that share is not the same as utilization. A project may be
       allocated 50% of the system although, on average, it uses just 20%.
       Shares serve to cap a project's CPU usage only when there is
       competition from other projects running on the same processor set.
       When there is no competition, utilization may be larger than the
       entitlement based on shares. Allocating a small share to a busy
       project slows it down but does not prevent it from completing its
       work if the system is not saturated.

       The configuration of CPU shares is managed by the name service as a
       property of the project(4) database. In the following example, an
       entry in the /etc/project file sets the number of shares for project
       x-files to 10:

         x-files:100::::project.cpu-shares=(privileged,10,none)

       Projects with an undefined number of shares are given one share
       each, which means that such projects are treated with equal
       importance. Projects with 0 shares run only when there are no
       projects with non-zero shares competing for the same processor set.
       The maximum number of shares that can be assigned to one project is
       65535.

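       The defaulting rules above can be summarized in a small sketch
       (illustration only; effective_shares is a hypothetical helper, not a
       Solaris API):

```python
# Hypothetical helper illustrating the share-assignment rules above:
# a project with no explicit project.cpu-shares value is given one
# share, and 65535 is the largest value that may be assigned.
MAX_SHARES = 65535

def effective_shares(configured=None):
    """Return the share count FSS would use for a project."""
    if configured is None:              # undefined: treated as one share
        return 1
    if not 0 <= configured <= MAX_SHARES:
        raise ValueError("shares must be between 0 and 65535")
    return configured

print(effective_shares())    # 1
print(effective_shares(10))  # 10
```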
       You can use the prctl(1) command to determine the current share
       assignment for a given project:

         $ prctl -n project.cpu-shares -i project x-files

       or to change the number of shares if you have root privileges:

         # prctl -r -n project.cpu-shares -v 5 -i project x-files

       See the prctl(1) man page for additional information on how to
       modify and examine resource controls associated with active
       processes, tasks, or projects on the system. See
       resource_controls(5) for a description of the resource controls
       supported in the current release of the Solaris operating system.

       By default, project system (project ID 0) includes all system
       daemons started by initialization scripts and has an "unlimited"
       number of shares. That is, it is always scheduled first no matter
       how many shares are given to other projects.

       The following command sets FSS as the default scheduler for the
       system:

         # dispadmin -d FSS

       This change takes effect on the next reboot. Alternatively, you can
       move processes from the time-sharing scheduling class (as well as
       the special case of init) into the FSS class without changing your
       default scheduling class and rebooting. Become root, and then use
       the priocntl(1) command, as shown in the following example:

         # priocntl -s -c FSS -i class TS
         # priocntl -s -c FSS -i pid 1


CONFIGURING SCHEDULER WITH DISPADMIN

       You can use the dispadmin(1M) command to examine and tune the FSS
       scheduler's time quantum value. The time quantum is the amount of
       time that a thread is allowed to run before it must relinquish the
       processor. The following example dumps the current time quantum for
       the fair share scheduler:

         $ dispadmin -g -c FSS
              #
              # Fair Share Scheduler Configuration
              #
              RES=1000
              #
              # Time Quantum
              #
              QUANTUM=110

       The value of QUANTUM represents some fraction of a second, with the
       fractional unit determined by the reciprocal of RES. With the
       default value of RES=1000, the reciprocal is .001, or one
       millisecond. Thus, by default, the QUANTUM value represents the time
       quantum in milliseconds.

       If you change the RES value using dispadmin with the -r option, you
       also change the QUANTUM value. For example, a quantum of 110 with a
       RES of 1000 becomes a quantum of 11 with a RES of 100. The
       fractional unit is different while the amount of time is the same.

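       The RES/QUANTUM relationship can be checked with a little arithmetic
       (a sketch; the values mirror the example above):

```python
# QUANTUM is expressed in units of 1/RES seconds, so the wall-clock
# length of the quantum is QUANTUM / RES seconds. The two settings
# below describe the same 110 ms quantum at different resolutions.
def quantum_seconds(quantum, res):
    """Convert a dispadmin QUANTUM/RES pair to seconds."""
    return quantum / res

print(quantum_seconds(110, 1000))  # 0.11 seconds with the default RES
print(quantum_seconds(11, 100))    # 0.11 seconds again: same duration
```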
       You can use the -s option to change the time quantum value. Note
       that such changes are not preserved across reboots. Refer to the
       dispadmin(1M) man page for additional information.

ATTRIBUTES

       See attributes(5) for descriptions of the following attributes:

       ┌─────────────────────────────┬─────────────────────────────┐
       │ATTRIBUTE TYPE               │ATTRIBUTE VALUE              │
       ├─────────────────────────────┼─────────────────────────────┤
       │Availability                 │SUNWcsu                      │
       └─────────────────────────────┴─────────────────────────────┘


SEE ALSO

       prctl(1), priocntl(1), dispadmin(1M), psrset(1M), priocntl(2),
       project(4), attributes(5), resource_controls(5)

       System Administration Guide: Virtualization Using the Solaris
       Operating System

SunOS 5.11                        1 Oct 2004                            FSS(7)