author    Ingo Molnar <mingo@kernel.org>  2018-03-03 14:01:12 +0100
committer Ingo Molnar <mingo@kernel.org>  2018-03-03 15:50:21 +0100
commit    97fb7a0a8944bd6d2c5634e1e0fa689a5c40bc22 (patch)
tree      4993de40ba9dc0cf76d2233b8292a771d8c41941 /kernel/sched/loadavg.c
parent    c2e513821d5df5e772287f6d0c23fd17b7c2bb1a (diff)
download  linux-97fb7a0a8944bd6d2c5634e1e0fa689a5c40bc22.tar.gz
sched: Clean up and harmonize the coding style of the scheduler code base
A good number of small style inconsistencies have accumulated
in the scheduler core, so do a pass over them to harmonize
all these details:

 - fix spelling in comments,

 - use curly braces for multi-line statements,

 - remove unnecessary parentheses from integer literals,

 - capitalize consistently,

 - remove stray newlines,

 - add comments where necessary,

 - remove invalid/unnecessary comments,

 - align structure definitions and other data types vertically,

 - add missing newlines for increased readability,

 - fix vertical tabulation where it's misaligned,

 - harmonize preprocessor conditional block labeling
   and vertical alignment,

 - remove line-breaks where they uglify the code,

 - add a newline after local variable definitions.

No change in functionality:

  md5:
     1191fa0a890cfa8132156d2959d7e9e2  built-in.o.before.asm
     1191fa0a890cfa8132156d2959d7e9e2  built-in.o.after.asm

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/loadavg.c')
-rw-r--r--	kernel/sched/loadavg.c	30
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index 89a989e4d758..a398e7e28a8a 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -32,29 +32,29 @@
  * Due to a number of reasons the above turns in the mess below:
  *
  *  - for_each_possible_cpu() is prohibitively expensive on machines with
- *    serious number of cpus, therefore we need to take a distributed approach
+ *    serious number of CPUs, therefore we need to take a distributed approach
  *    to calculating nr_active.
  *
  *        \Sum_i x_i(t) = \Sum_i x_i(t) - x_i(t_0) | x_i(t_0) := 0
  *                      = \Sum_i { \Sum_j=1 x_i(t_j) - x_i(t_j-1) }
  *
  *    So assuming nr_active := 0 when we start out -- true per definition, we
- *    can simply take per-cpu deltas and fold those into a global accumulate
+ *    can simply take per-CPU deltas and fold those into a global accumulate
  *    to obtain the same result. See calc_load_fold_active().
  *
- *    Furthermore, in order to avoid synchronizing all per-cpu delta folding
+ *    Furthermore, in order to avoid synchronizing all per-CPU delta folding
  *    across the machine, we assume 10 ticks is sufficient time for every
- *    cpu to have completed this task.
+ *    CPU to have completed this task.
  *
  *    This places an upper-bound on the IRQ-off latency of the machine. Then
  *    again, being late doesn't loose the delta, just wrecks the sample.
  *
- *  - cpu_rq()->nr_uninterruptible isn't accurately tracked per-cpu because
- *    this would add another cross-cpu cacheline miss and atomic operation
- *    to the wakeup path. Instead we increment on whatever cpu the task ran
- *    when it went into uninterruptible state and decrement on whatever cpu
+ *  - cpu_rq()->nr_uninterruptible isn't accurately tracked per-CPU because
+ *    this would add another cross-CPU cacheline miss and atomic operation
+ *    to the wakeup path. Instead we increment on whatever CPU the task ran
+ *    when it went into uninterruptible state and decrement on whatever CPU
  *    did the wakeup. This means that only the sum of nr_uninterruptible over
- *    all cpus yields the correct result.
+ *    all CPUs yields the correct result.
  *
  *  This covers the NO_HZ=n code, for extra head-aches, see the comment below.
  */
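
The distributed scheme described in the comment above can be illustrated with a minimal standalone sketch, simplified from calc_load_fold_active(). The struct and names below are illustrative stand-ins for this page, not the kernel's actual types:

	#include <stdatomic.h>

	/* Toy stand-in for the per-CPU runqueue fields used by load accounting. */
	struct toy_rq {
		long nr_running;		/* currently runnable tasks */
		long nr_uninterruptible;	/* this CPU's share of the D-state count
						 * (may be negative; only the sum over
						 * all CPUs is meaningful, see above) */
		long calc_load_active;		/* last value folded into the global sum */
	};

	/* Global accumulator; the kernel uses atomic_long_t calc_load_tasks. */
	static atomic_long toy_calc_load_tasks;

	/*
	 * Publish this CPU's change in active-task count, so the global
	 * count stays correct without ever walking all possible CPUs.
	 */
	static long toy_fold_active(struct toy_rq *rq)
	{
		long nr_active = rq->nr_running + rq->nr_uninterruptible;
		long delta = 0;

		if (nr_active != rq->calc_load_active) {
			delta = nr_active - rq->calc_load_active;
			rq->calc_load_active = nr_active;
		}
		return delta;
	}

	/* Called from each CPU's tick: fold the per-CPU delta into the global sum. */
	static void toy_tick(struct toy_rq *rq)
	{
		long delta = toy_fold_active(rq);

		if (delta)
			atomic_fetch_add(&toy_calc_load_tasks, delta);
	}
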
@@ -115,11 +115,11 @@ calc_load(unsigned long load, unsigned long exp, unsigned long active)
  * Handle NO_HZ for the global load-average.
  *
  * Since the above described distributed algorithm to compute the global
- * load-average relies on per-cpu sampling from the tick, it is affected by
+ * load-average relies on per-CPU sampling from the tick, it is affected by
  * NO_HZ.
  *
  * The basic idea is to fold the nr_active delta into a global NO_HZ-delta upon
- * entering NO_HZ state such that we can include this as an 'extra' cpu delta
+ * entering NO_HZ state such that we can include this as an 'extra' CPU delta
  * when we read the global state.
  *
  * Obviously reality has to ruin such a delightfully simple scheme:
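
Before reality's complications (picked up in the next hunk), the basic fold-on-NO_HZ-entry idea can be sketched loosely after calc_load_nohz_start(), reusing the toy helpers from the sketch above; the names remain illustrative:

	/* Deltas folded by CPUs entering NO_HZ idle, read by the global update. */
	static atomic_long toy_calc_load_nohz;

	/*
	 * On entering NO_HZ idle: this CPU is about to stop ticking, so
	 * publish its outstanding delta now, into the NO_HZ accumulator
	 * rather than the live one.
	 */
	static void toy_nohz_start(struct toy_rq *rq)
	{
		long delta = toy_fold_active(rq);

		if (delta)
			atomic_fetch_add(&toy_calc_load_nohz, delta);
	}
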
@@ -146,9 +146,9 @@ calc_load(unsigned long load, unsigned long exp, unsigned long active)
  *    busy state.
  *
  *    This is solved by pushing the window forward, and thus skipping the
- *    sample, for this cpu (effectively using the NO_HZ-delta for this cpu which
+ *    sample, for this CPU (effectively using the NO_HZ-delta for this CPU which
  *    was in effect at the time the window opened). This also solves the issue
- *    of having to deal with a cpu having been in NO_HZ for multiple LOAD_FREQ
+ *    of having to deal with a CPU having been in NO_HZ for multiple LOAD_FREQ
  *    intervals.
  *
  * When making the ILB scale, we should try to pull this in as well.
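
The window-pushing trick can be sketched as follows, loosely following calc_load_nohz_stop(). The 10-tick slack mirrors the first hunk's comment, LOAD_FREQ is 5*HZ+1 in the kernel, and the names here are again illustrative:

	#define TOY_LOAD_FREQ	(5 * 100 + 1)	/* 5 sec at HZ=100, plus one tick */

	static unsigned long toy_calc_load_update;	/* global: next window start */

	/*
	 * On NO_HZ exit: if this CPU wakes inside the 10-tick sample
	 * window, its contribution is already covered by the NO_HZ delta,
	 * so push its private window one LOAD_FREQ forward and skip the
	 * local sample entirely. (The kernel compares with time_before()
	 * to cope with jiffies wrap; plain comparisons keep the toy simple.)
	 */
	static void toy_nohz_stop(unsigned long *cpu_window, unsigned long now)
	{
		*cpu_window = toy_calc_load_update;

		if (now < *cpu_window)
			return;				/* window not open yet: tick samples */

		if (now < *cpu_window + 10)
			*cpu_window += TOY_LOAD_FREQ;	/* inside window: skip this sample */
	}
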
@@ -299,7 +299,7 @@ calc_load_n(unsigned long load, unsigned long exp,
 }
 
 /*
- * NO_HZ can leave us missing all per-cpu ticks calling
+ * NO_HZ can leave us missing all per-CPU ticks calling
  * calc_load_fold_active(), but since a NO_HZ CPU folds its delta into
  * calc_load_nohz per calc_load_nohz_start(), all we need to do is fold
  * in the pending NO_HZ delta if our NO_HZ period crossed a load cycle boundary.
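
For context, everything folded above ultimately feeds a fixed-point exponential moving average. Here is a standalone sketch of that arithmetic; the FSHIFT/FIXED_1/EXP_* constants match include/linux/sched/loadavg.h, while the function is a simplified illustration of calc_load():

	#define FSHIFT	11			/* bits of fixed-point precision */
	#define FIXED_1	(1 << FSHIFT)		/* 1.0 in fixed point */
	#define EXP_1	1884			/* 1/exp(5sec/1min) in fixed point */
	#define EXP_5	2014			/* 1/exp(5sec/5min) */
	#define EXP_15	2037			/* 1/exp(5sec/15min) */

	/*
	 * One LOAD_FREQ step of the moving average:
	 *
	 *	load' = load * exp + active * (1 - exp)
	 *
	 * computed entirely in FSHIFT-bit fixed point.
	 */
	static unsigned long toy_calc_load(unsigned long load, unsigned long exp,
					   unsigned long active)
	{
		unsigned long newload = load * exp + active * (FIXED_1 - exp);

		return newload >> FSHIFT;
	}
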
@@ -363,7 +363,7 @@ void calc_global_load(unsigned long ticks)
 		return;
 
 	/*
-	 * Fold the 'old' NO_HZ-delta to include all NO_HZ cpus.
+	 * Fold the 'old' NO_HZ-delta to include all NO_HZ CPUs.
 	 */
 	delta = calc_load_nohz_fold();
 	if (delta)
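
Putting the pieces together, the global update that consumes the folded deltas looks roughly like the condensed sketch below, using the toy helpers and constants from the sketches above. In the kernel this is calc_global_load() updating avenrun[], the array behind /proc/loadavg:

	static unsigned long toy_avenrun[3];	/* 1, 5 and 15 minute averages */

	static void toy_global_load_update(void)
	{
		/* Fold the 'old' NO_HZ delta, as in the hunk above. */
		long delta = atomic_exchange(&toy_calc_load_nohz, 0);
		if (delta)
			atomic_fetch_add(&toy_calc_load_tasks, delta);

		long tasks = atomic_load(&toy_calc_load_tasks);
		unsigned long active = tasks > 0 ? tasks * FIXED_1 : 0;

		toy_avenrun[0] = toy_calc_load(toy_avenrun[0], EXP_1,  active);
		toy_avenrun[1] = toy_calc_load(toy_avenrun[1], EXP_5,  active);
		toy_avenrun[2] = toy_calc_load(toy_avenrun[2], EXP_15, active);
	}
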