rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled #21

Open

wants to merge 2 commits into base: rcu-dev

Conversation

kernel-patches-bot

Pull request for series with
subject: rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled
version: 1
url: https://patchwork.kernel.org/project/rcu/list/?series=686650

@kernel-patches-bot
Author

Master branch: 00c153b
series: https://patchwork.kernel.org/project/rcu/list/?series=686650
version: 1

Pull request is NOT updated. Failed to apply https://patchwork.kernel.org/project/rcu/list/?series=686650
error message:

Cmd('git') failed due to: exit code(128)
  cmdline: git am -3
  stdout: 'Applying: rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled
Using index info to reconstruct a base tree...
M	kernel/rcu/tree.c
Falling back to patching base and 3-way merge...
Auto-merging kernel/rcu/tree.c
CONFLICT (content): Merge conflict in kernel/rcu/tree.c
Patch failed at 0001 rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".'
  stderr: 'error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch'

conflict:

diff --cc kernel/rcu/tree.c
index 282002e62cf3,97ef602da3d5..000000000000
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@@ -2851,9 -2812,91 +2851,95 @@@ void call_rcu(struct rcu_head *head, rc
  		local_irq_restore(flags);
  	}
  }
++<<<<<<< HEAD
++=======
+ 
+ #ifdef CONFIG_RCU_LAZY
+ /**
+  * call_rcu_flush() - Queue RCU callback for invocation after grace period, and
+  * flush all lazy callbacks (including the new one) to the main ->cblist while
+  * doing so.
+  *
+  * @head: structure to be used for queueing the RCU updates.
+  * @func: actual callback function to be invoked after the grace period
+  *
+  * The callback function will be invoked some time after a full grace
+  * period elapses, in other words after all pre-existing RCU read-side
+  * critical sections have completed.
+  *
+  * Use this API instead of call_rcu() if you don't want the callback to be
+  * invoked after very long periods of time, which can happen on systems without
+  * memory pressure and on systems which are lightly loaded or mostly idle.
+  * This function will cause callbacks to be invoked sooner than later at the
+  * expense of extra power. Other than that, this function is identical to, and
+  * reuses call_rcu()'s logic. Refer to call_rcu() for more details about memory
+  * ordering and other functionality.
+  */
+ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
+ {
+ 	return __call_rcu_common(head, func, false);
+ }
+ EXPORT_SYMBOL_GPL(call_rcu_flush);
+ 
+ /**
+  * call_rcu() - Queue an RCU callback for invocation after a grace period.
+  * By default the callbacks are 'lazy' and are kept hidden from the main
+  * ->cblist to prevent starting of grace periods too soon.
+  * If you desire grace periods to start very soon, use call_rcu_flush().
+  *
+  * @head: structure to be used for queueing the RCU updates.
+  * @func: actual callback function to be invoked after the grace period
+  *
+  * The callback function will be invoked some time after a full grace
+  * period elapses, in other words after all pre-existing RCU read-side
+  * critical sections have completed.  However, the callback function
+  * might well execute concurrently with RCU read-side critical sections
+  * that started after call_rcu() was invoked.
+  *
+  * RCU read-side critical sections are delimited by rcu_read_lock()
+  * and rcu_read_unlock(), and may be nested.  In addition, but only in
+  * v5.0 and later, regions of code across which interrupts, preemption,
+  * or softirqs have been disabled also serve as RCU read-side critical
+  * sections.  This includes hardware interrupt handlers, softirq handlers,
+  * and NMI handlers.
+  *
+  * Note that all CPUs must agree that the grace period extended beyond
+  * all pre-existing RCU read-side critical section.  On systems with more
+  * than one CPU, this means that when "func()" is invoked, each CPU is
+  * guaranteed to have executed a full memory barrier since the end of its
+  * last RCU read-side critical section whose beginning preceded the call
+  * to call_rcu().  It also means that each CPU executing an RCU read-side
+  * critical section that continues beyond the start of "func()" must have
+  * executed a memory barrier after the call_rcu() but before the beginning
+  * of that RCU read-side critical section.  Note that these guarantees
+  * include CPUs that are offline, idle, or executing in user mode, as
+  * well as CPUs that are executing in the kernel.
+  *
+  * Furthermore, if CPU A invoked call_rcu() and CPU B invoked the
+  * resulting RCU callback function "func()", then both CPU A and CPU B are
+  * guaranteed to execute a full memory barrier during the time interval
+  * between the call to call_rcu() and the invocation of "func()" -- even
+  * if CPU A and CPU B are the same CPU (but again only if the system has
+  * more than one CPU).
+  *
+  * Implementation of these memory-ordering guarantees is described here:
+  * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
+  */
+ void call_rcu(struct rcu_head *head, rcu_callback_t func)
+ {
+ 	return __call_rcu_common(head, func, true);
+ }
++>>>>>>> rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled
  EXPORT_SYMBOL_GPL(call_rcu);
+ #else
+ void call_rcu(struct rcu_head *head, rcu_callback_t func)
+ {
+ 	return __call_rcu_common(head, func, false);
+ }
+ EXPORT_SYMBOL_GPL(call_rcu);
+ #endif
  
 +
  /* Maximum number of jiffies to wait before draining a batch. */
  #define KFREE_DRAIN_JIFFIES (5 * HZ)
  #define KFREE_N_BATCHES 2
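For context on the API documented in the hunk above: a caller that can tolerate deferred callback invocation keeps using call_rcu(), while a caller that needs the grace period to start promptly uses call_rcu_flush(). Below is a minimal, hypothetical caller sketch (struct foo, foo_free_rcu() and foo_release() are illustrative names, not part of this series), assuming the usual pattern of an rcu_head embedded in the protected object:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct rcu_head rcu;
	int data;
};

static void foo_free_rcu(struct rcu_head *rcu)
{
	/* Runs after a full grace period; safe to free the enclosing object. */
	kfree(container_of(rcu, struct foo, rcu));
}

static void foo_release(struct foo *fp)
{
	/*
	 * Lazy by default when CONFIG_RCU_LAZY=y: invocation may be batched
	 * and deferred to save power. A caller that cannot wait that long
	 * would instead use call_rcu_flush(&fp->rcu, foo_free_rcu), which
	 * the hunk above provides when CONFIG_RCU_LAZY is enabled.
	 */
	call_rcu(&fp->rcu, foo_free_rcu);
}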

@kernel-patches-bot
Author

Master branch: fa70e60
series: https://patchwork.kernel.org/project/rcu/list/?series=686936
version: 3

Currently, call_rcu() is always lazy regardless of whether
CONFIG_RCU_LAZY is enabled, which also means that when
CONFIG_RCU_LAZY is disabled, call_rcu_flush() is lazy as well.
Therefore, this commit makes call_rcu() lazy only when
CONFIG_RCU_LAZY is enabled.

Signed-off-by: Zqiang <[email protected]>
Acked-by: Joel Fernandes (Google) <[email protected]>
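In effect, the change gates the lazy flag passed to __call_rcu_common() on the Kconfig option instead of passing it unconditionally. A minimal sketch of the resulting shape of kernel/rcu/tree.c, condensed from the patch side of the conflict hunk above (kernel-doc comments omitted; the exact context in the file may differ):

#ifdef CONFIG_RCU_LAZY
/* Flush variant: queue the callback on the main ->cblist right away. */
void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
{
	return __call_rcu_common(head, func, false);
}
EXPORT_SYMBOL_GPL(call_rcu_flush);

/* Default: lazy, so callbacks may be batched and grace periods start later. */
void call_rcu(struct rcu_head *head, rcu_callback_t func)
{
	return __call_rcu_common(head, func, true);
}
EXPORT_SYMBOL_GPL(call_rcu);
#else
/* Without CONFIG_RCU_LAZY, call_rcu() is never lazy. */
void call_rcu(struct rcu_head *head, rcu_callback_t func)
{
	return __call_rcu_common(head, func, false);
}
EXPORT_SYMBOL_GPL(call_rcu);
#endif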