Commit a09b8e5

kudureranganathsmb49 authored and committed
softirq: Allow raising SCHED_SOFTIRQ from SMP-call-function on RT kernel
BugLink: https://bugs.launchpad.net/bugs/2102118

commit 6675ce2 upstream.

do_softirq_post_smp_call_flush() on PREEMPT_RT kernels carries a WARN_ON_ONCE() for any SOFTIRQ being raised from an SMP-call-function. Since do_softirq_post_smp_call_flush() is called with preemption disabled, raising a SOFTIRQ during flush_smp_call_function_queue() can lead to longer preempt-disabled sections.

Since commit b2a02fc ("smp: Optimize send_call_function_single_ipi()"), IPIs to an idle CPU in TIF_POLLING_NRFLAG mode can be optimized out by instead setting the TIF_NEED_RESCHED bit in the idle task's thread_info and relying on flush_smp_call_function_queue() in the idle-exit path to run the SMP-call-function.

To trigger an idle load balance, the scheduler queues nohz_csd_function(), responsible for triggering an idle load balance on a target nohz idle CPU, and sends an IPI. Only now, this IPI can be optimized out and the SMP-call-function executed from flush_smp_call_function_queue() in do_idle(), which can raise a SCHED_SOFTIRQ to trigger the balancing.

So far this went undetected since the need_resched() check in nohz_csd_function() would make it bail out of idle load balancing early, as the idle thread does not clear TIF_POLLING_NRFLAG before calling flush_smp_call_function_queue(). The need_resched() check was added with the intent to catch a new task wakeup; however, it has recently been discovered to be unnecessary and will be removed in the subsequent commit, after which nohz_csd_function() can raise a SCHED_SOFTIRQ from flush_smp_call_function_queue() to trigger an idle load balance on an idle target in TIF_POLLING_NRFLAG mode.

nohz_csd_function() bails out early if the idle_cpu() check for the target CPU fails, and does not lock the target CPU's rq until the very end, once it has found tasks to run on the CPU, so it will not inhibit the wakeup, or the running, of a newly woken higher-priority task.
Account for this and prevent a WARN_ON_ONCE() when SCHED_SOFTIRQ is raised from flush_smp_call_function_queue().

Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241119054432.6405-2-kprateek.nayak@amd.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Koichiro Den <koichiro.den@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
1 parent df20e80 commit a09b8e5

File tree

1 file changed: 11 additions, 4 deletions


kernel/softirq.c

Lines changed: 11 additions & 4 deletions
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -279,17 +279,24 @@ static inline void invoke_softirq(void)
 		wakeup_softirqd();
 }
 
+#define SCHED_SOFTIRQ_MASK	BIT(SCHED_SOFTIRQ)
+
 /*
  * flush_smp_call_function_queue() can raise a soft interrupt in a function
- * call. On RT kernels this is undesired and the only known functionality
- * in the block layer which does this is disabled on RT. If soft interrupts
- * get raised which haven't been raised before the flush, warn so it can be
+ * call. On RT kernels this is undesired and the only known functionalities
+ * are in the block layer which is disabled on RT, and in the scheduler for
+ * idle load balancing. If soft interrupts get raised which haven't been
+ * raised before the flush, warn if it is not a SCHED_SOFTIRQ so it can be
  * investigated.
  */
 void do_softirq_post_smp_call_flush(unsigned int was_pending)
 {
-	if (WARN_ON_ONCE(was_pending != local_softirq_pending()))
+	unsigned int is_pending = local_softirq_pending();
+
+	if (unlikely(was_pending != is_pending)) {
+		WARN_ON_ONCE(was_pending != (is_pending & ~SCHED_SOFTIRQ_MASK));
 		invoke_softirq();
+	}
 }
 
 #else /* CONFIG_PREEMPT_RT */
