| author | Tejun Heo <tj@kernel.org> | 2026-03-07 04:53:32 -1000 |
|---|---|---|
| committer | Tejun Heo <tj@kernel.org> | 2026-03-07 04:53:32 -1000 |
| commit | 57ccf5ccdc56954f2a91a7f66684fd31c566bde5 | |
| tree | 5d0c81fca66a4e1138aec57b0df39548c8dc29ae /kernel | |
| parent | 2a0596d516870951ce0e8edf510e48c87cb80761 | |
sched_ext: Fix enqueue_task_scx() truncation of upper enqueue flags
enqueue_task_scx() takes int enq_flags from the sched_class interface.
SCX enqueue flags starting at bit 32 (SCX_ENQ_PREEMPT and above) are
silently truncated when passed through activate_task(). extra_enq_flags
was added as a workaround - storing high bits in rq->scx.extra_enq_flags
and OR-ing them back in enqueue_task_scx(). However, the OR target is
still the int parameter, so the high bits are lost anyway.
The current impact is limited as the only affected flag is SCX_ENQ_PREEMPT,
which is informational to the BPF scheduler - its loss means the scheduler
doesn't know about the preemption but doesn't cause incorrect behavior.
Fix by renaming the int parameter to core_enq_flags and introducing a
u64 enq_flags local that merges both sources. All downstream functions
already take u64 enq_flags.
Fixes: f0e1a0643a59 ("sched_ext: Implement BPF extensible scheduler class")
Cc: stable@vger.kernel.org # v6.12+
Acked-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/sched/ext.c | 5 |
1 file changed, 2 insertions, 3 deletions
```diff
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index f323df7be180..174e3650d7fe 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1470,16 +1470,15 @@ static void clr_task_runnable(struct task_struct *p, bool reset_runnable_at)
 	p->scx.flags |= SCX_TASK_RESET_RUNNABLE_AT;
 }
 
-static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int enq_flags)
+static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int core_enq_flags)
 {
 	struct scx_sched *sch = scx_root;
 	int sticky_cpu = p->scx.sticky_cpu;
+	u64 enq_flags = core_enq_flags | rq->scx.extra_enq_flags;
 
 	if (enq_flags & ENQUEUE_WAKEUP)
 		rq->scx.flags |= SCX_RQ_IN_WAKEUP;
 
-	enq_flags |= rq->scx.extra_enq_flags;
-
 	if (sticky_cpu >= 0)
 		p->scx.sticky_cpu = -1;
```
