author	Paul Mackerras <>	2009-05-28 22:18:17 +1000
committer	Ingo Molnar <>	2009-05-28 15:03:50 +0200
commit	c93f7669098eb97c5376e5396e3dfb734c17df4f (patch)
tree	0c17f812277320fcefd4f441e1db7a7f862752a5 /include
parent	63299f057fbce47da895e8865cba7e9c3eb01a20 (diff)
perf_counter: Fix race in attaching counters to tasks and exiting
Commit 564c2b21 ("perf_counter: Optimize context switch between identical inherited contexts") introduced a race where it is possible that a counter being attached to a task could get attached to the wrong task, if the task is one that has inherited its context from another task via fork. This happens because the optimized context switch could switch the context to another task after find_get_context has read task->perf_counter_ctxp. In fact, it's possible that the context could then get freed, if the other task then exits.

This fixes the problem by protecting both the context switch and the critical code in find_get_context with spinlocks. The context switch locks the ctx->lock of both the outgoing and incoming contexts before swapping them. That means that once code such as find_get_context has obtained the spinlock for the context associated with a task, the context can't get swapped to another task. However, the context may have been swapped in the interval between reading task->perf_counter_ctxp and getting the lock, so it is necessary to check and retry.

To make sure that none of the contexts being looked at in find_get_context can get freed, this changes the context freeing code to use RCU. Thus an rcu_read_lock() is sufficient to ensure that no contexts can get freed. This part of the patch is lifted from a patch posted by Peter Zijlstra.

This also adds a check to make sure that we can't add a counter to a task that is exiting.

There is also a race between perf_counter_exit_task and find_get_context; this solves the race by moving the get_ctx that was in perf_counter_alloc into the locked region in find_get_context, so that once find_get_context has got the context for a task, it won't get freed even if the task calls perf_counter_exit_task. It doesn't matter if new top-level (non-inherited) counters get attached to the context after perf_counter_exit_task has detached the context from the task. They will just stay there and never get scheduled in until the counters' fds get closed, and then perf_release will remove them from the context and eventually free the context.

With this, we are now doing the unclone in find_get_context rather than when a counter was added to or removed from a context (actually, we were missing the unclone_ctx() call when adding a counter to a context). We don't need to unclone when removing a counter from a context because we have no way to remove a counter from a cloned context.

This also takes out the smp_wmb() in find_get_context, which Peter Zijlstra pointed out was unnecessary because the cmpxchg implies a full barrier anyway.

Signed-off-by: Paul Mackerras <>
Acked-by: Peter Zijlstra <>
Cc: Corey Ashford <>
Cc: Mike Galbraith <>
Cc: Marcelo Tosatti <>
Cc: Arnaldo Carvalho de Melo <>
Cc: John Kacur <>
LKML-Reference: <>
Signed-off-by: Ingo Molnar <>
Diffstat (limited to 'include')
1 files changed, 3 insertions, 2 deletions
diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index a65ddc580514..717bf3b59ba4 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -541,8 +541,9 @@ struct perf_counter_context {
* been cloned (inherited) from a common ancestor.
struct perf_counter_context *parent_ctx;
- u32 parent_gen;
- u32 generation;
+ u64 parent_gen;
+ u64 generation;
+ struct rcu_head rcu_head;
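The rcu_head field added by this hunk is what enables the RCU-deferred freeing described in the commit message: the last reference holder queues the context for freeing with call_rcu, so any reader that found the context under rcu_read_lock() can finish before the memory is released. A kernel-style sketch of that pattern (paraphrased, not the patch's verbatim code):

```c
/* Sketch: free the context only after all current RCU
 * readers are done. kfree runs from the RCU callback once
 * a grace period has elapsed. */
static void free_ctx(struct rcu_head *head)
{
	struct perf_counter_context *ctx;

	ctx = container_of(head, struct perf_counter_context, rcu_head);
	kfree(ctx);
}

static void put_ctx(struct perf_counter_context *ctx)
{
	if (atomic_dec_and_test(&ctx->refcount))
		call_rcu(&ctx->rcu_head, free_ctx);
}
```

This is why an rcu_read_lock() in find_get_context is sufficient to guarantee the context it is inspecting cannot be freed out from under it.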