author		Will Deacon <firstname.lastname@example.org>	2016-06-08 15:10:57 +0100
committer	Will Deacon <email@example.com>	2016-06-15 09:51:36 +0100
arm64: spinlock: fix spin_unlock_wait for LSE atomics
Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under the premise that the LSE atomics (in particular, the LDADDA instruction) are indivisible.

Unfortunately, these instructions are only indivisible when used with the -AL (full ordering) suffix and, consequently, the same issue can theoretically be observed with LSE atomics, where a later (in program order) load can be speculated before the write portion of the atomic operation.

This patch fixes the issue by performing a CAS of the lock once we've established that it's unlocked, in much the same way as the LL/SC code.

Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <firstname.lastname@example.org>
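As a rough illustration of what the patched wait loop does, here is a sketch in C using GCC __atomic builtins rather than the kernel's inline assembly. The ticket-word layout, type, and function names below are assumptions made for the example (not the kernel's definitions), and the real code waits on wfe instead of spinning:

#include <stdint.h>
#include <stdbool.h>

/* Illustrative ticket-lock word: owner in bits 15:0, next in bits 31:16. */
typedef struct { uint32_t owner_next; } ticket_lock_t;

static void spin_unlock_wait_sketch(ticket_lock_t *lock)
{
	for (;;) {
		/* ldaxr: load-acquire the whole ticket word */
		uint32_t val = __atomic_load_n(&lock->owner_next,
					       __ATOMIC_ACQUIRE);

		/* eor %w1, %w0, %w0, ror #16: locked iff owner != next */
		if ((uint16_t)val != (uint16_t)(val >> 16))
			continue;	/* still held: go round again */

		/*
		 * The lock looks free, but a concurrent locker's atomic
		 * (e.g. an LDADDA without full ordering) may still have a
		 * write in flight.  Re-store the value we just read with a
		 * compare-and-swap (stxr in the LL/SC variant, cas in the
		 * LSE variant) so that we serialise against such lockers.
		 */
		uint32_t expected = val;
		if (__atomic_compare_exchange_n(&lock->owner_next,
						&expected, val, false,
						__ATOMIC_ACQUIRE,
						__ATOMIC_ACQUIRE))
			return;	/* word unchanged: lock really was free */
		/* someone raced with us: reload and re-check (cbnz 2b) */
	}
}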
Diffstat (limited to 'arch/arm64/include/asm')
arch/arm64/include/asm/spinlock.h | 10 +++++++---
1 file changed, 7 insertions, 3 deletions
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index aac64d55cb22..d5c894253e73 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -43,13 +43,17 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
"2: ldaxr %w0, %2\n"
" eor %w1, %w0, %w0, ror #16\n"
" cbnz %w1, 1b\n"
+ /* Serialise against any concurrent lockers */
/* LL/SC */
" stxr %w1, %w0, %2\n"
-" cbnz %w1, 2b\n", /* Serialise against any concurrent lockers */
- /* LSE atomics */
+ /* LSE atomics */
+" mov %w1, %w0\n"
+" cas %w0, %w0, %2\n"
+" eor %w1, %w1, %w0\n")
+" cbnz %w1, 2b\n"
: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)