riscv/spinlock: Strengthen implementations with fences

Current implementations map locking operations using .rl and .aq
annotations.  However, this mapping is unsound w.r.t. the kernel
memory consistency model (LKMM) [1]:

Referring to the "unlock-lock-read-ordering" test reported below,
Daniel wrote:

  "I think an RCpc interpretation of .aq and .rl would in fact
   allow the two normal loads in P1 to be reordered [...]

   The intuition would be that the amoswap.w.aq can forward from
   the amoswap.w.rl while that's still in the store buffer, and
   then the lw x3,0(x4) can also perform while the amoswap.w.rl
   is still in the store buffer, all before the lw x1,0(x2)
   executes.  That's not forbidden unless the amoswaps are RCsc,
   unless I'm missing something.

   Likewise even if the unlock()/lock() is between two stores.
   A control dependency might originate from the load part of
   the amoswap.w.aq, but there still would have to be something
   to ensure that this load part in fact performs after the store
   part of the amoswap.w.rl performs globally, and that's not
   automatic under RCpc."

Simulation of the RISC-V memory consistency model confirmed this
expectation.

In order to "synchronize" LKMM and RISC-V's implementation, this
commit strengthens the implementations of the locking operations
by replacing .rl and .aq with ("lightweight") fences, respectively
"fence rw, w" and "fence r, rw".

C unlock-lock-read-ordering

{}
/* s initially owned by P1 */

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_wmb();
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y, spinlock_t *s)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	spin_unlock(s);
	spin_lock(s);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)

[1] https://marc.info/?l=linux-kernel&m=151930201102853&w=2
    https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
    https://marc.info/?l=linux-kernel&m=151633436614259&w=2
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <albert@sifive.com>
Cc: Daniel Lustig <dlustig@nvidia.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Luc Maranget <luc.maranget@inria.fr>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-riscv@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
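
To make the mapping concrete, here is a minimal sketch of a test-and-set
spinlock whose unlock and trylock fast paths use the explicit fences in
place of the .rl/.aq annotations.  This is an illustration rather than
the literal patch: the lock-word layout, function bodies, and operand
constraints are assumptions made for the example.

/* Illustrative sketch only: test-and-set spinlock with explicit fences. */
typedef struct {
	volatile unsigned int lock;	/* 0 = unlocked, 1 = locked */
} arch_spinlock_t;

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	__asm__ __volatile__ (
		"	fence rw, w\n"		/* release: prior loads/stores before the store below */
		"	amoswap.w x0, x0, %0\n"	/* write 0 to the lock word, discard the old value */
		: "=A" (lock->lock)
		:: "memory");
}

static inline int arch_spin_trylock(arch_spinlock_t *lock)
{
	int tmp = 1, busy;

	__asm__ __volatile__ (
		"	amoswap.w %0, %2, %1\n"	/* swap 1 into the lock word; old value -> busy */
		"	fence r, rw\n"		/* acquire: the load part above before later accesses */
		: "=r" (busy), "+A" (lock->lock)
		: "r" (tmp)
		: "memory");

	return !busy;	/* acquired iff the lock word was 0 */
}

With the "fence rw, w" in unlock() and the "fence r, rw" in the
subsequent lock()/trylock(), the unlock()/lock() pair is intended to
forbid the "exists" outcome of the litmus test above, matching what
LKMM requires.  The fence strings used here correspond to the
RISCV_RELEASE_BARRIER and RISCV_ACQUIRE_BARRIER helpers defined in
arch/riscv/include/asm/fence.h, shown next: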
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef _ASM_RISCV_FENCE_H
#define _ASM_RISCV_FENCE_H
#define RISCV_FENCE_ASM(p, s) "\tfence " #p "," #s "\n"
#define RISCV_FENCE(p, s) \
	({ __asm__ __volatile__ (RISCV_FENCE_ASM(p, s) : : : "memory"); })
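/*
 * Example expansion (illustrative): RISCV_FENCE(r, rw) becomes
 *
 *	({ __asm__ __volatile__ ("\tfence r,rw\n" : : : "memory"); })
 *
 * i.e. a single "fence r,rw" instruction plus a compiler barrier
 * provided by the "memory" clobber.
 */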
#ifdef CONFIG_SMP
#define RISCV_ACQUIRE_BARRIER RISCV_FENCE_ASM(r, rw)
#define RISCV_RELEASE_BARRIER RISCV_FENCE_ASM(rw, w)
#define RISCV_FULL_BARRIER RISCV_FENCE_ASM(rw, rw)
#else
#define RISCV_ACQUIRE_BARRIER
#define RISCV_RELEASE_BARRIER
#define RISCV_FULL_BARRIER
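/*
 * Illustrative note: with CONFIG_SMP disabled these expand to nothing,
 * since there is no other hart to order against; callers that splice
 * them into an __asm__ __volatile__ block still get a compiler barrier
 * from that block's "memory" clobber.
 */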
#endif
#endif /* _ASM_RISCV_FENCE_H */
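
For context, a hedged usage sketch of how such fence macros are
typically consumed.  The helper names example_store_release() and
example_load_acquire() are invented for this illustration; the
kernel's own acquire/release accessors in asm/barrier.h are built in
essentially this way on top of RISCV_FENCE().

/* Illustrative only: acquire/release accessors built on RISCV_FENCE(). */
#include <linux/compiler.h>
#include <asm/fence.h>

#define example_store_release(p, v)					\
do {									\
	RISCV_FENCE(rw, w);	/* prior loads/stores before the store */ \
	WRITE_ONCE(*(p), (v));						\
} while (0)

#define example_load_acquire(p)						\
({									\
	typeof(*(p)) ___v = READ_ONCE(*(p));				\
	RISCV_FENCE(r, rw);	/* the load before later loads/stores */ \
	___v;								\
})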