diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index d46306f..4ef2450 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -186,7 +186,8 @@
If the atomic value v is not equal to u, this function adds a to v, and
returns non-zero. If v is equal to u then it returns zero. This is done as
an atomic operation.

-atomic_add_unless requires explicit memory barriers around the operation.
+atomic_add_unless requires explicit memory barriers around the operation
+unless it fails (returns 0).

atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)

@@ -418,6 +419,20 @@ brothers:
 */
smp_mb__after_clear_bit();

+There are two special bitops with lock barrier semantics (acquire/release,
+the same as spinlocks). They behave like their variants without the
+_lock/_unlock postfix, except that they provide acquire and release
+semantics, respectively. This means they can be used for bit_spin_trylock
+and bit_spin_unlock style operations without specifying any further barriers.
+
+	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
+	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
+	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
+
+The __clear_bit_unlock version is non-atomic; however, it still implements
+unlock barrier semantics. This can be useful if the lock itself is protecting
+the other bits in the word.
+
Finally, there are non-atomic versions of the bitmask operations
provided. They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less