X-Git-Url: http://ftp.safe.ca/?a=blobdiff_plain;f=Documentation%2Fatomic_ops.txt;h=396bec3b74ed12ca1ddbfe980f9ffd4ca4d91308;hb=53de33427fa3d7dd62cc5ec75ce0d4e6c6d602dd;hp=f20c10c2858fc7c60f3003a77c905bf108dfddd7;hpb=26333576fd0d0b52f6e4025c5aded97e188bdd44;p=safe%2Fjmp%2Flinux-2.6

diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index f20c10c..396bec3 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -186,7 +186,8 @@ If the atomic value v is not equal to u, this function adds a to v, and
 returns non zero. If v is equal to u then it returns zero. This is done as
 an atomic operation.
 
-atomic_add_unless requires explicit memory barriers around the operation.
+atomic_add_unless requires explicit memory barriers around the operation
+unless it fails (returns 0).
 
 atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)
 
@@ -228,10 +229,10 @@ kernel.  It is the use of atomic counters to implement reference
 counting, and it works such that once the counter falls to zero it can
 be guaranteed that no other entity can be accessing the object:
 
-static void obj_list_add(struct obj *obj)
+static void obj_list_add(struct obj *obj, struct list_head *head)
 {
 	obj->active = 1;
-	list_add(&obj->list);
+	list_add(&obj->list, head);
 }
 
 static void obj_list_del(struct obj *obj)