void clear_bit(int nr, void *addr)
Atomically clear the nr-th bit starting from addr.
void change_bit(int nr, void *addr)
Atomically flip the value of the nr-th bit starting from addr.
int test_and_set_bit(int nr, void *addr)
Atomically set the nr-th bit starting from addr and return the previous value.
int test_and_clear_bit(int nr, void *addr)
Atomically clear the nr-th bit starting from addr and return the previous value.
int test_and_change_bit(int nr, void *addr)
Atomically flip the nr-th bit starting from addr and return the previous value.
int test_bit(int nr, void *addr)
Atomically return the value of the nr-th bit starting from addr.
arch/x86/include/asm/bitops.h
/*
 * These have to be done with inline assembly: that way the bit-setting
 * is guaranteed to be atomic. All bit operations return 0 if the bit
 * was cleared before the operation and != 0 if it was not.
 *
 * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
 */
#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
/* Technically wrong, but this avoids compilation errors on some gcc versions. */
#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))
#else
#define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
#endif

#define ADDR				BITOP_ADDR(addr)

/*
 * We do the locked ops that don't return the old value as
 * a mask operation on a byte.
 */
#define IS_IMMEDIATE(nr)		(__builtin_constant_p(nr))
#define CONST_MASK_ADDR(nr, addr)	BITOP_ADDR((void *)(addr) + ((nr)>>3))
#define CONST_MASK(nr)			(1 << ((nr) & 7))
/**
 * set_bit - Atomically set a bit in memory
 * @nr: the bit to set
 * @addr: the address to start counting from
 *
 * This function is atomic and may not be reordered.  See __set_bit()
 * if you do not require the atomic guarantees.
 *
 * Note: there are no guarantees that this function will not be reordered
 * on non x86 architectures, so if you are writing portable code,
 * make sure not to rely on its reordering guarantees.
 *
 * Note that @nr may be almost arbitrarily large; this function is not
 * restricted to acting on a single-word quantity.
 */
static __always_inline void set_bit(long nr, volatile unsigned long *addr)
{
	if (IS_IMMEDIATE(nr)) {
		asm volatile(LOCK_PREFIX "orb %1,%0"
			: CONST_MASK_ADDR(nr, addr)
			: "iq" ((u8)CONST_MASK(nr))
			: "memory");
	} else {
		asm volatile(LOCK_PREFIX "bts %1,%0"
			: BITOP_ADDR(addr) : "Ir" (nr) : "memory");
	}
}
/**
 * __set_bit - Set a bit in memory
 * @nr: the bit to set
 * @addr: the address to start counting from
 *
 * Unlike set_bit(), this function is non-atomic and may be reordered.
 * If it's called on the same region of memory simultaneously, the effect
 * may be that only one operation succeeds.
 */
static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
{
	asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory");
}
...
sema_init(struct semaphore *, int)
Initializes the dynamically created semaphore to the given count
down_interruptible (struct semaphore *)
Tries to acquire the given semaphore and enter interruptible sleep if it is contended
down(struct semaphore *)
Tries to acquire the given semaphore and enter uninterruptible sleep if it is contended
down_trylock(struct semaphore *)
Tries to acquire the given semaphore and immediately return nonzero if it is contended
up(struct semaphore *)
Releases the given semaphore and wakes a waiting task, if any
Reader-Writer Semaphores
All reader-writer semaphores are mutexes; that is, their usage count is one.
They enforce mutual exclusion only for writers, not for readers.
Any number of readers can concurrently hold the read lock, so long as there are no writers.
Only one writer (with no readers) can hold the write lock at a time.
All reader-writer semaphore sleeps are uninterruptible (they cannot be interrupted by a signal), so there is only one version of down().
A unique operation, downgrade_write(), dynamically converts an acquired write lock into a read lock.
Unless your code's reads and writes can be clearly and cleanly separated, it is best not to use reader-writer semaphores.
/* the dynamic version is init_rwsem(struct rw_semaphore *sem) */
static DECLARE_RWSEM(mr_rwsem);

/* attempt to acquire the semaphore for reading ... */
down_read(&mr_rwsem);

/* critical region (read only) ... */

/* release the semaphore */
up_read(&mr_rwsem);

/* ... */

/* attempt to acquire the semaphore for writing ... */
down_write(&mr_rwsem);

/* critical region (read and write) ... */

/* release the semaphore */
up_write(&mr_rwsem);
lock_kernel();
/*
 * Critical section, synchronized against all other BKL users...
 * Note, you can safely sleep here and the lock will be transparently
 * released. When you reschedule, the lock will be transparently
 * reacquired. This implies you will not deadlock, but you still do
 * not want to sleep if you need the lock to protect data here!
 */
unlock_kernel();
BKL Methods
Function
Description
lock_kernel()
Acquires the BKL.
unlock_kernel()
Releases the BKL.
kernel_locked()
Returns nonzero if the lock is held and zero otherwise. (UP always returns nonzero.)
Sequential Locks (Seq Locks)
Seq locks: locks that favor writers.
How seq locks work:
They rely mainly on a sequence counter.
Whenever the data in question is written to, a lock is obtained and the sequence value is incremented.
The sequence value is read both before and after the data is read.
If the two values are the same, the read was not interrupted by a write.
If the value read is even, no write is in progress (acquiring the write lock makes the counter odd; releasing it makes it even again).
The write lock is always successfully acquired so long as there are no other writers.
Readers do not affect the write lock.
Pending writers cause the read loop to repeat, until no writer is holding the lock.
Ideal scenarios for a seq lock:
Your data has many readers.
Your data has few writers.
Although there are few writers, you want to favor writes over reads and never allow readers to starve writers.
Your data is simple, such as a simple structure or even a single integer, in cases where atomic variables cannot be used.
Seq Lock Usage Example
Data structure
include/linux/seqlock.h
...
typedef struct {
	struct seqcount seqcount;
	spinlock_t lock;
} seqlock_t;

/*
 * These macros triggered gcc-3.x compile-time problems.  We think these are
 * OK now.  Be cautious.
 */
#define __SEQLOCK_UNLOCKED(lockname)			\
	{						\
		.seqcount = SEQCNT_ZERO(lockname),	\
		.lock = __SPIN_LOCK_UNLOCKED(lockname)	\
	}

#define seqlock_init(x)					\
	do {						\
		seqcount_init(&(x)->seqcount);		\
		spin_lock_init(&(x)->lock);		\
	} while (0)

#define DEFINE_SEQLOCK(x) \
		seqlock_t x = __SEQLOCK_UNLOCKED(x)
...
int cpu;

/* disable kernel preemption and set "cpu" to the current processor */
cpu = get_cpu();

/* manipulate per-processor data ... */

/* reenable kernel preemption, "cpu" can change and so is no longer valid */
put_cpu();
Kernel Preemption-Related Methods
Function
Description
preempt_disable()
Disables kernel preemption by incrementing the preemption counter
preempt_enable()
Decrements the preemption counter and checks and services any pending reschedules if the count is now zero
preempt_enable_no_resched()
Enables kernel preemption but does not check for any pending reschedules