
Understanding Locks, Semaphores, Latches, Mutexes and Conditions
By: David Fraser

Locks, semaphores, latches, mutexes, and conditions are five terms that are often misunderstood and confused. This is partly because they lack documentation, but also because the terms are often interchanged both inside and outside of Informix. They can also mean different things within the Informix product than they might have meant in other classes you have attended, which adds another level of confusion. The intention here is first to define the basics of each of these terms in relation to Informix and then to see how each of them is currently used within IDS. Some discussion will also be directed toward how these mechanisms were used in past versions as an aid to understanding.

Locks

Within the context of Informix, a lock is used to reserve access to a database object. This can be a database, table, page, row, key, or a number of bytes on a page. These locks are tracked in a table in shared memory, and access to this table is controlled by mutexes. It is possible for multiple sessions to have a lock on the same database resource. These locks can be seen by using the onstat -k command. More about locking is covered in the next chapter.

Semaphores

A semaphore is an operating system resource. Semaphores are created in sets, with the creating program specifying how many semaphores will be in the set. This is done by an oninit process using the semget() call. The creating program passes two arguments, a key to be associated with the semaphore group and the requested number of semaphores. Once the semaphores are created it is up to the creating program to manage them. There are operating system limitations, which are tunable, on the number of semaphores in a set and the total number of semaphores that can exist on the system at any given time. Operations are performed on the semaphores using the semctl() and semop() calls. One of the more common ways to use a semaphore is to put a process to sleep on the semaphore, waiting for a particular event to wake it up. Informix uses semaphores in this way for a VP process that does not have any further work to do. The UNIX ipcs command shows information about the semaphores in use on the system; there is no direct way to tell which ones are associated with a particular IDS instance. Here is an example of the output from ipcs:
% ipcs -as
IPC status from <running system> as of Wed Feb 30 12:49:12 2008
T  ID  KEY       MODE        OWNER  GROUP     CREATOR  CGROUP    NSEMS
Semaphores:
s  0   00000000  --ra-ra---  root   informix  root     informix  8
s  1   00000000  --ra-ra-ra  root   informix  root     informix  25
s  2   00000000  --ra-ra-ra  root   informix  root     informix  25
s  3   00000000  --ra-ra-ra  root   informix  root     informix  2
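To make the semget()/semop() description above concrete, here is a hedged, minimal C sketch of the pattern: one semop() posts a semaphore to signal an event and another decrements it to wait. The key value, set size, permissions, and overall shape of the program are illustrative assumptions, not Informix code.

/*
 * Hedged sketch only: creating a semaphore set and sleeping/waking on
 * one of its semaphores with semget()/semop(). In the engine the post
 * and the wait would be issued by different processes.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
    /* Create (or attach to) a set of 8 semaphores under an arbitrary key. */
    int semid = semget((key_t)0x52500001, 8, IPC_CREAT | 0660);
    if (semid == -1) {
        perror("semget");
        return 1;
    }

    struct sembuf wake_op = { 0, +1, 0 };   /* sem_num, sem_op, sem_flg */
    struct sembuf wait_op = { 0, -1, 0 };   /* blocks until value > 0 */

    /* Post first, then wait, so the example runs without blocking. */
    if (semop(semid, &wake_op, 1) == -1)
        perror("semop (wake)");
    if (semop(semid, &wait_op, 1) == -1)
        perror("semop (wait)");

    /* Remove the set so the example leaves no IPC objects behind. */
    if (semctl(semid, 0, IPC_RMID) == -1)
        perror("semctl (IPC_RMID)");

    return 0;
}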

Latches

A latch was the first method used in Informix products to protect shared memory resources from being accessed by multiple users at one time. Each process accessing shared memory has to know and follow the same protocol to prevent corruption. The latch structure looks like this:
typedef struct {
    VOLATILE mt_primitive_lock_t plock;  /* primitive lock */
    unsigned long nwait;                 /* # of times had to wait */
    unsigned long nloops;                /* # of spin loops in wait */
#ifdef SPINCHECK
    void *holder;                        /* holder of the spin lock */
#endif
} LOCK_T;
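The acquisition protocol described in the next two paragraphs can be pictured roughly as follows. This is a hedged sketch only: the demo_* names are invented, and a GCC builtin stands in for the assembly test-and-set routines the engine actually uses.

/* Hedged sketch of spin-style latch acquisition, loosely following the
 * LOCK_T layout above. All demo_* names are invented for illustration. */
typedef struct {
    volatile unsigned char plock;   /* primitive lock: 0 = free, 1 = held */
    unsigned long nwait;            /* # of times had to wait */
    unsigned long nloops;           /* # of spin loops in wait */
} demo_latch_t;

#define DEMO_SPINS 100              /* do-nothing loops between retries */

void demo_latch_acquire(demo_latch_t *l)
{
    /* Atomically set plock to 1; a non-zero return value means the
     * latch was already held by someone else. */
    while (__sync_lock_test_and_set(&l->plock, 1) != 0) {
        l->nwait++;                              /* record the collision */
        for (int i = 0; i < DEMO_SPINS; i++)
            l->nloops++;                         /* tight do-nothing spin */
        /* a pre-IDS process might instead sleep on a semaphore here and
         * rely on the releasing process to wake it */
    }
}

void demo_latch_release(demo_latch_t *l)
{
    __sync_lock_release(&l->plock);              /* set plock back to 0 */
    /* the releaser is then responsible for notifying any waiters */
}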

Notice that the name of the Informix latch structure is LOCK_T. This presents just one of the many potential points of confusion. This method of protecting shared memory was used exclusively in pre-IDS engines. A single process obtained a plock, through an assembly routine on most machines, and could then change the resource that the latch was protecting. Plock is defined as a char(1) on most ports. If plock was set to one, the latch was in use; the waiter incremented the nwait flag and set the waiting field in its user structure to the address of the resource it was waiting for. Once the holder released the latch, it was its responsibility to go through the wait list in the user table and notify all of the users that were waiting for this latch. There was no order in which processes got the released latch: one user obtained the latch and the others reset themselves to wait.

When a process is waiting, it has to do something. In most pre-IDS versions the engine process immediately put itself to sleep on a semaphore, and it was the responsibility of the process releasing the latch to wake it. In later versions, as multiprocessor machines became more mainstream, the process would instead spin in hopes of getting the latch the next time it checked. This spinning is a tight loop in the code that does nothing; every so many loops it tries for the latch again. This is more CPU intensive, but requires less operating system overhead than swapping the process in and out as it goes to sleep on the semaphore. Latches are still used just as they were in some places, but they have also been enhanced and used as part of a new mechanism.

Mutexes

The mutex can be thought of as an upscaled latch. The primary disadvantage of a latch was that it contained no method of queuing waiting threads; the mutex structure does this. It is really a latch structure with added information. Notice that the older LOCK_T structure is present in the new MT_MUTEX structure.
struct _MT_MUTEX {
    LOCK_T lock;                      /* mutex lock */
    short flags;                      /* see MTSET_MUTEX defines */
    short lkcount;                    /* lock count by same thread */
    TCB *waiting;                     /* queue of waiting threads */
    TCB *holder;                      /* owner of mutex lock */
    LINK(MT_MUTEX) mutex_link;        /* link for mutex list */
    mint mutex_id;                    /* unique identifier */
    char mu_name[MT_NAME_SIZE+1];     /* user supplied name - may be null */
    short type_id;                    /* type identifier */
    RSTAT_T *rs;                      /* statistics structure */
};
The mutexes for the IDS instance form a linked list, stored in the element mutex_link. The primary element is lock, the old latch type. This is the latch that a thread must acquire before it can check the owner (holder) field to see whether the mutex is in use. The mutex has also been enhanced with a queue of waiters (*waiting), which has its own latch (waitlock) to protect access to it. In addition, a number of other fields and some statistics have been added. The threads are awakened and given the resource in the order in which they requested it. It is the responsibility of the thread releasing the resource to wake the first thread on the queue, change the head of the queue, and put that thread on the ready queue.
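A hedged sketch of this acquire/release protocol follows, reusing the hypothetical demo_latch_t from the latch sketch above. The demo_tcb_t type is a stand-in for the engine's TCB, the scheduler interaction is shown only as comments, and the structure is deliberately simplified relative to MT_MUTEX.

/* Hedged sketch of the mutex protocol described above: an inner latch
 * protects the holder and the FIFO wait queue, and the releasing thread
 * hands the mutex to the first waiter. */
typedef struct demo_tcb {
    struct demo_tcb *next;           /* next thread in a wait queue */
} demo_tcb_t;

typedef struct {
    demo_latch_t lock;               /* inner latch (see latch sketch) */
    demo_tcb_t  *holder;             /* current owner, NULL if free */
    demo_tcb_t  *waiting;            /* FIFO queue of waiting threads */
} demo_mutex_t;

void demo_mutex_acquire(demo_mutex_t *m, demo_tcb_t *self)
{
    demo_latch_acquire(&m->lock);
    if (m->holder == NULL) {                 /* uncontended: just take it */
        m->holder = self;
        demo_latch_release(&m->lock);
        return;
    }
    demo_tcb_t **tail = &m->waiting;         /* contended: queue at the tail */
    while (*tail != NULL)
        tail = &(*tail)->next;
    self->next = NULL;
    *tail = self;
    demo_latch_release(&m->lock);
    /* the thread would now wait in the appropriate VP queue until the
     * releasing thread makes it the holder and puts it on the ready queue */
}

void demo_mutex_release(demo_mutex_t *m)
{
    demo_latch_acquire(&m->lock);
    demo_tcb_t *first = m->waiting;
    if (first != NULL) {
        m->waiting = first->next;            /* pop the first waiter */
        m->holder  = first;                  /* hand the mutex over */
        /* ...and put "first" on the ready queue of its VP */
    } else {
        m->holder = NULL;                    /* no waiters: simply free it */
    }
    demo_latch_release(&m->lock);
}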

The spinning mechanism described under latches is still used, but only to acquire one of the latches in the mutex. The threads are never put to sleep on a semaphore to wait, since there are now appropriate VP queues for them to wait in. The statistics structure referenced by MT_MUTEX is shown below for reference.
typedef struct _rstat_t {
    unsigned long nwaits;            /* number of waits */
    unsigned long nservs;            /* number of services */
    unsigned long current_qlen;      /* current queue length */
    unsigned long total_qlen;        /* total queue length */
    unsigned long max_qlen;          /* maximum queue length */
    double wait_time;                /* cumulative wait time */
    double serv_time;                /* cumulative service time */
    unsigned long max_wait_time;     /* maximum wait time */
} RSTAT_T;

Conditions

A condition is actually a special type of mutex. While mutexes are used to manage synchronous access to a resource, a condition defines a particular event that must occur before waiting threads can continue. A checkpoint is a good example of a condition. During a checkpoint, all user threads are queued by a condition wait structure until the completion of the checkpoint. Once the conditional test becomes true, in this case the completion of the checkpoint, those threads waiting for the condition can proceed. Here is the structure of a condition:
struct _MT_CONDITION {
    LOCK_T lock;
    TCB *waiting;
    mint type_id;                     /* type identifier */
    mint cond_id;                     /* unique identifier */
    LINK(MT_CONDITION) cond_link;     /* link for condition list */
    char co_name[MT_NAME_SIZE+1];     /* user supplied name - may be null */
    RSTAT_T *rs;                      /* statistics structure */
};
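Continuing the same hypothetical demo_* sketch, a condition can be pictured as a wait list plus a broadcast that runs when the event, for example checkpoint completion, occurs. The names and layout are invented for illustration, not the MT_CONDITION implementation.

/* Hedged sketch of a condition as described above, reusing the
 * hypothetical demo_latch_t / demo_tcb_t types: threads queue on the
 * condition, and whoever completes the event wakes every waiter. */
typedef struct {
    demo_latch_t lock;              /* protects the wait list */
    demo_tcb_t  *waiting;           /* threads waiting for the event */
} demo_cond_t;

void demo_cond_wait(demo_cond_t *c, demo_tcb_t *self)
{
    demo_latch_acquire(&c->lock);
    self->next = c->waiting;        /* push self onto the wait list */
    c->waiting = self;
    demo_latch_release(&c->lock);
    /* the thread would now wait in a VP queue until the event occurs */
}

void demo_cond_broadcast(demo_cond_t *c)
{
    demo_latch_acquire(&c->lock);
    demo_tcb_t *t = c->waiting;     /* detach the whole wait list */
    c->waiting = NULL;
    demo_latch_release(&c->lock);
    while (t != NULL) {             /* wake every waiter in turn */
        demo_tcb_t *next = t->next;
        /* put t back on the ready queue of its VP */
        t = next;
    }
}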

Like mutexes, the MT_CONDITION structure also contains a latch structure and a wait list.

Monitoring Mutexes

There are three onstat options that display mutexes that are in use. The -s option reports on only a subset of the available mutexes. The -g lmx and -g wmx options show all mutexes in the instance: the first lists mutexes that have an owner and the second lists mutexes with waiters. There are seldom entries in these onstat outputs, simply because mutexes are meant to be held for a very short time, so finding them in an onstat output is unlikely. These are the mutexes that are checked for the onstat -s output:

Mutex name      What they protect
userthreads     userthread table
trans           transaction table
lockfr          removing or adding a lock to the lock free list
lockdl          used for synchronization of deadlock detection
ckpt            checkpoint information
archive         archive information
dbspace         dbspace table
chunk           chunk table
loglog          logical log
physlog         physical log
physb1          physical log buffer 1
physb2          physical log buffer 2
flushctl        page cleaner table
altlatch        counter of alter tables
timestamp       time stamp changes
trace           traces, if turned on
fllushr%d       page cleaner
LRU%d           LRU queue
lh[%d]          lock hash bucket
txlk[%d]        transaction table entries
bh[%d]          buffer hash buckets
bf[%d]          buffers
bbf[%d]         big buffer latches

%d is the element number in the table; the entries after "trace" are repeated for each element of the corresponding table. Each time onstat -s is executed these mutexes are checked for either an owner or a waiter, and if either exists they are shown. Here is an example comparing onstat -s and onstat -g lmx output taken at the same point in time. Notice how the lists differ, even though the same mutexes are held internally.
onstat -s

Informix Dynamic Server 2000 Version 9.40.UC2 -- On-Line -- Up 01:35:22 -- 8976 Kbytes

Latches with lock or userthread set
name     address   lock  wait  userthread
bh[5]    a050f0c   1     0     a25f180

onstat -g lmx

Informix Dynamic Server 2000 Version 9.40.UC2 -- On-Line -- Up 01:35:22 -- 8976 Kbytes

Locked mutexes:
mid    addr      name        holder  lkcnt  waiter  waittime
2068   a050f0c   hash        33      0
2812   a14b970   ddh chain   33      1

It is also interesting to note that the same mutex, address a050f0c, has a different name in the two lists. That is simply a difference in the naming conventions of the two routines that print them out.

Monitoring Conditions

To display a list of conditions with waiters, use the onstat -g con command. You are more likely to see conditions displayed than mutexes because the duration of a condition wait is an undetermined length of time: the time spent in the wait queue is not based on synchronous access to a resource, but on an event that must occur for the condition to be met and, therefore, allow waiters to continue. You can also see threads and sessions that are waiting on conditions by using the other onstat commands shown above. Later, you will see examples of how to use these commands to trace a condition to a thread, session, and process. Here is a simple example of the onstat output listing conditions on an inactive IDS instance:
% onstat -g con

Informix Dynamic Server 2000 Version 9.40.UC2 -- On-Line -- Up 01:35:22 -- 8976 Kbytes

Conditions with waiters:
cid    addr      name      waiter  waittime
1650   a4df6b0   sm_read   224     76

Monitoring Semaphores

In IDS there are three ways that semaphores are used in DSA. The first is for synchronization between users connecting through a shared memory connection. The semaphores are used to signal whether a message has been left in the communication segment by either the server or the client for the other; if the semaphore is set, a message is waiting. Semaphores for this use are allocated using this formula:

#_shared_memory_users_per_poll_thread + 2

This number is allocated for each shared memory poll thread that is configured, and the semaphores for each poll thread are allocated in a separate group.

The second use for semaphores is for putting VPs to sleep when they do not have any work to do. A thread arriving in the ready queue for the VP triggers the VP to wake up. The equation for allocating these semaphores is:

total = ADM + MSC + CPUVPS + AIOVPS + PIO + LIO + 2_if_MIRROR + ADT + OPT + total_NET

ADT and OPT are optional. Two additional semaphores are allocated if the MIRROR flag in the onconfig file is set; these are for possible use with the additional physical and logical log VPs. The semaphores for the VPs are allocated in one set, assuming the operating system permits it. If a VP is added or dropped, a semaphore set with a single semaphore is allocated or deallocated.
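As a purely hypothetical worked example of the formula above, an instance with one ADM VP, one MSC VP, four CPU VPs, two AIO VPs, one PIO VP, one LIO VP, mirroring enabled, auditing (ADT) and optical (OPT) not in use, and two network VPs would allocate:

total = 1 + 1 + 4 + 2 + 1 + 1 + 2 + 0 + 0 + 2 = 14 semaphores in the VP set.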

This semaphore usage is shown in onstat -g sch:


% onstat -g sch

Informix Dynamic Server 2000 Version 9.40.UC2 -- On-Line -- Up 01:35:22 -- 8976 Kbytes

VP Scheduler Statistics:
vp    pid     class  semops  busy waits  spins/wait
1     17433   cpu    3936    0           0
2     17434   adm    0       0           0
3     17435   lio    9       0           0
4     17436   pio    9       0           0
5     17437   aio    31      0           0
6     17438   msc    2       0           0
7     17439   aio    15      0           0
8     17440   str    2       0           0

The final way that semaphores can be allocated is if the relay module is being used for communication to an older engine. In this case, two semaphores per relay module are allocated.
