Computer Science

Semaphore

Semaphore is a synchronization tool used in computer science to manage access to shared resources. It is a variable used to control access to a shared resource by multiple processes or threads. Semaphores can be used to prevent race conditions and to ensure that no more than a fixed number of processes or threads (exactly one, in the case of a binary Semaphore) access a shared resource at a time.

Written by Perlego with AI-assistance

8 Key excerpts on "Semaphore"

  • Beginning Linux Programming
    • Neil Matthew, Richard Stones(Authors)
    • 2011(Publication Date)
    • Wrox
      (Publisher)
    An important step forward in this area of concurrent programming occurred when Edsger Dijkstra, a Dutch computer scientist, introduced the concept of the Semaphore. As briefly mentioned in Chapter 12, a Semaphore is a special variable that takes only non-negative whole numbers and upon which programs can only act atomically. In this chapter we expand on that earlier simplified definition. We show in more detail how Semaphores function, and how the more general-purpose functions can be used between separate processes, rather than the special case of multi-threaded programs you saw in Chapter 12.
    A more formal definition of a Semaphore is a special variable on which only two operations are allowed; these operations are officially termed wait and signal . Because “wait” and “signal” already have special meanings in Linux programming, we’ll use the original notation:
    • P(Semaphore variable) for wait
    • V(Semaphore variable) for signal
    These letters come from the Dutch words for wait (passeren : to pass, as in a checkpoint before the critical section) and signal (vrijgeven : to give or release, as in giving up control of the critical section). You may also come across the terms “up” and “down” used in relation to Semaphores, taken from the use of signaling flags.
    Semaphore Definition
    The simplest Semaphore is a variable that can take only the values 0 and 1, a binary Semaphore. This is the most common form. Semaphores that can take many positive values are called general Semaphores . For the remainder of this chapter, we concentrate on binary Semaphores.
    The definitions of P and V are surprisingly simple. Suppose you have a Semaphore variable sv . The two operations are then defined as follows:
    P(sv) If sv is greater than zero, decrement sv . If sv is zero, suspend execution of this process.
    V(sv) If some other process has been suspended waiting for sv , make it resume execution. If no process is suspended waiting for sv , increment sv .
    Another way of thinking about Semaphores is that the Semaphore variable, sv , is true when the critical section is available, is decremented by P(sv) so it’s false when the critical section is busy, and is incremented by V(sv) when the critical section is again available. Be aware that simply having a normal variable that you decrement and increment is not good enough, because you can’t express in C, C++, C#, or almost any conventional programming language the need to make a single, atomic operation of the test to see whether the variable is true , and if so change the variable to make it false.
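    The P/V discipline described above can be sketched with the POSIX sem_* API, where sem_wait plays the role of P and sem_post the role of V. This is a minimal illustration, not code from the book; the names sv, counter, and run_demo are ours.

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t sv;           /* the semaphore variable, initialized to 1 */
static long counter = 0;   /* the shared resource the semaphore protects */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sv);     /* P(sv): blocks while sv is zero, then decrements */
        counter++;         /* critical section: only one thread at a time */
        sem_post(&sv);     /* V(sv): increments, waking one waiter if any */
    }
    return NULL;
}

/* Run two competing threads and return the final counter value. */
long run_demo(void)
{
    pthread_t t1, t2;
    counter = 0;
    sem_init(&sv, 0, 1);   /* 0 = shared between threads; initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&sv);
    return counter;        /* 200000: no increments are lost */
}
```

    Without the sem_wait/sem_post pair, the unsynchronized read-modify-write on counter could lose updates, which is exactly the atomicity problem the excerpt says a plain variable cannot solve.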
  • Patterns for Parallel Software Design
    • Jorge Luis Ortega-Arjona(Author)
    • 2010(Publication Date)
    • Wiley
      (Publisher)
    • Each software component should be able to enter its critical section and modify the shared variable if and only if this access is confirmed to be safe and secure. Any other software component should proceed or wait depending on whether the original component is executing the critical section.
    • The integrity of the values within the shared variable should be preserved during the entire communication.
    Solution
    Use Semaphores for synchronizing access to the critical section associated with a shared variable, process or resource. A Semaphore is a type of variable or abstract data type, normally represented by a non-negative integer and a queue, with the following atomic operations [Dij68]:
    • wait (Semaphore): If the value of the Semaphore is greater than zero, then decrement it and allow the software component to continue, else suspend the software component process, noting that it is blocked on the Semaphore.
    • signal (Semaphore): If there are no software component processes waiting on the Semaphore, then increment it, else free one process, which continues at the instruction immediately following its wait ( ) instruction.
    Structure
    Figure 5.1 illustrates the concept of a Semaphore as an abstract data type with a value, a queue pointer and an interface composed of two operations: signal ( ) and wait ( ).
    Figure 5.1 : A diagram representing the Semaphore as an abstract data type
    If Semaphores are available in a programming language, their typical use is as shown in Figure 5.2 .
    Figure 5.2 : Pseudocode for typical use of a Semaphore, synchronizing the access to shared variables
    Dynamics
    Semaphores are common synchronization mechanisms that can be used in a number of different ways. This section discusses the case in which Semaphores are used for mutual exclusion and synchronization of cooperating software components.
    Mutual exclusion. Figure 5.3 shows a UML sequence diagram showing three concurrent or parallel software components, A, B and C, which share a data structure. This shared data structure (not shown in the diagram) is protected by a Semaphore sem, which is initialized with a value of 1.
    The software component A first executes wait (sem) and enters its critical section, which accesses the shared data structure. While A stays in its critical section, B, and later C, try to enter their respective critical sections for the same shared data structure, executing wait (sem). Note that the three software components can proceed concurrently, but within their critical sections only one software component executes at a time, as Figure 5.3 shows.
  • The Design and Implementation of the RT-Thread Operating System
    When the thread holding the Semaphore completes the work it is processing, it releases this Semaphore. The thread waiting on this Semaphore is then awakened and can perform the next part of the work. This can also be seen as using the Semaphore as a work-completion flag: the thread holding the Semaphore completes its own work and then notifies the thread waiting for the Semaphore to continue with the next part of the work.

    Lock

    A single lock is often applied to multiple threads accessing the same shared resource (in other words, a critical region). When a Semaphore is used as a lock, the Semaphore resource instance should normally be initialized to 1, indicating that the system has one resource available by default. Because the Semaphore value always varies between 1 and 0, this type of lock is also called a binary Semaphore. As shown in Figure 6.4 , when a thread needs to access a shared resource, it must first obtain the resource lock. Once this thread holds the lock, other threads that try to access the shared resource suspend, because the lock is already taken (the Semaphore value is 0). When the thread holding the Semaphore finishes its work and exits the critical region, it releases the Semaphore, unlocking the lock, and the first thread suspended on the lock is awakened to gain access to the critical region.
    FIGURE 6.4 Lock.

    Synchronization between Thread and Interrupt

    A Semaphore can also be easily applied to synchronize a thread with an interrupt, for example when an interrupt triggers and its service routine must notify a thread to perform the corresponding data processing. In this case, the initial value of the Semaphore can be set to 0. When the thread tries to take this Semaphore, it will suspend on it, since the initial value is 0, until the Semaphore is released. When the interrupt is triggered, hardware-related actions are performed first, such as reading the corresponding data from the hardware I/O port and acknowledging the interrupt to clear the interrupt source; the service routine then releases the Semaphore to wake up the corresponding thread for subsequent data processing. For example, the processing of FinSH threads is shown in Figure 6.5
  • Real-Time Embedded Multithreading Using ThreadX
    • Edward Lamie(Author)
    • 2019(Publication Date)
    • CRC Press
      (Publisher)
    In most cases, counting Semaphores used for mutual exclusion have an initial value of 1, meaning that only one thread can access the associated resource at a time. Counting Semaphores whose values are restricted to 0 or 1 are commonly called binary Semaphores. If a binary Semaphore is used, the user must prevent the same thread from performing a get operation on a Semaphore it already controls. A second get would fail and could suspend the calling thread indefinitely, as well as make the resource permanently unavailable.
    Counting Semaphores can also be used for event notification, as in a producer-consumer application. In this application, the consumer attempts to get the counting Semaphore before “consuming” a resource (such as data in a queue); the producer increases the Semaphore count whenever it makes something available. In other words, the producer places instances in the Semaphore and the consumer attempts to take instances from the Semaphore. Such Semaphores usually have an initial value of 0 and do not increase until the producer has something ready for the consumer.
    Applications can create counting Semaphores either during initialization or during runtime. The initial count of the Semaphore is specified during creation. An application may use an unlimited number of counting Semaphores. Application threads can suspend while attempting to perform a get operation on a Semaphore with a current count of zero (depending on the value of the wait option). When a put operation is performed on a Semaphore and a thread is suspended on that Semaphore, the suspended thread completes its get operation and resumes. If multiple threads are suspended on the same counting Semaphore, they resume in the same order they occur on the suspended list (usually FIFO order). An application can cause a higher-priority thread to be resumed first by calling tx_semaphore_prioritize prior to the Semaphore put call.
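    The producer-consumer use of a counting Semaphore described above can be sketched in POSIX terms (this is not ThreadX code): sem_post is the producer's "put," sem_wait is the consumer's "get," and the count starts at 0 because nothing is available yet. The queue layout and the name run_producer_consumer are our own illustration.

```c
#include <pthread.h>
#include <semaphore.h>

#define N_ITEMS 5

static sem_t items;               /* counts available items; starts at 0 */
static int queue_buf[N_ITEMS];    /* one-producer, one-consumer queue */
static int head = 0, tail = 0;
static int consumed_sum = 0;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 1; i <= N_ITEMS; i++) {
        queue_buf[tail++ % N_ITEMS] = i;  /* place an instance in the queue */
        sem_post(&items);                 /* "put": raise the count by one */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&items);                 /* "get": suspends while count is 0 */
        consumed_sum += queue_buf[head++ % N_ITEMS];
    }
    return NULL;
}

/* Run one producer and one consumer; return the sum of consumed items. */
int run_producer_consumer(void)
{
    pthread_t p, c;
    head = tail = consumed_sum = 0;
    sem_init(&items, 0, 0);               /* initial value 0: nothing to consume */
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&items);
    return consumed_sum;                  /* 1+2+3+4+5 = 15 */
}
```

    The head and tail indices need no extra locking here because each is touched by exactly one thread; the Semaphore alone orders each write before the matching read.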
  • The Industrial Information Technology Handbook
    • Richard Zurawski(Author)
    • 2018(Publication Date)
    • CRC Press
      (Publisher)
    An essential function of a multiprogrammed operating system is to allow processes to synchronize and exchange information; as a whole, these functions are known as IPC (InterProcess Communication). Many interprocess synchronization and communication mechanisms have been proposed and were objects of extensive theoretical study in the scientific literature. Among them, we recall:
    Semaphores
    A Semaphore, first introduced by Dijkstra in 1965, is a synchronization device with an integer value and on which the following two primitive, atomic operations are defined:
    The P operation, often called down or wait, checks if the current value of the Semaphore is greater than zero. If so, it decrements the value and returns to the caller; otherwise, the invoking process goes into the blocked state until another process performs a V on the same Semaphore;
    The V operation, also called up, post, or signal, checks whether there is any process currently blocked on the Semaphore. In this case, it wakes exactly one of them up, allowing it to complete its P; otherwise, it increments the value of the Semaphore. The V operation never blocks.
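    The P and V primitives just defined can be built from a mutex and a condition variable. The following sketch uses illustrative names (csem_t, csem_P, csem_V); it is not a real library API. One behavioral simplification, noted in the comments: this V always increments and signals, and a woken P re-checks and decrements, which has the same net effect as the wake-one-or-increment definition above.

```c
#include <pthread.h>

typedef struct {
    int value;                  /* the semaphore's integer value */
    pthread_mutex_t lock;       /* protects value */
    pthread_cond_t nonzero;     /* signaled when value becomes > 0 */
} csem_t;

void csem_init(csem_t *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

/* P (down/wait): block until value > 0, then decrement, atomically. */
void csem_P(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                      /* loop guards against spurious wakeups */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

/* V (up/signal): increment and wake one blocked waiter; never blocks.
 * (The woken waiter re-checks and decrements, matching the definition.) */
void csem_V(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}
```

    The while loop around pthread_cond_wait is essential: it re-tests the value after every wakeup, which is what makes the test-and-decrement of P effectively atomic.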
  • Embedded Software Development: The Open-Source Approach
    • Ivan Cibrario Bertolotti, Tingting Hu(Authors)
    • 2017(Publication Date)
    • CRC Press
      (Publisher)
    1. Any FreeRTOS primitive that might block the caller for any reason, even temporarily, or might require a context switch, must not be used within this kind of critical region. This is because blocking the only task allowed to run would completely lock up the system, and it is impossible to perform a context switch with the scheduler disabled.
    2. Protecting critical regions with a sizable execution time in this way would probably be unacceptable in many applications because it leads to a large amount of unnecessary blocking. This is especially true for high-priority tasks, because if one of them becomes ready for execution while a low-priority task is engaged in a critical region of this kind, it will not run immediately, but only at the end of the critical region itself.

    5.4 Message Passing

    The Semaphore-based synchronization and communication methods discussed in Section 5.3 rely on two distinct and mostly independent mechanisms to implement synchronization and communication among tasks. In particular:
    • Semaphores are essentially able to pass a synchronization signal from one task to another. For example, a Semaphore can be used to block a task until another task has accomplished a certain action and make sure they operate together in a correct way.
    • Data transfer takes place by means of shared variables because Semaphores, by themselves, are unable to do so. Even though Semaphores do have a value, it would be impractical to use it for data transfer, due to the way Semaphores were conceived and defined.
    Seen in a different way, the role of Semaphores is to coordinate and enforce mutual exclusion and precedence constraints on task actions, in order to ensure that their access to shared variables takes place in the right sequence and at the appropriate time. On the other hand, Semaphores are not actively involved in data transfer.
  • Real-Time Systems Development with RTEMS and Multicore Processors
    • Gedare Bloom, Joel Sherrill, Tingting Hu, Ivan Cibrario Bertolotti(Authors)
    • 2020(Publication Date)
    • CRC Press
      (Publisher)
    To informally check that the fourth, and last, correctness condition is also satisfied, it is sufficient to observe that the definition of a Semaphore and its primitives completely abstracts away from any architectural details about the system and does not contain any reference to any task or processor characteristics. Similarly, Semaphore implementations are also required not to introduce such dependencies to be considered adequate for use.

    7.2.3 SYNCHRONIZATION SEMAPHORES

    Besides mutual exclusion, a Semaphore is also useful to implement synchronization among tasks, for instance, when we want to enforce a precedence constraint and block a task τ2 until a certain event, generated by another task τ1 , occurs. In this case, the Semaphore is often called synchronization Semaphore.
    As shown in Figure 7.5 , when task τ1 makes use of a shared object to transfer data to τ2 , we must make sure that τ2 reads from the shared object only after τ1 has updated it completely. In its simplest form, this precedence constraint can be enforced by means of a Semaphore s initialized to zero. Then, task τ2 performs a P(s) before reading from the shared object, while τ1 calls V(s) after updating it. In this way, assuming that τ2 executes first:
    • Task τ2 blocks in P(s) because s.v is zero when it invokes the primitive.
    • When τ1 is done with the update, it executes V(s) , thus unblocking τ2 .
    FIGURE 7.5 Using a synchronization Semaphore to enforce a precedence constraint.
    In this way, τ2 has a consistent view of the shared object, which contains all the new data that τ1 wrote into it, when it eventually executes. Moreover, s.v is still at zero after both P() and V() have been completed, and hence, the Semaphore is ready for the next synchronization round.
    The example also emphasizes the important role the value of a Semaphore plays in remembering and keeping track of past events. This becomes evident if we analyze what happens if τ1 runs before τ2 has had the opportunity of blocking:
    • When τ1 executes V(s) , it finds that s.q is empty. Therefore, it increments s.v to 1.
    • When eventually τ2 invokes P(s) it finds s at 1, meaning that the synchronization condition it would like to wait for has already been fulfilled in the past. Accordingly, it continues immediately, without blocking, after bringing s.v back to zero.
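    The τ1/τ2 precedence constraint above can be sketched with POSIX semaphores (sem_wait as P, sem_post as V). The function name run_once and the value 42 are illustrative, not from the book.

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t s;            /* synchronization semaphore, initialized to 0 */
static int shared = 0;     /* shared object written by tau1, read by tau2 */
static int observed = 0;

static void *tau1(void *arg)
{
    (void)arg;
    shared = 42;           /* update the shared object completely... */
    sem_post(&s);          /* ...then V(s), unblocking tau2 (or raising s to 1) */
    return NULL;
}

static void *tau2(void *arg)
{
    (void)arg;
    sem_wait(&s);          /* P(s): blocks until tau1 has posted */
    observed = shared;     /* guaranteed to see the completed update */
    return NULL;
}

/* Run both tasks once and return what tau2 observed. */
int run_once(void)
{
    pthread_t a, b;
    observed = shared = 0;
    sem_init(&s, 0, 0);    /* value 0: the event has not happened yet */
    pthread_create(&b, NULL, tau2, NULL);
    pthread_create(&a, NULL, tau1, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&s);
    return observed;       /* 42 in either scheduling order */
}
```

    The result is the same whichever task runs first: if τ1 posts before τ2 waits, the semaphore's value of 1 remembers the event, exactly as the excerpt describes.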
  • Extreme C: Taking you to the limit in Concurrency, OOP, and the most advanced capabilities of C
    …mutual exclusion, hence “mutex.”
    In some scenarios, however, you might want more than one thread to enter the critical section and operate on the shared resource. This is the scenario in which you should use general Semaphores .
    Before we go into an example regarding general Semaphores, let’s look at an example using a binary Semaphore (or a mutex). We won’t be using the pthread_mutex_* functions in this example; instead, we will use the sem_* functions, which expose Semaphore-related functionality.
    Binary Semaphores
    The following code is the solution made using Semaphores for example 15.3 . As a reminder, it involved two threads; each of them incrementing a shared integer by a different value. We wanted to protect the data integrity of the shared variable. Note that we won't be using POSIX mutexes in the following code:
    #include <stdio.h>
    #include <stdlib.h>
    // The POSIX standard header for using the pthread library
    #include <pthread.h>
    // Semaphores are not exposed through pthread.h
    #include <semaphore.h>

    // The main pointer addressing a semaphore object used
    // to synchronize the access to the shared state.
    sem_t *semaphore;

    void* thread_body_1(void* arg) {
      // Obtain a pointer to the shared variable
      int* shared_var_ptr = (int*)arg;
      // Waiting for the semaphore
      sem_wait(semaphore);
      // Increment the shared variable by 1 by writing directly
      // to its memory address
      (*shared_var_ptr)++;
      printf("%d\n", *shared_var_ptr);
      // Release the semaphore
      sem_post(semaphore);
      return NULL;
    }

    void* thread_body_2(void* arg) {
      // Obtain a pointer to the shared variable
      int* shared_var_ptr = (int*)arg;
      // Waiting for the semaphore
      sem_wait(semaphore);
      // Increment the shared variable by 2 by writing directly
      // to its memory address
      (*shared_var_ptr) += 2;
      printf("%d\n", *shared_var_ptr);
      // Release the semaphore
      sem_post(semaphore);
      return NULL;
    }

    int main(int argc, char** argv) {
      // The shared variable
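    The excerpt breaks off inside main. As a hedged sketch of how such a program could be completed (this completion is ours, not the book's; run_example and the statically allocated semaphore are our own choices), the missing part would create the semaphore with an initial value of 1, spawn both threads against the shared variable, join them, and clean up:

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t sem_obj;
static sem_t *semaphore = &sem_obj;  /* pointer form, matching the excerpt */

static void* thread_body_1(void* arg) {
  int* shared_var_ptr = (int*)arg;
  sem_wait(semaphore);               /* P: enter the critical section */
  (*shared_var_ptr)++;
  printf("%d\n", *shared_var_ptr);
  sem_post(semaphore);               /* V: leave the critical section */
  return NULL;
}

static void* thread_body_2(void* arg) {
  int* shared_var_ptr = (int*)arg;
  sem_wait(semaphore);
  (*shared_var_ptr) += 2;
  printf("%d\n", *shared_var_ptr);
  sem_post(semaphore);
  return NULL;
}

/* Create the semaphore and both threads; return the final shared value. */
int run_example(void) {
  int shared_var = 0;                /* the shared variable */
  pthread_t t1, t2;
  sem_init(semaphore, 0, 1);         /* binary semaphore: initial value 1 */
  pthread_create(&t1, NULL, thread_body_1, &shared_var);
  pthread_create(&t2, NULL, thread_body_2, &shared_var);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  sem_destroy(semaphore);
  return shared_var;                 /* 3 = 1 + 2, in either interleaving */
}
```

    Whichever thread runs first, the semaphore serializes the two read-modify-write sequences, so the final value is always 3; only the order of the printed intermediate values varies.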
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.