Computer Science

Concurrent Programming

Concurrent programming is a programming paradigm that allows multiple tasks to be executed simultaneously. It involves designing and implementing programs that can handle multiple threads of execution, each of which can run independently and concurrently with other threads. This approach can improve the performance and responsiveness of software applications.

Written by Perlego with AI-assistance

11 Key excerpts on "Concurrent Programming"

  • Creating Components

    Creating Components

    Object Oriented, Concurrent, and Distributed Computing in Java

    The rest of this text is devoted to illustrating how to properly implement and control concurrency in a program and how to use concurrency with objects in order to simplify and organize a program. However, before the use of concurrency can be described, a working definition of concurrency, particularly in relationship to objects, must be given. Developing that working definition is the purpose of the rest of this chapter.

    1.3.2 A Definition of Concurrent Programming

    Properly defining a concurrent program is not an easy task. The simplest definition would be that two or more programs are running at the same time, but this definition is far from satisfactory. Consider, for example, Exhibit 1 (Program1.1). This program has been described as concurrent, in that the GUI thread runs separately from the main thread and can thus set the value of the stopProgram variable outside of the calculation loop in the main thread. However, if this program is run on a computer with one Central Processing Unit (CPU), as most Windows computers are, it is impossible for more than one instruction to be run at a time; thus, by the simple definition given above, this program is not concurrent.
    Another problem with this simple definition is illustrated by the example of two computers, one running a word processor in San Francisco and another running a spreadsheet in Washington, D.C. By the definition of a concurrent program above, these are concurrent. However, because the two programs are in no way related, the fact that they are concurrent is really meaningless.
    It seems obvious that a good definition of Concurrent Programming would define the first example as concurrent and the second as not concurrent; therefore, something is fundamentally wrong with this simple definition of Concurrent Programming. In fact, the simple-minded notion of concurrency as two activities occurring at the same time is a poor foundation on which to attempt to build a better definition of the term concurrency.
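    To make the example above concrete, here is a minimal Java sketch of that kind of program; it is not the book's actual Program1.1, and the class and variable names are only illustrative. One thread runs a calculation loop while a second thread, standing in for the GUI thread, sets a shared stop flag.

    // Illustrative sketch only, not the book's Program1.1.
    public class StopFlagExample {
        // volatile so a write by one thread is visible to the other
        private static volatile boolean stopProgram = false;

        public static void main(String[] args) {
            // Stand-in for the GUI thread: after one second it requests a stop.
            Thread stopper = new Thread(() -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                stopProgram = true;
            });
            stopper.start();

            // The main calculation loop polls the flag on every iteration.
            long iterations = 0;
            while (!stopProgram) {
                iterations++;   // placeholder for real work
            }
            System.out.println("Stopped after " + iterations + " iterations");
        }
    }

    On a single-CPU machine the two threads are interleaved rather than truly simultaneous, which is exactly the tension the passage above is pointing at.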
  • Learning Scala Programming

    Concurrent Programming in Scala

    "Yesterday is not ours to recover, but today is to try and tomorrow is to win or lose." - Anonymous
    The idea that modern computers with multicore architectures give better performance is based on the fact that multiple processors can run separate processes simultaneously. Each process can run more than one thread to complete specific tasks. With this picture in mind, we can write programs with multiple threads working simultaneously to achieve better performance and responsiveness. We call this Concurrent Programming. In this chapter, our goal is to understand Scala's offerings for Concurrent Programming. There are multiple constructs we can use to write concurrent programs, and we'll learn about them in this chapter. Here is what we will cover:
    • Concurrent Programming
    • Building blocks of concurrency:
      • Process and threads
      • Synchronization and locks
      • Executor and ExecutionContext
      • Lock-free programming
    • Asynchronous programming using Futures and Promises
    • Parallel Collections
    Before we start learning about the ways we can write concurrent programs, it's important to understand the underlying picture. Let's start with Concurrent Programming itself and then go through the basic building blocks of concurrency.

    Concurrent Programming

    It's a programming approach where a set of computations can be performed simultaneously. These computations might share the same resources, such as memory. How is it different from sequential programming? In sequential programming, computations are performed one after another; in a concurrent program, more than one computation can be performed in the same time period.
    By executing multiple computations at once, we can perform multiple logical operations in the program at the same time, which can result in better performance: programs can run faster than before. This not only sounds cool; concurrency actually makes implementing real scenarios easier. Think about an internet browser: we can stream our favorite videos and download some content at the same time. The download thread does not affect the streaming of the video in any way. This is possible because content download and video streaming in a browser tab are separate logical parts of the program, and hence can run simultaneously.
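    As a small illustration of the browser idea in code (a hedged sketch in Java rather than the book's Scala, with made-up task names): two independent tasks, a simulated download and a simulated video stream, are handed to an executor so that both can make progress during the same time period.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class BrowserTasks {
        public static void main(String[] args) throws InterruptedException {
            // A pool with two worker threads, one per task.
            ExecutorService pool = Executors.newFixedThreadPool(2);

            // Two unrelated pieces of work; neither waits for the other.
            pool.submit(() -> System.out.println("downloading a file..."));
            pool.submit(() -> System.out.println("streaming a video..."));

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.SECONDS);
        }
    }

    Scala's own constructs for the same idea (Executor and ExecutionContext, Futures and Promises, parallel collections) are exactly what the chapter outline above goes on to cover.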
  • Real-Time Embedded Systems

    Real-Time Embedded Systems

    Open-Source Operating Systems Perspective

    • Ivan Cibrario Bertolotti, Gabriele Manduchi (Authors)
    • 2017 (Publication Date)
    • CRC Press (Publisher)
    3    Real-Time Concurrent Programming Principles

    CONTENTS

    3.1 The Role of Parallelism
    3.2 Definition of Process
    3.3 Process State
    3.4 Process Life Cycle and Process State Diagram
    3.5 Multithreading
    3.6 Summary
    This chapter lays the foundation of real-time Concurrent Programming theory by introducing what is probably its most central concept, that is, the definition of process as the abstraction of an executing program. This definition is also useful to clearly distinguish between sequential and concurrent programming, and to highlight the pitfalls of the latter.

    3.1 The Role of Parallelism

    Most contemporary computers are able to perform more than one activity at the same time, at least apparently. This is particularly evident with personal computers, in which users ordinarily interact with many different applications at the same time through a graphical user interface. In addition, even if this aspect is often overlooked by the users themselves, the same is true also at a much finer level of detail. For example, contemporary computers are usually able to manage user interaction while they are reading and writing data to the hard disk, and are actively involved in network communication. In most cases, this is accomplished by having peripheral devices interrupt the current processor activity when they need attention. Once it has finished taking care of the interrupting devices, the processor goes back to whatever it was doing before.
    A key concept here is that all these activities are not performed in a fixed, predetermined sequence, but they all seemingly proceed in parallel, or concurrently.
  • Java

    Java

    The Comprehensive Guide

    • Christian Ullenboom (Author)
    • 2022 (Publication Date)
    • SAP PRESS (Publisher)

    17     Introduction to Concurrent Programming

    “Because of the emergency here, people are going to try many different solutions in parallel.”—Raul Andino-Pavlovsky
    Java supports Concurrent Programming, which allows several programs to be run simultaneously. In this chapter you’ll learn how Java can use threads for running concurrent programs.

    17.1    Concurrency and Parallelism

    Computer systems solve problems in the real world, so we’ll also stay in the real world in our approach to Concurrent Programming environments. As you go through the world, you notice many things happening at the same time: The sun is shining, mopeds and cars race down the street, the radio is playing. Some people are talking, maybe some are eating, and dogs are romping around on the lawn. Not only do these things happen simultaneously, but manifold dependencies exist between these events, such as in waiting situations: At the red light, some cars are waiting, while at the green light people are crossing the street—when the signal changes, these other things also change.
    When many things happen simultaneously, we refer to an interacting system as concurrent. At the same time, processes can be executed in parallel. Some things only seem to happen in parallel, but in reality, they happen quickly one after the other. What we then perceive in these cases is called quasi-parallelism. If two people eat at the same time, for example, it is parallel, but if someone eats and breathes, it seems simultaneous from the outside, but isn’t. Swallowing and breathing are generally sequential. [216] Let’s transfer these concepts to software: The simultaneous processing of programs and use of resources is concurrent. Ultimately, depending on the technical conditions of the machine (i.e., hardware), this concurrency may actually be implemented by parallel processing—for example, across several processors or cores.
    In Java, concurrent programs are implemented by threads, where each thread corresponds to a task. Ideally, the processing of the threads also happens in parallel if the machine has multiple processors or cores. A program that is implemented concurrently can cut its working time in half with two processors or cores in parallel execution; however, work times don’t have to be halved: It’s still up to the operating system how it executes the threads.
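    A minimal sketch of the thread-per-task idea described above (the task bodies are just placeholders): two Thread objects are started and may end up on different cores, but when and where they actually run is left to the operating system.

    public class TwoTasks {
        public static void main(String[] args) throws InterruptedException {
            // Each thread corresponds to one task.
            Thread taskA = new Thread(() -> System.out.println("task A running"));
            Thread taskB = new Thread(() -> System.out.println("task B running"));

            taskA.start();   // the OS decides when and on which core each thread runs
            taskB.start();

            taskA.join();    // wait for both tasks to finish
            taskB.join();
        }
    }

    On a multi-core machine the two tasks may run in parallel; on a single core they are merely interleaved, which is why the speed-up mentioned above is not guaranteed.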
  • Extreme C

    Extreme C

    Taking you to the limit in Concurrency, OOP, and the most advanced capabilities of C

    Concurrency simply means having multiple pieces of logic within a program being executed simultaneously. Modern software systems are often concurrent, as programs need to run various pieces of logic at the same time. As such, concurrency is something that every program today is using to a certain extent.
    We can say that concurrency is a powerful tool that lets you write programs that can manage different tasks at the same time, and the support for it usually lies in the kernel, which is at the heart of the operating system.
    There are numerous examples in which an ordinary program manages multiple jobs simultaneously. For example, you can surf the web while downloading files. In this case, tasks are being executed in the context of the browser process concurrently. Another notable example is in a video streaming scenario, such as when you are watching a video on YouTube. The video player might be in the middle of downloading future chunks of the video while you are still watching previously downloaded chunks.
    Even simple word-processing software has several concurrent tasks running in the background. As I write this chapter in Microsoft Word, a spell checker and a formatter are running in the background. If you were reading this in the Kindle application on an iPad, what programs do you think might be running concurrently as part of the Kindle program?
    Having multiple programs being run at the same time sounds amazing, but as with most technology, concurrency brings along with it several headaches in addition to its benefits. Indeed, concurrency brings some of the most painful headaches in the history of computer science! These "headaches," which we will address later on, can remain hidden for a long time, even for months after a release, and they are usually hard to find, reproduce, and resolve.
    We started this section by describing concurrency as having tasks executed at the same time, or concurrently. This description implies that the tasks are being run in parallel, but that's not strictly true. Such a description is too simple, as well as inaccurate, because being concurrent is different from being parallel.
  • C++ High Performance

    C++ High Performance

    Master the art of optimizing the functioning of your C++ code, 2nd Edition

    • Bjorn Andrist, Viktor Sehr, Ben Garney (Authors)
    • 2020 (Publication Date)
    • Packt Publishing (Publisher)
    • Sharing state between multiple threads in a safe manner is hard. Whenever we have data that can be read and written to at the same time, we need some way of protecting that data from data races. You will see many examples of this later on.
    • Concurrent programs are usually more complicated to reason about because of the multiple parallel execution flows.
    • Concurrency complicates debugging. Bugs that occur because of data races can be very hard to debug since they are dependent on how threads are scheduled. These kinds of bugs can be hard to reproduce and, in the worst-case scenario, they may even cease to exist when running the program using a debugger. Sometimes an innocent debug trace to the console can change the way a multithreaded program behaves and make the bug temporarily disappear. You have been warned!
    Before we start looking at Concurrent Programming using C++, a few general concepts related to concurrent and parallel programming will be introduced.

    Concurrency and parallelism

    Concurrency and parallelism are two terms that are sometimes used interchangeably. However, they are not the same and it is important to understand the differences between them. A program is said to run concurrently if it has multiple individual control flows running during overlapping time periods. In C++, each individual control flow is represented by a thread. The threads may or may not execute at the exact same time, though. If they do, they are said to execute in parallel. For a concurrent program to run in parallel, it needs to be executed on a machine that has support for parallel execution of instructions; that is, a machine with multiple CPU cores.
    At first glance, it might seem obvious that we always want concurrent programs to run in parallel if possible, for efficiency reasons. However, that is not necessarily always true. A lot of synchronization primitives (such as mutex locks) covered in this chapter are required only to support the parallel execution of threads. Concurrent tasks that are not run in parallel do not require the same locking mechanisms and can be a lot easier to reason about.
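    The book's examples are in C++; as a rough Java sketch of the same locking idea (class and method names are illustrative), the code below protects a shared counter with a lock so that two threads updating it at the same time cannot race on the shared value.

    import java.util.concurrent.locks.ReentrantLock;

    public class SharedCounter {
        private long value = 0;
        private final ReentrantLock lock = new ReentrantLock();  // plays the role of a mutex

        void increment() {
            lock.lock();          // only one thread may hold the lock at a time
            try {
                value++;          // the protected critical section
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            SharedCounter counter = new SharedCounter();
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter.increment();
                }
            };

            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            System.out.println(counter.value);  // reliably 200000 because of the lock
        }
    }

    ReentrantLock stands in here for the mutex locks mentioned above; Java's synchronized blocks would serve the same purpose.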
  • Expert Python Programming

    Expert Python Programming

    Master Python by learning the best coding practices and advanced programming concepts, 4th Edition

    • Michał Jaworski, Tarek Ziadé (Authors)
    • 2021 (Publication Date)
    • Packt Publishing (Publisher)
    Concurrency is often confused with actual methods of implementing it. Some programmers also think that it is a synonym for parallel processing. This is the reason why we need to start by properly defining concurrency. Only then will we be able to properly understand various concurrency models and their key differences.
    First and foremost, concurrency is not the same as parallelism. Concurrency is also not a matter of application implementation. Concurrency is a property of a program, algorithm, or problem, whereas parallelism is just one of the possible approaches to problems that are concurrent.
    In Leslie Lamport's 1978 paper Time, Clocks, and the Ordering of Events in a Distributed System, he defines the concept of concurrency as follows:
    "Two events are concurrent if neither can causally affect the other."
    By extrapolating events to programs, algorithms, or problems, we can say that something is concurrent if it can be fully or partially decomposed into components (units) that are order-independent. Such units may be processed independently from each other, and the order of processing does not affect the final result. This means that they can also be processed simultaneously or in parallel. If we process information this way (that is, in parallel), then we are indeed dealing with parallel processing. But this is still not obligatory.
    Doing work in a distributed manner, preferably using the capabilities of multicore processors or computing clusters, is a natural consequence of concurrent problems. However, that is not the only way of dealing with concurrency efficiently. There are many use cases where concurrent problems can be approached in ways other than synchronous, step-by-step execution, yet without the need for parallel execution. In other words, when a problem is concurrent, it gives you the opportunity to deal with it in a special, preferably more efficient, way.
    We often get used to solving problems in a classical way: by performing a sequence of steps. This is how most of us think and process information—using synchronous algorithms that do one thing at a time, step by step. But this way of processing information is not well suited to solving large-scale problems or to satisfying the demands of multiple users or software agents simultaneously.
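    To make the notion of order-independent units concrete, here is a small Java sketch (the decomposition chosen is hypothetical, not from the book): summing an array is split into two partial sums that can be computed in either order, or at the same time, without changing the final result.

    import java.util.concurrent.CompletableFuture;
    import java.util.stream.IntStream;

    public class OrderIndependentSum {
        public static void main(String[] args) {
            int[] data = IntStream.rangeClosed(1, 1000).toArray();

            // Two order-independent units: the sum of each half of the array.
            CompletableFuture<Long> left =
                    CompletableFuture.supplyAsync(() -> sum(data, 0, data.length / 2));
            CompletableFuture<Long> right =
                    CompletableFuture.supplyAsync(() -> sum(data, data.length / 2, data.length));

            // Whichever half finishes first, the combined result is always 500500.
            long total = left.join() + right.join();
            System.out.println(total);
        }

        static long sum(int[] a, int from, int to) {
            long s = 0;
            for (int i = from; i < to; i++) {
                s += a[i];
            }
            return s;
        }
    }

    The problem is concurrent in Lamport's sense because neither partial sum can causally affect the other; running them in parallel is then just one possible way of exploiting that property.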
  • C++ Programming for Linux Systems

    C++ Programming for Linux Systems

    Create robust enterprise software for Linux and Unix-based operating systems

    • Desislav Andreev, Stanimir Lukanov (Authors)
    • 2023 (Publication Date)
    • Packt Publishing (Publisher)
    This is where the preemption happens. While your task is running, it is suddenly suspended, and another task is scheduled for execution. Keep in mind that task switching is not a cheap process: the system consumes the processor’s computation resources to perform this action – to make the context switch. The conclusion should be the following: we have to design our systems to respect these limitations.
    On the other hand, parallelism is a form of concurrency that involves executing multiple operations simultaneously on separate processing units. For example, a computer with multiple CPUs can execute multiple tasks in parallel, which can lead to significant performance improvements. You don’t have to worry about the context switching and the preemption. It has its drawbacks, though, and we will discuss them thoroughly.
    Figure 6.2 – Parallel task execution
    Going back to our car example, if the CPU of the infotainment system is multi-core, then the tasks related to the navigation system could be executed on one core, and the tasks for the music processing on some of the other cores. Therefore, you don’t have to take any action to design your code to support preemption. Of course, this is only true if you are sure that your code will be executed in such an environment.
    The fundamental connection between concurrency and parallelism lies in the fact that parallelism can be applied to concurrent computations without affecting the accuracy of the outcome, but the presence of concurrency alone does not guarantee parallelism. In summary, concurrency is an important concept in computing that allows multiple tasks to be executed simultaneously, even though simultaneous execution is not guaranteed. This can lead to improved performance and efficient resource utilization, but at the cost of more complicated code that must respect the pitfalls concurrency brings.
  • High Performance Computing and the Art of Parallel Programming

    High Performance Computing and the Art of Parallel Programming

    An Introduction for Geographers, Social Scientists and Engineers

    • Stan Openshaw, Ian Turton (Authors)
    • 2005 (Publication Date)
    • Routledge (Publisher)
    p. 6);
    while Krishnamurthy (1989) writes
    ‘A simple-minded approach to gain speed, as well as power, in computing is through parallelism; here many computers would work together, all simultaneously executing some portions of a procedure used for solving a problem’ (p. 1).
    The key common point is to note that parallel processing is the solution of a single problem by using more than one processing element (or processor or node or CPU). This feat can be achieved in various ways: indeed, parallel programming is all about discovering how to program a computer with multiple CPUs in such a way that they can all be used with maximum efficiency to solve the same problem. This is how we would define it, and it is good to know that the experts all agree.
    However, it is important not to overemphasise the parallel bit, because it is not really all that novel or new! Indeed, parallelism is widely used, albeit on a small scale, in many computer systems that would not normally be regarded as being parallel processor hardware. Morse (1994) writes: ‘If by parallel we mean concurrent or simultaneous execution of distinct components then every machine from a $950 PC to a $30 million Cray C-90 has aspects of parallelism’ (p. 4). The key distinction is whether or not the parallelism is under the user’s control or is a totally transparent (i.e. invisible) part of the hardware that you have no explicit control over and probably do not know even exists. It is only the former sort that we need worry about, since it is this which we would like to believe is under our control.
    3.1.2  Jargon I
    Like many other areas of technology, parallel computing is a subject with some seemingly highly mysterious jargon of its own. Yet it occurs so often that you really do need to memorise at least some of it and either know in general terms what it all means or know enough to successfully guess the rest. There are various words and abbreviations that you may either never have come across before or never really understood. You will never be able to join in the small talk at an HPC conference bar unless you master the basic vocabulary and terminology! So here goes.
  • Getting Started with V Programming
    thread type. You will be able to understand the benefits of writing concurrent code in contrast to sequential code. Through this chapter, you will understand how to concurrently spawn functions such as void functions, functions that return values, as well as anonymous functions. You will also learn how to share data between the main thread and the tasks that are spawned to run concurrently.

    Technical requirements

    The full source code for this chapter is available at https://github.com/PacktPublishing/Getting-Started-with-V-Programming/tree/main/Chapter10 .
    It is recommended that you run the code examples in each of the sections in a fresh console or file with a .v extension to avoid clashes among variable names across examples.

    Introducing concurrency

    Concurrency means running tasks concurrently. While this might seem like a very abstract definition, let's consider the following real-world example. You wake up on a winter morning, and you need hot water to bathe. You can only bathe when the water is hot enough. However, you have other morning chores to finish while the water gets hot. So, you turn on the water heater and then, let's say, brush your teeth until the water heater indicates that the water is hot. Then, you switch off the water heater, enjoy a hot shower, and get ready for the day.
    The advantage of concurrency is that you can do multiple things simultaneously that don't have to follow a specific order. So, in this scenario, you don't have to remain idle waiting for the water to get hot; you can finish brushing your teeth in parallel. So, the order in which the tasks are completed is not very important.
    The term parallel I used previously is meant in a general, everyday sense. In the programming world, however, concurrency and parallelism are two different concepts. I'll explain parallelism in more detail in the Understanding parallelism section.
  • Grokking Functional Programming
    • Michal Plachta (Author)
    • 2023 (Publication Date)
    • Manning (Publisher)

    10 Concurrent programs

    In this chapter you will learn
    • how to declaratively design concurrent program flows
    • how to use lightweight virtual threads (fibers)
    • how to safely store and access data from different threads
    • how to process a stream of events asynchronously
    We know the past but cannot control it. We control the future but cannot know it.
    —Claude Shannon

    Threads, threads everywhere

    So far in the book we’ve been focusing on sequential programs: each program consisted of a sequence of expressions that were evaluated one by one using a single execution thread, usually connected to a single core.
    We won’t be focusing on cores (or CPUs) in this chapter. We will focus on having multiple threads. Note that multiple threads can still run on a single core. The operating system switches between different threads to make sure everyone gets a chance to progress.
    This mode of operation is very useful in practice. It’s far easier to understand a program when it’s sequential. It’s easier to debug, and it’s easier to make changes. However, we’ve witnessed some great improvements in the hardware area over the last decade, and right now the majority of consumer hardware is armed with multiple cores. The software needed to follow suit, and that’s how multithreaded programming became the way to develop modern applications. Our programs need to do many things at once to deliver results for our users faster. They need to use many threads to preprocess data in the background or split computations into multiple parallel chunks.
    Entering the multithreaded world means that we can no longer debug and understand our applications with the level of confidence we had with single-threaded sequential ones. The mainstream, imperative approach doesn’t help here, either. Having to deal with shared mutable state accessed by multiple threads, which additionally need to synchronize with each other while avoiding deadlocks and race conditions, turns out to be very hard to get right. On top of this, we still need to deal with all the problems we had in the sequential world, such as error handling and IO actions. Concurrency adds another layer of complexity.
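    As a small illustration of the "safely store and access data from different threads" point from the chapter outline (a hedged Java sketch rather than the book's own functional style), an atomic counter lets several threads update shared state without explicit locks and without the race conditions a plain field would invite.

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicSharedState {
        public static void main(String[] args) throws InterruptedException {
            AtomicInteger events = new AtomicInteger(0);   // shared, thread-safe state

            Runnable producer = () -> {
                for (int i = 0; i < 10_000; i++) {
                    events.incrementAndGet();   // atomic read-modify-write, no lock needed
                }
            };

            Thread t1 = new Thread(producer);
            Thread t2 = new Thread(producer);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            System.out.println(events.get());   // always 20000
        }
    }

    With a plain int field and unsynchronized increments, the printed total could be anything up to 20000, which is exactly the kind of nondeterminism that makes multithreaded bugs hard to reproduce.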
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.