Computer Science

Threading In Computer Science

Threading is a technique used in computer science to allow multiple tasks to be executed concurrently within a single process. It involves creating multiple threads of execution within a program, each of which can run independently and perform different tasks simultaneously. Threading is commonly used in applications that require high performance and responsiveness, such as video games and web servers.

Written by Perlego with AI-assistance

12 Key excerpts on "Threading In Computer Science"

  • Parallel Programming with C# and .NET Core
    eBook - ePub

    Parallel Programming with C# and .NET Core

    Developing Multithreaded Applications Using C# and .NET Core 3.1 from Scratch

  • Responsive UI : Applications built on frameworks such as Universal Windows Platform (UWP) or Xamarin have to deal with CPU-intensive operations or operations that may take too long to complete. While the user waits for the operation to complete, the application UI should remain responsive to user actions. You may have seen that dreadful "Not responding" status on one of your applications. This is a classic case of the main UI thread getting blocked. Proper use of threading can offload the main UI thread and keep the application UI responsive.
  • Handling concurrent requests on servers : When we develop a web application or web API hosted on one or more servers, it may receive a large number of requests from different client applications concurrently. These applications are expected to cater to those requests and respond in a timely fashion. If we use the ASP.NET/ASP.NET Core framework, this requirement is handled automatically, but threading is how the underlying framework achieves it.
  • Leveraging the full power of multi-core hardware : With modern machines powered by multi-core CPUs, effective threading provides a means to leverage the powerful hardware capabilities optimally.
  • Improving performance by proactive computing : Often, the algorithms or programs that we write require many precalculated values. In such cases, it's best to compute these values before they are needed, by calculating them in parallel. A great example of this scenario is 3D animation in a gaming application.
  • Now that we know the reason to use threading, let us see what it is.

    What is threading?

    Let's go back to our "human body" example. Each subsystem works independently of another, so even if there is a fault in one, another can continue to work (at least to start with). Just like our body, the Microsoft Windows operating system is very complex. It has several applications and services running independently of each other in the form of processes. A process is just an instance of the application running in the system, with dedicated access to address space, which ensures that data and memory of one process don't interfere with the other. This isolated process ecosystem makes the overall system robust and reliable for the simple reason that one faulting or crashing process cannot impact another. The same behavior is desired in any application that we develop as well. It is achieved by using threads, which are the basic building blocks for threading in the world of Windows and .NET.
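The excerpt's point, one process containing independently running threads, can be sketched in Java (the book's own examples are in C#; this translation and the name `worker-1` are purely illustrative):

```java
public class ThreadBasics {
    // Each thread runs its work independently; here the worker simply
    // records the name of the thread that executed it.
    public static String runOnNewThread() {
        final String[] result = new String[1];
        Thread worker = new Thread(() -> result[0] = Thread.currentThread().getName());
        worker.setName("worker-1");
        worker.start(); // begins executing concurrently with the main thread
        try {
            worker.join(); // wait for the worker to finish
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return result[0];
    }

    public static void main(String[] args) {
        System.out.println("main runs on: " + Thread.currentThread().getName());
        System.out.println("work ran on:  " + runOnNewThread());
    }
}
```

Both threads live inside the same process and therefore the same address space, which is why `result` written by the worker is readable from the main thread after `join()`.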
  • Enterprise Application Development with C# 10 and .NET 6
    • Ravindra Akella, Arun Kumar Tamirisa, Suneel Kumar Kunani, Bhupesh Guptha Muthiyalu(Authors)
    • 2022(Publication Date)
    • Packt Publishing
      (Publisher)
    We will see this in the Tasks and parallels section.
  • Multithreading : Multithreading is a way to achieve concurrency where new threads are created manually and executed concurrently, as with the CLR ThreadPool . In a multicore/multiprocessor system, multithreading helps to achieve parallelism by executing newly created threads in different cores.
  • Now that we understand the key terms in parallel programming, let's move on to look at how to create threads and the role of ThreadPool in .NET Core.

    Demystifying threads, lazy initialization, and ThreadPool

    A thread is the smallest unit in an operating system, and it executes instructions in the processor. A process is a bigger executing container, and the thread inside the process is the smallest unit to use processor time and execute instructions. The key thing to remember is that whenever your code needs to be executed in a process, it should be assigned to a thread. Each processor can only execute one instruction at a time; that's why, in a single-core system, at any point in time, only one thread is being executed. There are scheduling algorithms that are used to allocate processor time to a thread. A thread typically has a stack (which keeps track of execution history), registers in which to store various variables, and counters to hold instructions that need to be executed.
    A quick look at Task Manager will give us details regarding the number of physical and logical cores, and navigating to Resource Monitor will tell us about the CPU usage in each core. The following figure shows the details of a hyper-threading-enabled quad-core CPU that can execute eight threads in parallel at any point in time:
    Figure 4.2 – Task Manager and Resource Monitor
    A typical application in .NET Core has one single thread when it is started and can add more threads by manually creating them. A quick refresher on how this is done will be covered in the following sections.
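The logical-core count that Task Manager displays can also be queried from code in most runtimes; a small Java sketch (the number reported depends on the machine it runs on):

```java
public class CoreInfo {
    // Number of logical processors visible to the runtime; on the
    // hyper-threading-enabled quad-core CPU described above this
    // would typically report 8.
    public static int logicalCores() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("logical cores: " + logicalCores());
    }
}
```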
  • C# for Financial Markets
    • Daniel J. Duffy, Andrea Germani(Authors)
    • 2013(Publication Date)
    • Wiley
      (Publisher)
    24 Introduction to Multi-threading in C#

    24.1 INTRODUCTION AND OBJECTIVES
    In this part of the book we introduce a number of design and software tools to help developers take advantage of the computing power of multi-core processors and multi-processor computers. These computers support parallel programming models. The main reason for writing parallel code is to improve the performance (called the speedup) of software programs. Another advantage is that multi-threaded code can also promote the responsiveness of applications in general.
    We introduce a new programming model in this chapter. This is called the multi-threading model and it allows us to write software systems whose tasks can be carried out in parallel. This chapter is an introduction to multi-threaded programming techniques. We introduce the most important concepts that we need to understand in order to write multi-threaded applications in C#. A thread is a single sequential flow of control within a program. However, a thread itself is not a program. It cannot run on its own, but instead it runs within a program. A thread also has its own private data and it may be able to access shared data.
    When would we create multi-threaded applications? The general answer is performance. Some common scenarios are:
    • Parallel programming : much of the code in computational finance implements compute intensive algorithms whose performance (called the speedup ) we wish to improve by using a divide-and-conquer strategy to assign parts of the algorithms to separate processors. A discussion of the divide-and-conquer and other parallel design patterns is given in Mattson, Sanders and Massingill 2005.
    • Simultaneous processing of requests
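The divide-and-conquer strategy described above can be sketched in Java (the book's examples are in C#; this is an illustrative translation, not the authors' code): the input is split in two, each half is summed on its own thread, and the partial results are combined.

```java
import java.util.Arrays;

public class ParallelSum {
    // Divide-and-conquer: split the array into two halves, sum each half
    // on its own thread, then combine the partial results.
    public static long sum(long[] data) {
        final int mid = data.length / 2;
        final long[] partial = new long[2];
        Thread left  = new Thread(() -> partial[0] = Arrays.stream(data, 0, mid).sum());
        Thread right = new Thread(() -> partial[1] = Arrays.stream(data, mid, data.length).sum());
        left.start();
        right.start();
        try {
            left.join();  // wait for both halves before combining
            right.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return partial[0] + partial[1];
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data)); // 500000500000
    }
}
```

On a multi-core machine the two halves genuinely run in parallel, which is the source of the speedup the text refers to.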
  • Java Professional Interview Guide
    eBook - ePub

    Java Professional Interview Guide

    Learn About Java Interview Questions and Practise Answering About Concurrency, JDBC, Exception Handling, Spring, and Hibernate

    In this chapter, you will learn what a thread is and how to create one. We will also discuss how different threads can communicate with each other to create an efficient application. In the later stages, we will discuss the concurrency API of Java, which makes multithreaded programming much simpler than the traditional programming approach.

    What is multithreading? Can you explain multithreading?

    Multithreading is a technique in which multiple threads are executed simultaneously. Multithreading consumes less memory than multiprocessing. As threads are lightweight, they enhance the performance of an application.

    What is a thread?

    A thread is a lightweight sub-process. It is a separate path of execution because each thread runs with its own stack. A process may contain multiple threads. Threads share the process resources, but they still execute independently.

    What are the advantages of multithreading?

    The advantages are as follows:
    • Threads are lightweight, so they enhance application performance.
    • Multithreading reduces the number of servers required, as one server can execute multiple threads at a time.
    • Even while a background task is running, multithreading allows an application to remain responsive to input.
    • As multithreading involves executing multiple threads independently, it allows faster execution of tasks.
    • Threads share common memory resources, so multithreading provides better utilization of cache memory.

    What is a process? Can you differentiate between a process and a thread?

    A program in execution is called a process , and a thread is a subset of a process.
    The differences between them are as follows:
    • Processes are independent, whereas threads are subsets of a process.
    • Processes have separate address spaces in memory, whereas threads share the address space of their parent process.
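The shared-address-space point above can be demonstrated with a short Java sketch (the array and the values written are arbitrary, chosen only for illustration):

```java
public class SharedAddressSpace {
    // Threads of one process share its address space: both workers below
    // write into the same array, which the starting thread then reads.
    public static int[] fillFromTwoThreads() {
        final int[] shared = new int[2];
        Thread t1 = new Thread(() -> shared[0] = 10);
        Thread t2 = new Thread(() -> shared[1] = 20);
        t1.start();
        t2.start();
        try {
            t1.join(); // join also guarantees the writes are visible here
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return shared;
    }

    public static void main(String[] args) {
        int[] r = fillFromTwoThreads();
        System.out.println(r[0] + ", " + r[1]); // 10, 20
    }
}
```

Two separate processes could not share `shared` this way without an explicit inter-process mechanism, which is exactly the difference the list above draws.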
  • Enterprise Application Development with C# 9 and .NET 5
    • Ravindra Akella, Arun Kumar Tamirisa, Suneel Kumar Kunani, Bhupesh Guptha Muthiyalu(Authors)
    • 2021(Publication Date)
    • Packt Publishing
      (Publisher)
    Concurrency : This entails doing many tasks at the same time, such as in our previous example of replying to an email while queuing for a restaurant counter, or the chef seasoning dish 1 and heating the pan for dish 2. In terms of enterprise applications, concurrency involves multiple threads sharing a core and, based on their time slicing, executing tasks and performing context switching.
  • Asynchronous : Asynchronous programming is a technique that relies on executing tasks asynchronously instead of blocking the current thread while it is waiting. In our example, asynchronicity is waiting for your token to be called for you to go to the pickup counter while the chef is working on preparing your food, but while you're waiting, you have moved away from the ordering counter, thereby allowing other orders to be placed. This is like a task that executes asynchronously and frees up resources while waiting on an I/O task (for instance, while waiting on data from a database call). The beauty of asynchronicity is that tasks are executed either parallelly or concurrently, which is completely abstracted from developers by the framework and lets the developer focus their development efforts on the business logic of the application rather than on managing tasks. We will see this in the Tasks and parallels section.
  • Multithreading : Multithreading is a way to achieve concurrency where new threads are created manually and executed concurrently, as with the CLR ThreadPool . In a multicore/multiprocessor system, multithreading helps in achieving parallelism by executing newly created threads in different cores.
  • Now that we have an understanding of the key terms in parallel programming, let's move on to look at how to create threads and the role of ThreadPool in .NET Core.

    Demystifying threads, lazy initialization, and ThreadPool

    The thread is the smallest unit in Windows and it executes instructions in the processor. A process is a bigger executing container, and the thread inside the process is the smallest unit to use processor time and execute instructions. The key thing to remember is that whenever your code needs to be executed in a process, it should be assigned to a thread. Each processor can only execute one instruction at a time; that's why, in a single-core system, at any point in time only one thread is being executed. There are scheduling algorithms that are used to allocate processor time to a thread. A thread typically has a stack (which keeps track of execution history), registers in which to store various variables, and counters to hold instructions that need to be executed.
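Java's ExecutorService is a reasonable stand-in for the CLR ThreadPool the text refers to; a minimal sketch (not the book's code) of handing work to pooled threads rather than creating a thread per task:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    // Work items are queued and executed on a small set of pre-created
    // threads, instead of one thread being created per task.
    public static int squareOnPool(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> result = pool.submit(() -> n * n); // runs on a pool thread
            return result.get(); // blocks until the task completes
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(squareOnPool(7)); // 49
    }
}
```

Reusing pooled threads avoids the per-thread creation and context-switching costs the surrounding excerpts describe.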
  • Learn C# Programming
    eBook - ePub

    Learn C# Programming

    A guide to building a solid foundation in C# language for writing efficient programs

    • Marius Bancila, Raffaele Rialdi, Ankit Sharma(Authors)
    • 2020(Publication Date)
    • Packt Publishing
      (Publisher)
    At the same time, blocking the code execution is also not acceptable, and therefore a different strategy is required. This domain of problems is categorized under asynchronous programming and requires slightly different tools. In this chapter, we will learn the basics of multithreading and asynchronous programming and look specifically at the following:
    • What is a thread?
    • Creating threads in .NET
    • Understanding synchronization primitives
    • The task paradigm
    By the end of this chapter, you will be familiar with multithreading techniques, using primitives to synchronize code execution, tasks, continuations, and cancellation tokens. You will also understand what the potentially dangerous operations are and the basic patterns to use to avoid problems when sharing resources among multiple threads. We will now begin familiarizing ourselves with the basic concepts needed to operate with multithreading and asynchronous programming.

    What is a thread?

    Every OS provides abstractions to allow multiple programs to share the same hardware resources, such as CPU, memory, and input and output devices. The process is one of those abstractions, providing a reserved virtual address space that its running code cannot escape from. This basic sandbox avoids the process code interfering with other processes, establishing the basis for a balanced ecosystem. The process has nothing to do with code execution, but primarily with memory. The abstraction that takes care of code execution is the thread. Every process has at least one thread, but any process code may request the creation of more threads that will all share the same virtual address space, delimited by the owning process.
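The synchronization primitives this chapter lists are C#-specific, but the underlying idea carries over directly; an illustrative Java sketch using the intrinsic lock (monitor) that every object carries:

```java
public class SyncPrimitives {
    // Mutual exclusion via `synchronized`: only one thread at a time
    // may hold the monitor of `lock`.
    private final Object lock = new Object();
    private int balance = 0;

    public void deposit(int amount) {
        synchronized (lock) { // one thread at a time runs this block
            balance += amount;
        }
    }

    public int balance() {
        synchronized (lock) {
            return balance;
        }
    }

    // Run many depositing threads; with the lock, no update is lost.
    public static int depositFromThreads(int threads, int amount) {
        SyncPrimitives account = new SyncPrimitives();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> account.deposit(amount));
            workers[i].start();
        }
        for (Thread t : workers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return account.balance();
    }

    public static void main(String[] args) {
        System.out.println(depositFromThreads(8, 100)); // 800
    }
}
```

Without the `synchronized` blocks, concurrent `balance += amount` updates could interleave and lose increments, which is exactly the sharing hazard the chapter warns about.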
  • Apps and Services with .NET 8
    eBook - ePub

    Apps and Services with .NET 8

    Build practical projects with Blazor, .NET MAUI, gRPC, GraphQL, and other enterprise technologies

    5

    Multitasking and Concurrency

    This chapter is about allowing multiple actions to occur at the same time to improve performance, scalability, and user productivity for the applications that you build. In this chapter, we will cover the following topics:
    • Understanding processes, threads, and tasks
    • Running tasks asynchronously
    • Synchronizing access to shared resources
    • Understanding async and await

    Understanding processes, threads, and tasks

    A process , with one example being each of the console applications we have created, has resources like memory and threads allocated to it.
    A thread executes your code statement by statement. By default, each process only has one thread, and this can cause problems when we need to do more than one task at the same time. Threads are also responsible for keeping track of things like the currently authenticated user and any internationalization rules that should be followed for the current language and region.
    Windows and most other modern operating systems use preemptive multitasking , which simulates the parallel execution of tasks. It divides the processor time among the threads, allocating a time slice to each thread one after another. The current thread is suspended when its time slice finishes. The processor then allows another thread to run for a time slice.
    When Windows switches from one thread to another, it saves the context of the thread and reloads the previously saved context of the next thread in the thread queue. This takes both time and resources to complete.
    As a developer, if you have a small number of complex pieces of work and you want complete control over them, then you could create and manage individual Thread instances. If you have one main thread and multiple small pieces of work that can be executed in the background, then you can use the ThreadPool
  • Hands-On System Programming with Linux
    eBook - ePub

    Hands-On System Programming with Linux

    Explore Linux system programming interfaces, theory, and practice

    A thread is an independent execution (or flow) path within a process. The life and scope of a thread, in the familiar procedural programming paradigm we typically work with, is simply a function.
    So, in the traditional model we mentioned previously, we have a single thread of execution; that thread, in the C programming paradigm, is the main() function! Think about it: the main() thread is where execution begins (well, at least from the app developer's viewpoint) and ends. This model is (now) called the single threaded software model. As opposed to what? The multithreaded one, of course. So, there we have it: it is possible to have more than one thread alive and executing concurrently (in parallel) with other independent threads within the same process.
    But, hang on, can't processes generate parallelism too and have multiple copies of themselves working on different aspects of the application? Yes, of course: we have covered the fork(2) system call in all its considerable glory (and implications) in Chapter 10 , Process Creation . This is known as the multiprocessing model. So, if we have multiprocessing – where several processes run in parallel and, hey, they get the work done—the million dollar question becomes: "why multithreading at all?" (Kindly deposit a million dollars and we shall provide the answer.) There are several good reasons; check out the upcoming sections (especially Motivation – why threads?
  • Expert C++
    eBook - ePub

    Expert C++

    Become a proficient programmer by learning coding best practices with C++17 and C++20's latest features

    • Vardan Grigoryan, Shunguang Wu(Authors)
    • 2020(Publication Date)
    • Packt Publishing
      (Publisher)
    One process stores the results of the calculation to a shared segment in the memory, and the second process reads it from the segment. In the context of our previous example, the teacher and their assistants share their checking results in a shared paper. Threads, on the other hand, share the address space of the process because they run in the context of the process. While a process is a program, a thread is a function rather than a program. That said, a process must have at least one thread , which we call the thread of execution. A thread is the container of instructions of a program that are run in the system, while the process encapsulates the thread and provides resources for it. Most of our interest lies in threads and their orchestration mechanisms. Let's now meet them in person.

    Threads

    A thread is a section of code in the scope of a process that can be scheduled by the OS scheduler. While a process is the image of the running program, managing multi-process projects along with IPC is much harder and sometimes useless compared to projects leveraging multithreading. Programs deal with data and, usually, collections of data. Accessing, processing, and updating data is done by functions that are either the methods of objects or free functions composed together to achieve an end result. In most projects, we deal with tens of thousands of functions and objects. Each function represents a bunch of instructions wrapped under a sensible name used to invoke it by other functions. Multithreading aims to run functions concurrently to achieve better performance.
    For example, a program that calculates the sum of three different vectors and prints them calls the function calculating the sum for the first vector, then for the second vector, and finally, for the last one. It all happens sequentially. If the processing of a single vector takes A amount of time, then the program will run in 3A time. The following code demonstrates the example:
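The excerpt cuts off before the book's C++ listing; the following is an illustrative Java sketch of the same idea (sequential 3A versus concurrent), not the authors' code:

```java
import java.util.Arrays;

public class VectorSums {
    static long sum(long[] v) {
        long s = 0;
        for (long x : v) s += x;
        return s;
    }

    // Sequential version from the text: the three sums run one after
    // another, so the total time is roughly three times one sum (3A).
    public static long[] sumAllSequential(long[] a, long[] b, long[] c) {
        return new long[] { sum(a), sum(b), sum(c) };
    }

    // Concurrent version: each sum runs on its own thread, so on a
    // multi-core machine the total time approaches A instead of 3A.
    public static long[] sumAllConcurrent(long[] a, long[] b, long[] c) {
        final long[] r = new long[3];
        Thread t1 = new Thread(() -> r[0] = sum(a));
        Thread t2 = new Thread(() -> r[1] = sum(b));
        Thread t3 = new Thread(() -> r[2] = sum(c));
        t1.start(); t2.start(); t3.start();
        try {
            t1.join(); t2.join(); t3.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return r;
    }

    public static void main(String[] args) {
        long[] a = {1, 2, 3}, b = {4, 5}, c = {6};
        System.out.println(Arrays.toString(sumAllConcurrent(a, b, c))); // [6, 9, 6]
    }
}
```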
  • Learn Java 17 Programming
    As you can see, there are many ways to get results from a thread. The method you choose depends on the particular needs of your application.

    Parallel versus concurrent processing

    When we hear about working threads executing at the same time, we automatically assume that they literally do what they are programmed to do in parallel. Only after we look under the hood of such a system do we realize that such parallel processing is possible only when the threads are each executed by a different CPU; otherwise, they time-share the same processing power. We perceive them working at the same time only because the time slots they use are very short—a fraction of the time units we use in our everyday life. When threads share the same resource, in computer science, we say they do it concurrently .

    Concurrent modification of the same resource

    Two or more threads modifying the same value while other threads read it is the most general description of one of the problems of concurrent access. Subtler problems include thread interference and memory consistency errors, both of which produce unexpected results in seemingly benign fragments of code. In this section, we are going to demonstrate such cases and ways to avoid them.
    At first glance, the solution seems quite straightforward: allow only one thread at a time to modify/access the resource, and that’s it. But if access takes a long time, it creates a bottleneck that might eliminate the advantage of having many threads working in parallel. Or, if one thread blocks access to one resource while waiting for access to another resource and the second thread blocks access to a second resource while waiting for access to the first one, it creates a problem called a deadlock . These are two very simple examples of possible challenges a programmer may encounter while using multiple threads.
    First, we’ll reproduce a problem caused by the concurrent modification of the same value. Let’s create a Calculator
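The excerpt's Calculator example is cut off; as an illustrative sketch (not the book's code), this is the usual shape of the fix for concurrent modification, making the read-modify-write atomic so that no update is lost:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    // A plain `count++` performed by many threads can lose updates,
    // because it is really a read, an add, and a write that can
    // interleave. AtomicInteger makes the whole step indivisible.
    private final AtomicInteger count = new AtomicInteger();

    public void increment() { count.incrementAndGet(); }
    public int value() { return count.get(); }

    public static int countWithThreads(int threads, int perThread) {
        SafeCounter counter = new SafeCounter();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return counter.value();
    }

    public static void main(String[] args) {
        System.out.println(countWithThreads(4, 100_000)); // 400000
    }
}
```

A lock would work too, but as the text notes, holding access for a long time creates a bottleneck; a single atomic operation keeps the critical section as small as possible.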
  • Hands-On Parallel Programming with C# 8 and .NET Core 3
    eBook - ePub

    Hands-On Parallel Programming with C# 8 and .NET Core 3

    Build solid enterprise software using task parallelism and multithreading

    In single-core systems, there aren't many performance gains that can be achieved via multithreading. Moreover, context switching comes with performance overheads. If the work that's allocated to a thread spans multiple time slices, then the thread needs to be switched in and out of memory. Every time it switches out, it needs to bundle and save its state (data) and reload it when it switches back in.
    Concurrency is a concept that's primarily used in the context of multi-core processors. A multi-core processor has a higher number of CPUs available, as we discussed previously, and therefore different threads can be run simultaneously on different CPUs. A higher number of processors means a higher degree of concurrency.
    There are multiple ways that threads can be created in programs. These include the following:
    • The Thread class
    • The ThreadPool class
    • The BackgroundWorker class
    • Asynchronous delegates
    • TPL
    We will cover asynchronous delegates and TPL in depth during the course of this book, but in this chapter, we will provide an explanation of the remaining three methods.

    Thread class

    The simplest and easiest way of creating threads is via the Thread class, which is defined in the System.Threading namespace. This approach has been used since the arrival of .NET version 1.0 and it works with .NET Core as well. To create a thread, we need to pass a method that the thread needs to execute. The method can either be parameter-less or parameterized. There are two delegates that are provided by the framework to wrap these functions:
    • System.Threading.ThreadStart
    • System.Threading.ParameterizedThreadStart
    We will learn both of these through examples. Before showing you how to create a thread, I will try to explain how a synchronous program works. Later on, we will introduce multithreading so that we understand the asynchronous way of execution.

    An example of how to create a thread
  • Expert Python Programming
    eBook - ePub

    Expert Python Programming

    Master Python by learning the best coding practices and advanced programming concepts, 4th Edition

    • Michał Jaworski, Tarek Ziadé(Authors)
    • 2021(Publication Date)
    • Packt Publishing
      (Publisher)
    RDW ). The latter is a popular design approach for web applications that allows you to display the same web application well on a variety of media (such as desktop browsers, mobiles, or tablets).

    Multiuser applications

    Serving multiple users simultaneously may be understood as a special case of application responsiveness. The key difference is that here the application has to satisfy the parallel inputs of many users and each one of them may have some expectations about how quickly the application should respond. Simply put, one user should not have to wait for other user inputs to be processed in order to be served.
    Threading is a popular concurrency model for multiuser applications and is extremely common in web applications. For instance, the main thread of a web server may accept all incoming connections but dispatch the processing of every single request to a separate dedicated thread. This usually allows us to handle multiple connections and requests at the same time. The number of connections and requests the application will be able to handle at the same time is only constrained by the ability of the main thread to quickly accept connections and dispatch requests to new threads. A limitation of this approach is that applications using it can quickly consume many resources. Threads are not free: memory is shared but each thread will have at least its own stack allocated. If the number of threads is too large, the total memory consumption can quickly get out of hand.
    Another model of threaded multiuser applications assumes that there is always a limited pool of threads acting as workers that are able to process incoming user inputs. The main thread is then only responsible for allocating and managing the pool of workers. Web applications often utilize this pattern too. A web server, for instance, can create a limited number of threads and each of those threads will be able to accept connections on its own and handle all requests incoming on that connection. This approach usually allows you to serve fewer users at the same time (compared to one thread per request) but gives more control over resource usage. Two very popular Python WSGI-compliant web servers—Gunicorn and uWSGI—allow serving HTTP requests with threaded workers in a way that generally follows this principle.
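The worker-pool model described above can be sketched in Java (the excerpt discusses Python servers; this translation is purely illustrative, with "requests" modeled as strings):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WorkerPool {
    // A fixed number of worker threads serve many incoming "requests";
    // the pool size caps resource usage, as the text explains.
    public static List<String> handleRequests(List<String> requests, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<String>> pending = new ArrayList<>();
            for (String req : requests) {
                // If all workers are busy, the task waits in the pool's queue.
                pending.add(pool.submit(() -> "handled:" + req));
            }
            List<String> responses = new ArrayList<>();
            for (Future<String> f : pending) {
                responses.add(f.get()); // wait for each response
            }
            return responses;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequests(List.of("a", "b", "c"), 2));
    }
}
```

Compared with one thread per request, the bounded pool serves fewer users at once but keeps memory consumption (one stack per thread) under control, mirroring the trade-off in the excerpt.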
  • Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.