Creating Components

Object Oriented, Concurrent, and Distributed Computing in Java

eBook - ePub
448 pages
English

About This Book

Concurrency is a powerful technique for developing efficient and lightning-fast software. For instance, concurrency can be used in common applications such as online order processing to speed processing and ensure transaction reliability. However, mastering concurrency is one of the greatest challenges for both new and veteran programmers. Softwar

Creating Components by Charles W. Kann is available in PDF and ePUB formats.

Information

Year: 2017
ISBN: 9781135505905
Edition: 1

Chapter 1: Introduction to Concurrent Programming and Components


1.1 Introduction

This chapter introduces the topics of the book, particularly concurrency and components. Because the concept of concurrency, particularly as it applies to programming, is so poorly understood by novice programmers, this chapter begins by giving a working definition of concurrent programming. This definition abandons the largely useless definition of concurrency as two programs running at the same time, replacing it with a definition that deals with how concurrency affects the implementation of a solution to the problem.
Once the definition of concurrent programming has been given, special purpose objects called concurrent components are introduced. These objects are the most interesting objects in concurrent programming because they are the ones that coordinate the activities in a concurrent program. Without concurrent components a concurrent program is simply a set of unrelated activities. It is the components that allow these activities to work together to solve a problem. Components are also the most difficult objects to write. This is because the activities (or active objects) correspond closely to normal procedural programs, but components require a change in the way that most programmers think about programs. It is also in components that the problems specific to concurrent programming, such as race conditions and deadlock, are found and dealt with. The rest of the book is about how to implement concurrent programs using these concurrent components.
Finally, this chapter explains the different reasons for writing concurrent programs and how each reason leads to a different type of program. Part of understanding concurrent programming is realizing that there is more than one reason to do it. An important aspect of any program is that it should solve a problem, and concurrency improves the solutions to many different types of problems. Each of these problem types looks at the problem to be solved in a slightly different manner and thus requires the programmer to approach the problem in a slightly different way.

1.2 Chapter Goals

After completing this chapter, you should be able to:
  • Understand why concurrent programming is important.
  • Give a working definition of a concurrent program.
  • Understand the two types of synchronization and give examples of each.
  • Give a definition of the term component and know what special problems can be encountered when using components.
  • Describe several different reasons for doing concurrent programming and how each of these reasons leads to different design decisions and different program implementation.

1.3 What Is Concurrent Programming?

The purpose of this book is to help programmers understand how to create concurrent programs. Specifically, it is intended to help programmers understand and program special concurrent objects, called concurrent components.* Because these components are used only in concurrent programs, a good definition of a concurrent program is needed before components can be defined and methods given for their implementation. This section provides a good working definition of a concurrent program after first explaining why concurrent programming is an important concept for a programmer to know. The working definition of a concurrent program provided here will serve as a basis for understanding concurrent programming throughout the rest of the book.

1.3.1 Why Do Concurrent Programming?

The first issue in understanding concurrent programming is to provide a justification for studying it. Most students and, indeed, many professional programmers have never written a program that explicitly creates Java threads, and it is possible to have a career in programming without ever creating a thread. Therefore, many programmers believe that concurrency is not used in most real systems and is a sidebar that can be safely ignored. However, the fact that the use of concurrency is hidden from programmers is itself a problem, as the effects of a concurrent program can seldom be safely ignored.
When asked in class, most students say that they have never implemented a concurrent program, but then they can be shown Exhibit 1 (Program1.1). This program puts a button in a JFrame and then calculates Fibonacci numbers in a loop. The fact that there is no way to set the value of stopProgram to true within the loop implies that the loop is infinite, and so it can never stop; however, when the button is pressed the loop eventually stops. When confronted with this behavior, most students correctly point out that when the Stop Calculation button is pressed the value of stopProgram is set to true and the loop can exit; however, at no place in the loop is the button checked to see if it has been pressed. So, some mechanism external to the loop must be present that allows the value of stopProgram to be changed. The mechanism that allows this value to be changed is concurrency.
What is happening in Exhibit 1 (Program1.1) is that, behind the scenes and hidden from the programmer, a separate thread, the Graphical User Interface (GUI) thread, was started. This thread is a thread started by Java that is running all the time, waiting for the Stop Calculation button to be pressed. When this button is pressed, the GUI thread runs for a short period of time concurrently with the main thread (the thread doing the calculation of Fibonacci numbers) and sets the value of stopProgram to true. Thus, Exhibit 1 (Program1.1) is a very simple example of a concurrent program. Because nearly every Java programmer at some point has written a program that uses buttons or other Abstract Window Tool Kit (AWT) or Swing components, nearly every Java programmer has written a concurrent program.
This brings up the first reason to study concurrent programming. Regardless of what a programmer might think, concurrent programming is ubiquitous; it is everywhere. Programmers using visual components in nearly any language are probably using some form of concurrency to implement those components. Programmers programming distributed systems, such as programs that run on Web servers that produce Web pages, are doing concurrent programming. Programmers who write UNIX ".so" (shared object) files or Windows ".com" or ".dll" files are writing concurrent programs. Concurrency is present, if hidden, in nearly every major software project, and it is unlikely that a programmer with more than a few years left in a career could get by without encountering it at some point. And, as will be seen in the rest of the book, while the fact that a program is concurrent can be hidden, the effects of failing to account for concurrency can be catastrophic.
Exhibit 1. Program1.1: A Program To Calculate Fibonacci Numbers [listing not reproduced in this copy]
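The listing for Program1.1 survives here only as an image. As a rough console analogue of the same idea (a hypothetical sketch, not the book's listing; every name except stopProgram is invented), a second thread can play the role of the hidden GUI thread, setting a shared flag that the main calculation loop watches but never sets itself:

```java
// Sketch: a second thread stands in for the GUI thread that reacts to the
// Stop Calculation button, setting a flag the main loop only ever reads.
public class StopFlagDemo {
    // volatile so the main thread is guaranteed to see the update
    public static volatile boolean stopProgram = false;

    // Iterative Fibonacci: after n loop iterations, a holds F(n)
    public static long fibonacci(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread stopper = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            stopProgram = true;   // the "button press"
        });
        stopper.start();

        long iterations = 0;
        while (!stopProgram) {    // nothing inside this loop sets the flag...
            fibonacci(30);
            iterations++;
        }                         // ...yet the loop still terminates
        stopper.join();
        System.out.println("Stopped after " + iterations + " iterations");
    }
}
```

As in Program1.1, the loop terminates only because a second, asynchronous activity changes stopProgram from outside the loop.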
The second reason to study concurrent programming is that breaking programs into parts using concurrency can significantly reduce the complexity of a program. For example, there was a time when implementing buttons, as in Exhibit 1 (Program1.1), required the main loop itself to check whether or not a button had been pressed. This meant that a programmer had to consistently put code throughout a program to make sure that events were properly handled. Using threads allows this checking to be handled in a separate thread, relieving the program of that responsibility. The use of such threads allows programmers to write code to solve their problem, not to perform maintenance checks for other objects.
The third reason to study concurrent programming is that its use is growing rapidly, particularly in the area of distributed systems. Every system that runs part of the program on separate computers is by nearly every definition (including the one used in this book) concurrent. This means every browser access to a Web site involves some level of concurrency. This chain of concurrency does not stop at the Web server but normally extends to the resources that the Web server program uses. How to properly implement these resources requires the programmer to at least understand the problems involved in concurrent access or the program will have problems, such as occasionally giving the wrong answer or running very slowly.
The rest of this text is devoted to illustrating how to properly implement and control concurrency in a program and how to use concurrency with objects in order to simplify and organize a program. However, before the use of concurrency can be described, a working definition of concurrency, particularly in relationship to objects, must be given. Developing that working definition is the purpose of the rest of this chapter.

1.3.2 A Definition of Concurrent Programming

Properly defining a concurrent program is not an easy task. The simplest definition would be that a concurrent program is two or more programs running at the same time, but this definition is far from satisfactory. Consider Exhibit 1 (Program1.1). This program has been described as concurrent, in that the GUI thread is running separately from the main thread and can thus set the value of the stopProgram variable outside of the calculation loop in the main thread. However, if this program is run on a computer with one Central Processing Unit (CPU), as most Windows computers are, it is impossible for more than one instruction to be run at a time; thus, by the simple definition given above, this program is not concurrent.
Another problem with this simple definition can be illustrated by the example of two computers, one running a word processor in San Francisco and another running a spreadsheet in Washington, D.C. By the definition of a concurrent program above, these are concurrent. However, because the two programs are in no way related, the fact that they are concurrent is really meaningless.
It seems obvious that a good definition of concurrent programming would define the first example as concurrent and the second as not concurrent; therefore, something is fundamentally wrong with this simple definition of concurrent programming. In fact, the simpleminded notion of concurrency involving two activities occurring at the same time is a poor foundation on which to attempt to build a better definition of the term concurrency. To create a definition of concurrency that can be used to describe concurrent programming, a completely new foundation needs to be built. A better, workable definition is supplied in the rest of Section 1.3.2.

1.3.2.1 Asynchronous Activities

Defining a concurrent program begins by defining the basic building block of a program which will be called an activity. An activity could be formally defined as anything that could be done by an abstract Turing machine or as an algorithm. However, what is of interest here is a working definition, and it is sufficient to define an activity as simply a series of steps implemented to perform a task. Examples of an activity would be baking a pie or calculating a Fibonacci number on a computer. The steps required to perform a task will be called an ordering.
Activities can be broken down into subactivities, each an activity itself. For example, baking a pie could consist of making the crust, making the filling, filling the crust with the filling, and baking the pie. Exhibit 2 shows these steps, where the crust must first be made, then the filling made, the filling added to the crust, and the pie baked. If the order of these activities is completely fixed, then the ordering is called a total ordering, as all steps in all activities are ordered. In the case of a total ordering of events, the next step to be taken can always be determined within a single activity. An activity for which the order of the steps is determined by the activity itself is called a synchronous activity. Note that partial orderings can also be implemented by synchronous activities, using the programming equivalents of "if" and "while" statements.
Exhibit 2. Synchronous Activity to Make a Pie [figure not reproduced in this copy]
In the case of making a pie it is not necessary to first make the crust and then make the filling. The filling could be made the night before, and the crust could then be made in the morning before combining the two to make a pie. If the order in which the crust and the filling are made can be changed, then the ordering is called a partial ordering (the order of steps to make the crust and the order of steps to make the filling remain fixed, but either can be done first). However, if one activity must always finish before the other begins, it is possible to implement this behavior with a synchronous activity.
Exhibit 3. One Possible Example of Asynchronous Activities in Making a Pie [figure not reproduced in this copy]
A special case occurs when, for a partial ordering, the next step is not determined by a single activity. To show this, several values of time must be defined. The time after which preparing the crust can be started is t1c, and the time by which it must be completed is t2c. The time after which preparing the filling can be started is t1f, and the time by which it must be completed is t2f. Now, if [(t1c <= t1f < t2c) || (t1f <= t1c < t2f)], then the activities of making the crust and the filling can (but do not necessarily have to) overlap. If the steps overlap, then the overall ordering of the steps cannot be determined within any one task or, thus, any one activity. One example of this situation for baking a pie is illustrated in the Gantt chart in Exhibit 3. Note that many other timelines are possible, as making the crust does not have to start at t1c, nor does it have to end at t2c; it simply has to occur between those two times. The same is true of making the filling. The two activities might not actually overlap; it is sufficient that they can overlap.
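The overlap condition above can be written directly as a boolean test. The following helper is purely illustrative (the class and method names are invented here); it uses the same t1c, t2c, t1f, t2f from the text, with times in arbitrary units:

```java
// Can the crust interval [t1c, t2c] and the filling interval [t1f, t2f]
// overlap? This is the text's condition (t1c <= t1f < t2c) || (t1f <= t1c < t2f).
public class OverlapCheck {
    public static boolean canOverlap(int t1c, int t2c, int t1f, int t2f) {
        return (t1c <= t1f && t1f < t2c) || (t1f <= t1c && t1c < t2f);
    }

    public static void main(String[] args) {
        // Crust may be made from time 0 to 4, filling from 2 to 6: overlap possible.
        System.out.println(canOverlap(0, 4, 2, 6)); // prints true
        // Filling must finish (at 2) before the crust may start (at 3): no overlap.
        System.out.println(canOverlap(3, 5, 0, 2)); // prints false
    }
}
```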
The only way that these two activities can overlap in this manner is if the lists of steps for the activities are being executed independently. For example, it is possible that two bakers are responsible for the pie, one making the filling and the other making the crust. It is also possible that one baker is responsible for both the crust and filling, but they are switching back and forth from doing steps from one part of the recipe (making the crust) to another part of the recipe (making the filling). However they are accomplished, by the definition given here the steps involved in the two subtasks are being executed independently, or asynchronously, of each other. This type of activity is called an asynchronous activity.
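The two-bakers case can be sketched with two Java threads, one per subactivity (a hypothetical illustration, not a listing from the book; all names are invented). Each thread executes its own ordered list of steps, but the interleaving of the two lists is determined by neither:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PieBakers {
    // Each thread is one asynchronous activity; the shared log records the
    // order in which the steps actually happened (synchronized because both
    // threads append to it).
    public static List<String> bakePie() throws InterruptedException {
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        Thread crust = new Thread(() -> {
            log.add("crust: mix dough");
            log.add("crust: roll out");
        });
        Thread filling = new Thread(() -> {
            log.add("filling: peel apples");
            log.add("filling: add sugar");
        });
        crust.start();
        filling.start();
        // The four steps above may interleave in any order; the activities
        // synchronize only here, since baking needs both to be finished.
        crust.join();
        filling.join();
        log.add("fill crust and bake");
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        for (String step : bakePie()) System.out.println(step);
    }
}
```

Within each thread the ordering is fixed (dough is always mixed before it is rolled out), but across threads only a partial ordering exists, which is exactly what makes the activities asynchronous.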
The definition of an asynchronous activity leads to a very simple definition of concurrency: Concurrency is defined as the presence of two or more asynchronous activities.
When asynchronous activities are present in a program, it is possible (but not necessary) for the steps for the two activities to interleave. As we will see in Chapter 2, the number of different ways they can interleave can be quite large, and the results can be quite unexpected. However, note that from the definition of asynchronous activities the two activities do not have to run at the same time; they simply have to be able to run at the same time. This is a useful distinction, because the problems that will be encountered in concurrency occur not because the activities execute at the same time but because they can interleave their executions. It is also useful because if a program allows activities to interleave, it must protect against the ill effects of that interleaving whether it occurs or not. As will be seen, this means that methods that might be used concurrently must be synchronized even if the vast majority of the time the use of the synchronized statement provides no benefit.
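As a foretaste of that point, consider a counter shared between threads (a minimal invented example, not from the text). Its increment method must be synchronized on every run, even though on many runs no interleaving actually occurs:

```java
public class SharedCounter {
    private int count = 0;

    // An unprotected increment is really three steps (read, add, write), so
    // two interleaved increments can lose an update; synchronized makes the
    // whole sequence atomic.
    public synchronized void increment() {
        count = count + 1;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter c = new SharedCounter();
        Runnable work = () -> { for (int i = 0; i < 100000; i++) c.increment(); };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // With synchronized this is always 200000; without it, updates can
        // be lost whenever the two threads happen to interleave.
        System.out.println(c.get());
    }
}
```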
The i...

Table of contents

  1. Cover Page
  2. Other Auerbach Publications
  3. Title Page
  4. Copyright Page
  5. Preface
  6. Chapter 1: Introduction to Concurrent Programming and Components
  7. Chapter 2: Threads and Program Contexts
  8. Chapter 3: Designing and Implementing Concurrent Programs with State Diagrams
  9. Chapter 4: Identifiers, Variables, Objects, and Collection Classes
  10. Chapter 5: Programming to an Interface
  11. Chapter 6: Exceptions in Java
  12. Chapter 7: Implementing an Animator Component Using the Java Event Model
  13. Chapter 8: Cooperative Synchronization
  14. Chapter 9: Combining Concurrent Techniques
  15. Chapter 10: Organizing the Problem for Reuse: Reuse of Utility Classes
  16. Chapter 11: Object-Oriented Design
  17. Chapter 12: Program Management in Java
  18. Chapter 13: Distributed Programming Using RMI
  19. Appendix A: Key Words
  20. References