An Introduction to Self-adaptive Systems
A Contemporary Software Engineering Perspective

About This Book

A concise and practical introduction to the foundations and engineering principles of self-adaptation

Though it has recently gained significant momentum, the topic of self-adaptation remains largely under-addressed in academic and technical literature. This book changes that. Using a systematic and holistic approach, An Introduction to Self-adaptive Systems: A Contemporary Software Engineering Perspective provides readers with an accessible set of basic principles, engineering foundations, and applications of self-adaptation in software-intensive systems.

It places self-adaptation in the context of techniques like uncertainty management, feedback control, online reasoning, and machine learning while acknowledging the growing consensus in the software engineering community that self-adaptation will be a crucial enabling feature in tackling the challenges of new, emerging, and future systems.

The author combines cutting-edge technical research with basic principles and real-world insights to create a practical and strategically effective guide to self-adaptation. He includes features such as:

  • An analysis of the foundational engineering principles and applications of self-adaptation in different domains, including the Internet-of-Things, cloud computing, and cyber-physical systems
  • End-of-chapter exercises at four different levels of complexity and difficulty
  • An accompanying author-hosted website with slides, selected exercises and solutions, models, and code

Perfect for researchers, students, teachers, industry leaders, and practitioners in fields that directly or peripherally involve software engineering, as well as for anyone in academia teaching or taking a course on self-adaptation, this book belongs on the shelf of anyone with an interest in the future of software and its engineering.

An Introduction to Self-adaptive Systems by Danny Weyns is available in PDF and ePUB format, in the category Technology & Engineering, Microelectronics.


1 Basic Principles of Self‐Adaptation and Conceptual Model

Modern software‐intensive systems are expected to operate under uncertain conditions, without interruption. Possible causes of uncertainties include changes in the operational environment, dynamics in the availability of resources, and variations of user goals. Traditionally, it is the task of system operators to deal with such uncertainties. However, such management tasks can be complex, error‐prone, and expensive. The aim of self‐adaptation is to let the system collect additional data about the uncertainties during operation in order to manage itself based on high‐level goals. The system uses the additional data to resolve uncertainties and based on its goals re‐configures or adjusts itself to satisfy the changing conditions.
Consider as an example a simple service‐based health assistance system as shown in Figure 1.1. The system takes samples of vital parameters of patients; it also enables patients to invoke a panic button in case of an emergency. The parameters are analyzed by a medical service that may invoke additional services to take actions when needed; for instance, a drug service may need to notify a local pharmacy to deliver new medication to a patient. Each service type can be realized by one of multiple service instances provided by third‐party service providers. These service instances are characterized by different quality properties, such as failure rate and cost. Typical examples of uncertainties in this system are the patterns with which particular paths in the workflow are invoked; these patterns depend on the health conditions of the users and on their behavior. Other uncertainties are the available service instances, their actual failure rates, and the costs of using them. These parameters may change over time, for instance due to changing workloads or unexpected network failures.
Figure 1.1 Architecture of a simple service‐based health assistance system
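To make the selection among service instances concrete, here is a minimal sketch in Python; the provider names, failure rates, costs, and threshold are illustrative values, not taken from the book:

```python
# Illustrative sketch: each service type can be realized by one of several
# third-party instances, each characterized by a failure rate and a cost.
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    provider: str
    failure_rate: float  # probability that an invocation fails
    cost: float          # cost per invocation

# Hypothetical instances of the medical-analysis service.
medical_service_instances = [
    ServiceInstance("provider-A", failure_rate=0.02, cost=5.0),
    ServiceInstance("provider-B", failure_rate=0.01, cost=9.0),
    ServiceInstance("provider-C", failure_rate=0.05, cost=2.0),
]

def select_instance(instances, max_failure_rate):
    """Pick the cheapest instance whose failure rate meets the threshold."""
    candidates = [i for i in instances if i.failure_rate <= max_failure_rate]
    return min(candidates, key=lambda i: i.cost) if candidates else None

choice = select_instance(medical_service_instances, max_failure_rate=0.03)
print(choice.provider)  # provider-A: the cheapest that satisfies the threshold
```

The policy shown, picking the cheapest instance that satisfies a failure-rate threshold, is just one simple way of trading off the two quality properties mentioned above.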
Anticipating such uncertainties during system development, or letting system operators deal with them during operation, is often difficult, inefficient, or too costly. Moreover, since many software‐intensive systems today need to be operational 24/7, the uncertainties need to be resolved at runtime, when the missing knowledge becomes available. Self‐adaptation is about how a system can mitigate such uncertainties autonomously or with minimal human intervention.
The basic idea of self‐adaptation is to let the system collect new data (that was missing before deployment) during operation when it becomes available. The system uses the additional data to resolve uncertainties, to reason about itself, and based on its goals to reconfigure or adjust itself to maintain its quality requirements or, if necessary, to degrade gracefully.
In this chapter, we explain what a self‐adaptive system is. We define two basic principles that determine the essential characteristics of self‐adaptation. These principles allow us to define the boundaries of what we mean by a self‐adaptive system in this book, and to contrast self‐adaptation with other approaches that deal with changing conditions during operation. From the two principles, we derive a conceptual model of a self‐adaptive system that defines the basic elements of such a system. The conceptual model provides a basic vocabulary for the remainder of this book.

LEARNING OUTCOMES

  • To explain the basic principles of self‐adaptation.
  • To understand how self‐adaptation relates to other adaptation approaches.
  • To describe the conceptual model of a self‐adaptive system.
  • To explain and illustrate the basic concepts of a self‐adaptive system.
  • To apply the conceptual model to a concrete self‐adaptive application.

1.1 Principles of Self‐Adaptation

There is no general agreement on a definition of the notion of self‐adaptation. However, there are two common interpretations of what constitutes a self‐adaptive system.
The first interpretation considers a self‐adaptive system as a system that is able to adjust its behavior in response to the perception of changes in the environment and the system itself. The self prefix indicates that the system decides autonomously (i.e. without or with minimal human intervention) how to adapt to accommodate changes in its context and environment. Furthermore, a prevalent aspect of this first interpretation is the presence of uncertainty in the environment or the domain in which the software is deployed. To deal with these uncertainties, the self‐adaptive system performs tasks that are traditionally done by operators. Hence, the first interpretation takes the stance of the external observer and looks at a self‐adaptive system as a black box. Self‐adaptation is considered as an observable property of a system that enables it to handle changes in external conditions, availability of resources, workloads, demands, and failures and threats.
The second interpretation contrasts traditional “internal” mechanisms that enable a system to deal with unexpected or unwanted events, such as exceptions in programming languages and fault‐tolerant protocols, with “external” mechanisms that are realized by means of a closed feedback loop that monitors and adapts the system behavior at runtime. This interpretation emphasizes a “disciplined split” between two distinct parts of a self‐adaptive system: one part that deals with the domain concerns and another part that deals with the adaptation concerns. Domain concerns relate to the goals of the users for which the system is built; adaptation concerns relate to the system itself, i.e. the way the system realizes the user goals under changing conditions. The second interpretation takes the stance of the engineer of the system and looks at self‐adaptation from the point of view of how the system is conceived.
Hence, we introduce two complementary basic principles that determine what a self‐adaptive system is:
  1. External principle: A self‐adaptive system is a system that can handle changes and uncertainties in its environment, the system itself, and its goals autonomously (i.e. without or with minimal required human intervention).
  2. Internal principle: A self‐adaptive system comprises two distinct parts: the first part interacts with the environment and is responsible for the domain concerns – i.e. the concerns of users for which the system is built; the second part consists of a feedback loop that interacts with the first part (and monitors its environment) and is responsible for the adaptation concerns – i.e. concerns about the domain concerns.
Let us illustrate how the two principles of self‐adaptation apply to the service‐based health assistance system. Self‐adaptation would enable the system to deal with dynamics in the types of services that are invoked by the system as well as variations in the failure rates and costs of particular service instances. Such uncertainties may be hard to anticipate before the system is deployed (external principle). To that end, the service‐based system could be enhanced with a feedback loop. This feedback loop tracks the paths of services that are invoked in the workflow, as well as the failure rates of service instances and the costs of invoking service instances that are provided by the service providers. Taking this data into account, the feedback loop adapts the selection of service instances by the workflow engine such that a set of adaptation concerns is achieved. For instance, services are selected that keep the average failure rate below a required threshold, while the cost of using the health assistance system is minimized (internal principle).
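The split between the two parts can be sketched as follows. This is a minimal illustration under assumed class names, providers, and numbers, not the book's reference architecture: the managed part handles the domain concern (invoking a service), while a separate feedback loop handles the adaptation concern (keeping the failure rate below a threshold at low cost).

```python
# Illustrative sketch of the internal principle: domain logic and adaptation
# logic live in two distinct parts, coupled only through a narrow interface.

class ManagedSystem:
    """Domain part: invokes the currently selected service instance."""
    def __init__(self, instance):
        self.instance = instance

    def set_instance(self, instance):
        self.instance = instance

class FeedbackLoop:
    """Adaptation part: monitors observed quality and adapts the selection."""
    def __init__(self, managed, instances, max_failure_rate):
        self.managed = managed
        self.instances = instances
        self.max_failure_rate = max_failure_rate

    def run_cycle(self, observed_failure_rates):
        # Monitor: fold the latest observed failure rate into the knowledge.
        for inst in self.instances:
            if inst["provider"] in observed_failure_rates:
                inst["failure_rate"] = observed_failure_rates[inst["provider"]]
        # Analyze and plan: cheapest instance that satisfies the adaptation goal.
        ok = [i for i in self.instances if i["failure_rate"] <= self.max_failure_rate]
        if ok:
            # Execute: reconfigure the managed system.
            self.managed.set_instance(min(ok, key=lambda i: i["cost"]))

# Usage with illustrative data: provider B's observed failure rate spikes,
# and the next cycle of the feedback loop switches the selection to A.
instances = [
    {"provider": "A", "failure_rate": 0.01, "cost": 9.0},
    {"provider": "B", "failure_rate": 0.02, "cost": 4.0},
]
managed = ManagedSystem(instances[1])  # start on the cheap provider B
loop = FeedbackLoop(managed, instances, max_failure_rate=0.03)
loop.run_cycle({"B": 0.08})
print(managed.instance["provider"])  # prints A
```

Note that the domain part knows nothing about failure-rate thresholds or costs; all reasoning about the adaptation concern is confined to the feedback loop, which is exactly the disciplined split the internal principle calls for.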

1.2 Other Adaptation Approaches

The ability of a software‐intensive system to adapt at runtime in order to achieve its goals under changing conditions is not exclusive to self‐adaptation; it can also be realized in other ways.
The field of autonomous systems has a long tradition of studying systems that can change their behavior during operation in response to events that may not have been anticipated fully. A central idea of autonomous systems is to mimic human (or animal) behavior, which has been a source of inspiration for a very long time. The area of cybernetics, founded by Norbert Wiener at MIT in the mid‐twentieth century, led to the development of various types of machines that exhibited seemingly “intelligent” behavior similar to biological systems. Wiener's work contributed to the foundations of various fields, including feedback control, automation, and robotics. The interest in autonomous systems has expanded significantly in recent years, with high‐profile application domains such as autonomous vehicles. While these applications have enormous potential, their successes so far have also been accompanied by some dramatic failures, such as the accidents caused by first‐generation autonomous cars. The consequences of such failures demonstrate the real technical difficulties associated with realizing truly autonomous systems.
An important sub‐field of autonomous systems is multi‐agent systems, which studies the coordination of autonomous behavior of agents to solve problems that go beyond the capabilities of single agents. This study involves architectures of autonomous agents, communication and coordination mechanisms, and supporting infrastructure. An important aspect is the representation of knowledge and its use to coordinate autonomous behavior of agents. Self‐organizing systems emphasize decentralized control. In a self‐organizing system, simple reactive agents apply local rules to adapt their interactions with other agents in response to changing conditions, in order to cooperatively realize the system goals. In such systems, the global macroscopic behavior emerges from the local interactions of the agents. However, emergent behavior can also appear as an unwanted side effect, for example in the form of oscillations. Designing decentralized systems that exhibit the required global behavior while avoiding unwanted emergent phenomena remains a major challenge.
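As a minimal illustration of emergence from local rules (not an example from the book), consider agents arranged on a ring that each apply a single local rule, averaging their value with their immediate neighbors; a global consensus emerges without any central controller:

```python
# Illustrative sketch of self-organization: each agent only sees its two ring
# neighbors, yet repeated application of the local averaging rule drives all
# agents toward a common value -- a global property no agent computes directly.

def step(values):
    """One round: every agent replaces its value by the average of itself
    and its two ring neighbors."""
    n = len(values)
    return [(values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3
            for i in range(n)]

values = [0.0, 10.0, 2.0, 8.0]  # initial local values of four agents
for _ in range(50):
    values = step(values)
print(values)  # all agents have converged close to the mean, 5.0
```

Whether the emergent global behavior is the desired one or an unwanted side effect, such as oscillations, depends entirely on the local rules the agents apply.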
Context‐awareness is another traditional field that is rel...

Table of contents

  1. Cover
  2. Table of Contents
  3. Title Page
  4. Copyright
  5. Dedication
  6. Foreword
  7. Acknowledgments
  8. Acronyms
  9. Introduction
  10. 1 Basic Principles of Self‐Adaptation and Conceptual Model
  11. 2 Engineering Self‐Adaptive Systems: A Short Tour in Seven Waves
  12. 3 Internet‐of‐Things Application
  13. 4 Wave I: Automating Tasks
  14. 5 Wave II: Architecture‐based Adaptation
  15. 6 Wave III: Runtime Models
  16. 7 Wave IV: Requirements‐driven Adaptation
  17. 8 Wave V: Guarantees Under Uncertainties
  18. 9 Wave VI: Control‐based Software Adaptation
  19. 10 Wave VII: Learning from Experience
  20. 11 Maturity of the Field and Open Challenges
  21. Bibliography
  22. Index
  23. End User License Agreement