CONTENTS
1.1 Improving Manageability
1.2 The Road to Autonomic Computing
1.3 Defining an Autonomic Environment
1.4 Applying Autonomic Computing to IT Service Management
1.4.1 Relating Autonomic Computing to Grid, Service-Oriented Architecture, and Virtualization Technologies
1.4.1.1 Grid Computing
1.4.1.2 Service-Oriented Architecture (SOA)
1.4.1.3 Virtualization
1.5 Current State of Autonomic Computing
1.5.1 Autonomic Computing Architecture
1.5.2 Standards
1.5.3 Reference Implementations
1.5.4 Research Opportunities
1.6 Conclusion
"Civilization advances by extending the number of important operations which we can perform without thinking about them." Said by mathematician and philosopher Alfred North Whitehead almost a century ago, this statement today embodies both the influence and importance of computer technology.
In the past two decades alone, the information technology (IT) industry has been a driving force of progress. Through the use of distributed networks, Web-based services, handheld devices, and cellular phones, companies of all sizes and across all industries are delivering sophisticated services that fundamentally change the tenor of daily life, from how we shop to how we bank to how we communicate. In the process, business productivity has improved dramatically. Consider that call centers can now respond to customer inquiries in seconds. Banks can approve mortgage loans in minutes. Phone companies can activate new phone service in just one hour.
Yet, while business productivity is soaring, these advances are creating significant management challenges for IT staffs. The sophistication of services has inspired a new breed of composite applications that span multiple resources (Web servers, application servers, integration middleware, legacy systems) and thus become increasingly difficult to manage. At the same time, escalating demand, growing volumes of data, and multi-national services are driving the proliferation of technologies and platforms, and creating IT environments that require tens of thousands of servers, millions of lines of code, petabytes of storage, multitudes of database subsystems, and intricate global networks composed of millions of components.
With physically more resources to manage and increasingly elaborate interdependencies among the various IT components, it is becoming more difficult for IT staff to deploy, manage, and maintain IT infrastructures. The implications are far-reaching, affecting operational costs, organizational flexibility, staff productivity, service availability, and business security. In fact, up to 80 percent of an average company's IT budget is spent on maintaining existing applications.1 And an increasing number of companies today report that their IT staffs spend much of their time locating, isolating, and repairing problems.
The rapid pace of change makes these problems even more acute: unpredictable demand; growing corporate governance, prompted by both regulatory requirements and an increasingly litigious world; and an escalating number of access points (cellular phones, PDAs, PCs). In an IBM study of 456 organizations worldwide, only 13 percent of CEOs surveyed believed that their organizations could be rated as "very responsive" to change.2
With so much time required to manage core business processes, IT staffs have little time left to identify and address areas of potential growth. To enable companies to focus on the application of technology to new business opportunities and innovation, the IT industry must address this complexity.
That's where autonomic computing comes in.
1.1 Improving Manageability
The term autonomic computing was coined in 2001 by Paul Horn, senior vice president of research for IBM. According to Horn, the industry's focus on creating smaller, less expensive, and more powerful systems was fueling the problem of complexity. Left unchecked, he said, this complexity would ultimately prevent companies from "moving to the next era of computing" and, therefore, the next era of business.3 In response, he issued a "Grand Challenge" to the IT industry to focus on the development of autonomic systems that could manage themselves.
In much the same way as the autonomic nervous system regulates and protects our bodies without our conscious involvement, Horn envisioned autonomic IT infrastructures that could sense, analyze, and respond to situations automatically, alleviating the need for IT professionals to perform tedious systems management tasks and enabling them to focus on applying IT to solve business problems. For example, rather than having to worry about what database parameters were needed to optimize data delivery for a customer service application, IT administrators could instead spend their time extending that application to provide customers even greater conveniences.
For years, science fiction writers have imagined a world in which androids and sentient systems could make decisions independent of human input. This represents neither the spirit nor the goal of autonomic computing. Although artificial intelligence plays an important role in the field of autonomic computing (as some of the research highlighted later in this chapter shows), autonomic computing isn't focused on eliminating the human from the equation. Its goal is to help eliminate mundane, repetitive IT tasks so that IT professionals can apply technology to drive business objectives and set policies that guide decision making. At its core, the field of autonomic computing works to balance people, processes, and technology effectively.
When first introduced, the concept of autonomic computing was sometimes confused with the notion of automation. However, autonomic computing goes beyond traditional automation, which leverages complex "if/then scripts" written by programmers to dictate specific behavior, such as restarting a server or sending alerts to IT administrators. Rather, autonomic computing focuses on enabling systems to adjust and adapt automatically based on business policies. It addresses the process of how IT infrastructures are designed, managed, and maintained. And it calls for standardizing, integrating, and managing the communication among heterogeneous IT components to help simplify processes. In a self-managing autonomic environment, a system can sense a rapidly growing volume of customer requests, analyze the infrastructure to determine how best to process these requests, and then make the necessary changes, from adjusting database parameters to provisioning servers to rerouting network traffic, so that all requests are handled in a timely manner in accordance with established business policies.
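The sense-analyze-act cycle described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not any vendor's implementation: an autonomic manager monitors pending requests, analyzes them against a business policy (a maximum queue length per server), plans how much capacity is needed, and executes the change by provisioning servers (stubbed here).

```python
# Minimal sketch of an autonomic control loop. All names and the scaling
# policy are illustrative assumptions, not a reference implementation.

class AutonomicManager:
    def __init__(self, max_queue_per_server, servers):
        # Business policy: keep pending requests per server under a limit.
        self.max_queue_per_server = max_queue_per_server
        self.servers = servers

    def monitor(self, pending_requests):
        # Sense: collect the current state of the managed resource.
        return {"pending": pending_requests, "servers": self.servers}

    def analyze(self, state):
        # Analyze: does the observed load violate the policy?
        return state["pending"] / state["servers"] > self.max_queue_per_server

    def plan(self, state):
        # Plan: how many extra servers would satisfy the policy?
        needed = -(-state["pending"] // self.max_queue_per_server)  # ceiling
        return max(needed - state["servers"], 0)

    def execute(self, additional_servers):
        # Act: provision the extra capacity (stubbed for illustration).
        self.servers += additional_servers

    def control_loop(self, pending_requests):
        state = self.monitor(pending_requests)
        if self.analyze(state):
            self.execute(self.plan(state))
        return self.servers


manager = AutonomicManager(max_queue_per_server=100, servers=2)
print(manager.control_loop(250))  # 125 requests/server exceeds policy -> 3
print(manager.control_loop(250))  # ~83 requests/server is within policy -> 3
```

The point of the sketch is that the behavior is driven by a declared policy (the queue threshold) rather than by a hand-written script for each specific failure or load condition; the same loop handles any request volume.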
Although the target of autonomic computing is improving the manageability of IT processes, it has significant implications for a company's ability to transition to on demand business, where business processes can be rapidly adapted and adjusted as customer demand or market requirements dictate. Without autonomic computing, it would be nearly impossible for the IT infrastructure to provide the level of flexibility, resiliency, and responsiveness necessary to allow a company to shift strategies to realize on demand goals.
1.2 The Road to Autonomic Computing
Autonomic computing represents a fundamental shift in managing IT. Beginning in the 1970s, reliability, availability, and serviceability (RAS) technology worked to help improve performance and availability at the resource level. This technology, primarily developed for mainframe computers, created redundancies within the hardware systems themselves and used embedded scripts that could bypass failing components, enabling IT staffs to easily install spare parts while the system was running, or even detect and make use of new processors automatically. By the late 1980s, as distributed systems began taking hold, IT vendors created management solutions that centralized and automated monitoring and management at the systems level. Examples of such management systems include the use of a single console for managing a particular set of resources or business services, and the automation of problem recovery using scripts for well-known and well-defined problems.
Although these approaches helped streamline many discrete IT activities, they were not based on interoperable standards and hence lacked the ability to integrate business requirements in a holistic manner across all resources in the infrastructure and across IT processes themselves. Moving to this next era of IT management requires more than the work of a single IT ...