Deploying and Managing a Cloud Infrastructure
Real-World Skills for the CompTIA Cloud+ Certification and Beyond: Exam CV0-001

About This Book

Learn in-demand cloud computing skills from industry experts

Deploying and Managing a Cloud Infrastructure is an excellent resource for IT professionals seeking to tap into the demand for cloud administrators. This book helps prepare candidates for the CompTIA Cloud+ Certification (CV0-001) cloud computing certification exam. Designed for IT professionals with 2-3 years of networking experience, this certification provides validation of your cloud infrastructure knowledge.

With over 30 years of combined experience in cloud computing, the author team provides the latest expert perspectives on enterprise-level mobile computing, and covers the most essential topics for building and maintaining cloud-based systems, including:

  • Understanding basic cloud-related computing concepts, terminology, and characteristics
  • Identifying cloud delivery solutions and deploying new infrastructure
  • Managing cloud technologies, services, and networks
  • Monitoring hardware and software performance

Featuring real-world examples and interactive exercises, Deploying and Managing a Cloud Infrastructure delivers practical knowledge you can apply immediately. In addition, you get access to a full set of electronic study tools, including:

  • Interactive Test Environment
  • Electronic Flashcards
  • Glossary of Key Terms

Now is the time to learn the cloud computing skills you need to take that next step in your IT career.

Deploying and Managing a Cloud Infrastructure by Abdul Salam, Zafar Gilani, and Salman Ul Haq is available in PDF and ePUB format.

Information

Publisher
Sybex
Year
2015
ISBN
9781118875582
Edition
1

Chapter 1
Understanding Cloud Characteristics

Topics Covered in This Chapter Include:
  • Basic terms and characteristics
    • Elasticity
    • On-demand/self-service
    • Pay-as-you-grow
    • Chargeback
    • Ubiquitous access
    • Metering and resource pooling
    • Multitenancy
    • Cloud bursting
    • Rapid deployment
    • Automation
  • Object storage concepts
    • ObjectID
    • Metadata
    • Extended metadata
    • Data/blob
    • Policies
    • Replication
    • Access control
Thomas J. Watson, the founder of IBM, remarked in the early 1940s, "I think there is a world market for about five computers."
Even though that comment referred to a new line of "scientific" computers that IBM built and wanted to sell throughout the United States, in the context of the cloud, the idea behind it still applies. If you think about it, most of the world's critical business infrastructure relies on a handful of massive, really massive, data centers spread across the world. Cloud computing has come a long way, from early mainframes to today's massive server farms powering all kinds of applications.
This chapter starts off with an overview of some of the key concepts in cloud computing. Broadly, the standard features of a cloud are categorized into compute, storage, and networking. Toward the end of the chapter, there's a dedicated section on elastic, object-based storage and how it has enabled enterprises to store and process big data on the cloud.

Basic Terms and Characteristics

Before we begin, it's important to understand the basic terms that will be used throughout the book and are fundamental to cloud computing. The following sections will touch upon these terms to give a feel for what's to follow in later chapters.

Elasticity

Natural clouds are indeed elastic, expanding and contracting based on the force of the winds carrying them. The cloud is similarly elastic, expanding and shrinking based on resource usage and cloud tenant resource demands. The physical resources (computing, storage, networking, etc.) deployed within the data center or across data centers and bundled as a single cloud usually do not change that fast. This elastic nature, therefore, is something that is built into the cloud at the software stack level, not the hardware.
The classic promise of the cloud is to make compute resources available on demand, which means that theoretically, a cloud should be able to scale as a business grows and shrink as the demand diminishes. Consider here, for example, Amazon.com during Black Friday. There's a spike in inbound traffic, which translates into more memory consumption, increased network density, and increased compute resource utilization. If Amazon.com had, let's say, 5 servers and each server could handle up to 100 users at a time, the whole deployment would have a peak service capacity of 500 users. During the holiday season, there's an influx of 1,000 users, which is double the capacity of what the current deployment can handle. If Amazon were smart, it would have set up 5 additional (or maybe 10) servers within its data center in anticipation of the holiday season spike. This would mean physically provisioning 5 or 10 machines, setting them up, and connecting them with the current deployment of 5 servers. Once the season is over and the traffic is back to normal, Amazon doesn't really need those additional 5 to 10 servers it brought in before the season. So either they stay within the data center sitting idle and incurring additional cost or they can be rented to someone else.
What we just described is what a typical deployment looked like pre-cloud. There was unnecessary physical interaction and manual provisioning of physical resources. This is inefficient and something that cannot be linearly scaled up. Imagine doing this with millions of users and hundreds or even thousands of servers. Needless to say, it would be a mess.
This manual provisioning is not only inefficient, it's also financially infeasible for startups because it requires investing significant capital in setting up or co-locating to a data center and dedicated personnel who can manually handle the provisioning.
This is what the cloud has replaced. It has enabled small, medium, and large teams and enterprises to provision and then decommission compute, network, and memory resources, all of which are physical, in an automated way. You can now scale up your resources just in time to serve a traffic spike and then wind down the additional provisioned resources, effectively paying only for the time that your application served the spike with increased resources.
This automated resource allocation and deallocation is what makes a cloud elastic.
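To make the arithmetic behind the Amazon.com example concrete, here is a minimal sketch in Python of the scaling decision an automated system makes. The function names and the 100-users-per-server figure are illustrative, not part of any vendor's API:

```python
import math

def servers_needed(active_users: int, users_per_server: int) -> int:
    """Return the number of servers required to serve the current load."""
    return max(1, math.ceil(active_users / users_per_server))

def scaling_action(current_servers: int, active_users: int,
                   users_per_server: int = 100) -> int:
    """Positive result: servers to add; negative: servers to release; zero: no change."""
    return servers_needed(active_users, users_per_server) - current_servers

# The Black Friday scenario from the text: 5 servers at 100 users each,
# and a spike to 1,000 users, means 5 more servers must be provisioned.
print(scaling_action(5, 1000))   # 5
# After the season, 400 users on 10 servers means 6 can be released.
print(scaling_action(10, 400))   # -6
```

An elastic cloud runs this kind of check continuously and acts on the result automatically, which is exactly the loop that manual pre-cloud provisioning could not keep up with.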

On-Demand Self-service/JIT

On-demand self-service can be thought of as the endpoint of an elastic cloud, or the application programming interface (API) in strict programming terminology. In other words, elasticity is the intrinsic characteristic that manifests itself to the end user or a cloud tenant as on-demand self-service, or just in time (JIT).
Every cloud vendor offers a self-service portal where cloud tenants can easily provision new servers, configure existing servers, and deallocate extra resources. This process can be done manually by the user, or it can also be automated, depending upon the business case.
Let's look again at our Amazon.com example to understand how JIT fits in that scenario and why it's one of the primary characteristics of a cloud. When the devops (development and operations) personnel or team figures out that demand would surge during the holiday season, they can simply provision 10 more servers during that time, either through a precooked shell script or by using the cloud provider's web portal. Once the extra allocated resources have been consumed and are no longer needed, they can be deallocated through another custom shell script or through the portal. Every organization and team will have its own way of doing this.
With automated scripting, almost all major cloud vendors now support resource provisioning based on JavaScript Object Notation (JSON). Here is an example JSON object that can be sent in an HTTPS request to spin up new servers:
{
  "request-type": "provision-new",
  "template-name": "my-centos-template",
  "total-instances": "10"
}
A real request can and will of course be more comprehensive. We wrote this one just to give you an idea of how simple it has become to spin up and shut down servers.
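As a sketch of how such a request might be assembled and sent from code, here is a small Python example. The endpoint URL and bearer token are hypothetical, and the payload mirrors the JSON shown above rather than any particular vendor's schema; the request object is built but not actually transmitted:

```python
import json
import urllib.request

def build_provision_request(template: str, count: int) -> bytes:
    """Serialize a provisioning payload like the one shown above."""
    payload = {
        "request-type": "provision-new",
        "template-name": template,
        "total-instances": str(count),
    }
    return json.dumps(payload).encode("utf-8")

def send_request(body: bytes, endpoint: str, token: str) -> urllib.request.Request:
    """Wrap the payload in an authenticated HTTPS POST (constructed, not sent)."""
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="POST",
    )

body = build_provision_request("my-centos-template", 10)
req = send_request(body, "https://api.example-cloud.test/v1/servers", "TOKEN")
print(json.loads(body)["total-instances"])  # 10
```

In practice this is the kind of logic a "precooked" provisioning script contains, invoked either on a schedule ahead of an anticipated spike or by an autoscaling trigger.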

Templating

Templating is a nice way to spin up preconfigured servers that hit the ground running. Let's consider a real example: Suppose you are part of a project that provides smart video analytics to advertisers. It is distributed as a web app running on the cloud. The app spans several languages, frameworks, backend processing engines, and protocols. Currently, it runs atop Ubuntu Linux servers. Web apps that process and serve videos can typically clog servers pretty quickly. The web app that the devops team has written has a "soft installation," which bundles the required frameworks and libraries into a neat deployment script.
Now, when you anticipate (or predict) a traffic spike or even an abnormal increase in app consumption, you have to spin up new instances and join them with the whole deployment network running several servers for DB, processing, CDN, front end, and so on. You have cooked a nice "image" of your deployment server, which means that whenever you have to spin up a new instance to meet increased user demand, you simply provision a new instance and provide it with this ready-to-run template. Within 20 to 30 seconds, you have a new member of this small server family ready to serve users. This process is automated through your custom provisioning script, which handles all the details, like specifying the template, setting the right security for the instance, and allocating the right server size based on the memory, compute resources, storage, and network capacity of the available server sizes.
Once the request is sent, it is not queued; instead it's served in real time, which means that the physical (actually virtualized, but more on that in Chapter 2, "Terms Loosely Affiliated with Cloud Computing") compute resources are provisioned from a pool of seemingly infinite servers. For a typical deployment, all of this takes no more than 2 minutes to spin up 100 or more servers. This is unprecedented in computing and something that played a key role in the accelerated adoption of the cloud.
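The template-driven flow just described can be sketched as a toy provisioning script. Every name below (the template, size, and security group strings) is illustrative; a real script would pass equivalent details to a vendor SDK or CLI:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    template: str        # the prebaked server image (an "AMI" on AWS)
    size: str            # server size, chosen from the vendor's catalog
    security_group: str  # firewall rules applied at launch

def provision(template: str, size: str, security_group: str,
              count: int) -> list:
    """Spin up `count` identical instances from one ready-to-run template."""
    return [Instance(template, size, security_group) for _ in range(count)]

# Join ten preconfigured web servers to the running deployment.
fleet = provision("video-webapp-image", "medium", "web-tier", 10)
print(len(fleet))  # 10
```

The point of the template is that all ten instances come up identical and already configured, so the script only has to decide how many to launch and with what security and sizing.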
Exercise 1.1: JIT Provisioning on AWS
As a real-world example, let's walk through the process of provisioning a Linux server on Amazon Web Services (AWS), shown in the following screenshot. This assumes that you have already signed up for an AWS account and logged into the dashboard:
[Screenshot: AWS Management Console dashboard]
  1. Once you have logged into the AWS dashboard, select EC2 and then Launch Instance.
    [Screenshot: EC2 Launch Instance screen]
    It's a seven-step process to configure and launch a single instance or multiple instances, although if you have your template prepared already, it's as simple as a one-step click-and-launch operation.
    Back to launching our new instance from scratch.
  2. First you will have to specify the Amazon Machine Image (AMI), which is Amazon's version of a template used to spin up a preconfigured, customized server in the cloud. In AWS, you can select a vanilla OS or your own template (AMI), or you can select from hundreds of community AMIs available on AWS.
    It is, however, recommended not to run mission-critical applications on top of shared community AMIs until you are certain about the security practices put in place.
  3. Next, select the size of the instance.
    [Screenshot: choosing an instance type]
    AWS offers a fixed set of instance sizes with predefined compute resources.
  4. You can get started with selecting the hardware and networking configurations you need and then add more resources on top of that while configuring the instance.
[Screenshot: configuring instance details]
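The same launch can also be driven from code instead of the console. Here is a minimal sketch using the boto3 AWS SDK's run_instances call; the AMI ID is a placeholder (not a real image), and the actual API call is kept inside a function so the example runs without AWS credentials:

```python
def launch_params(ami_id: str, instance_type: str, count: int) -> dict:
    """Build the keyword arguments for EC2's RunInstances API."""
    return {
        "ImageId": ami_id,              # the AMI chosen in step 2
        "InstanceType": instance_type,  # the size chosen in step 3
        "MinCount": count,
        "MaxCount": count,
    }

def launch(ami_id: str, instance_type: str = "t2.micro", count: int = 1):
    """Launch EC2 instances; requires boto3 and AWS credentials to actually run."""
    import boto3  # AWS SDK for Python
    ec2 = boto3.client("ec2")
    return ec2.run_instances(**launch_params(ami_id, instance_type, count))

# Inspect the parameters without calling AWS; "ami-XXXXXXXX" is a placeholder.
params = launch_params("ami-XXXXXXXX", "t2.micro", 2)
print(params["MaxCount"])  # 2
```

This is the scripted equivalent of the console walk-through above, and it is how the "precooked" provisioning scripts mentioned earlier are typically built on AWS.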

Pay as You Grow

One of the primary "promises" of the cloud, and a significant contributor to its early adoption, was the pay-as-you-go model, which has since been slightly tweaked and renamed pay as you grow. A decade ago, startups needed substantial financial muscle to invest in an initial data center setup. This is not an easy feat. A specialized skill set is needed to properly set up, configure, and manage a server infrastructure. There were two problems: steep financial cost and unneeded complexity. Smaller s...

Table of contents

  1. Cover
  2. Titlepage
  3. Copyright
  4. Credits
  5. Dedication
  6. Acknowledgments
  7. About the Authors
  8. Introduction
  9. Chapter 1: Understanding Cloud Characteristics
  10. Chapter 2: To Grasp the Cloud—Fundamental Concepts
  11. Chapter 3: Within the Cloud: Technical Concepts of Cloud Computing
  12. Chapter 4: Cloud Management
  13. Chapter 5: Diagnosis and Performance Monitoring
  14. Chapter 6: Cloud Delivery and Hosting Models
  15. Chapter 7: Practical Cloud Knowledge: Install, Configure, and Manage
  16. Chapter 8: Hardware Management
  17. Chapter 9: Storage Provisioning and Networking
  18. Chapter 10: Testing and Deployment: Quality Is King
  19. Chapter 11: Cloud Computing Standards and Security
  20. Chapter 12: The Cloud Makes It Rain Money: The Business in Cloud Computing
  21. Chapter 13: Planning for Cloud Integration: Pitfalls and Advantages
  22. Appendix: The CompTIA Cloud+ Certification Exam
  23. Free Online Learning Environment
  24. End-User License Agreement