Hands-on Kubernetes on Azure

Use Azure Kubernetes Service to automate management, scaling, and deployment of containerized applications, 3rd Edition

  • 528 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android

About This Book

Understand the fundamentals of Kubernetes deployment on Azure with a learn-by-doing approach

Key Features

  • Get to grips with the fundamentals of containers and Kubernetes
  • Deploy containerized applications using the Kubernetes platform
  • Learn how you can scale your workloads and secure your application running in Azure Kubernetes Service

Book Description

Containers and Kubernetes facilitate cloud deployments and application development by enabling efficient versioning with improved security and portability.

With updated chapters on role-based access control, pod identity, storing secrets, and network security in AKS, this third edition begins by introducing you to containers, Kubernetes, and Azure Kubernetes Service (AKS), and guides you through deploying an AKS cluster in different ways. You will then delve into the specifics of Kubernetes by deploying a sample guestbook application on AKS and installing complex Kubernetes apps using Helm. With the help of real-world examples, you'll also get to grips with scaling your applications and clusters.

As you advance, you'll learn how to overcome common challenges in AKS and secure your applications with HTTPS. You will also learn how to secure your clusters and applications in a dedicated section on security. In the final section, you'll learn about advanced integrations, which give you the ability to create Azure databases and run serverless functions on AKS as well as the ability to integrate AKS with a continuous integration and continuous delivery (CI/CD) pipeline using GitHub Actions.

By the end of this Kubernetes book, you will be proficient in deploying containerized workloads on Microsoft Azure with minimal management overhead.

What you will learn

  • Plan, configure, and run containerized applications in production.
  • Use Docker to build applications in containers and deploy them on Kubernetes.
  • Monitor your AKS cluster, your infrastructure, and your applications using Azure Monitor.
  • Secure your cluster and applications using Azure-native security tools.
  • Connect an application to an Azure database.
  • Store your container images securely with Azure Container Registry.
  • Install complex Kubernetes applications using Helm.
  • Integrate Kubernetes with multiple Azure PaaS services, such as databases, Azure Security Center, and Functions.
  • Use GitHub Actions to perform continuous integration and continuous delivery to your cluster.

Who this book is for

If you are an aspiring DevOps professional, system administrator, developer, or site reliability engineer interested in learning how to get the most out of containers and Kubernetes, then this book is for you.


Information

Authors: Nills Franssens, Shivakumar Gopalakrishnan, Gunther Lenz
Year: 2021
ISBN: 9781801078917
Edition: 3

Section 1: The Basics

In Section 1, we will cover the basic concepts that you need to understand in order to follow the examples in this book.
We will start this section by explaining the underlying concepts, such as containers and Kubernetes. Then, we will explain how to create a Kubernetes cluster on Azure and deploy an example application.
By the time you have finished this section, you will have a foundational knowledge of containers and Kubernetes and will have a Kubernetes cluster up and running in Azure that will allow you to follow the examples in this book.
This section contains the following chapters:
  • Chapter 1, Introduction to containers and Kubernetes
  • Chapter 2, Getting started with Azure Kubernetes Service

1. Introduction to containers and Kubernetes

Kubernetes has become the leading standard in container orchestration. Since its inception in 2014, Kubernetes has gained tremendous popularity. It has been adopted by start-ups as well as major enterprises, with all major public cloud vendors offering a managed Kubernetes service.
Kubernetes builds upon the success of the Docker container revolution. Docker is both a company and the name of a technology. Docker as a technology is the most common way of creating and running software containers, called Docker containers. A container is a way of packaging software that makes it easy to run that software on any platform, ranging from your laptop to a server in a datacenter to a cluster running in the public cloud.
Although the core technology is open source, the Docker company focuses on reducing complexity for developers through a number of commercial offerings.
Kubernetes takes containers to the next level. Kubernetes is a container orchestrator. A container orchestrator is a software platform that makes it easy to run many thousands of containers on top of thousands of machines. It automates a lot of the manual tasks required to deploy, run, and scale applications. The orchestrator takes care of scheduling the right container to run on the right machine. It also takes care of health monitoring and failover, as well as scaling your deployed application.
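To make this concrete, the sketch below shows roughly what you declare to an orchestrator such as Kubernetes: a desired number of replicas of a container image. The platform then decides which machines run them, restarts them on failure, and adjusts the count when you scale. The names and image used here (web-frontend, example.azurecr.io/web:1.0) are hypothetical placeholders, not examples from this book:

```yaml
# Hypothetical Kubernetes Deployment: you declare the desired state,
# and the orchestrator schedules, heals, and scales the containers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend              # placeholder name
spec:
  replicas: 3                     # run three copies across the cluster's machines
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example.azurecr.io/web:1.0   # placeholder container image
        ports:
        - containerPort: 80
```

If a container crashes or a machine fails, Kubernetes notices the gap between the declared three replicas and what is actually running and starts replacements elsewhere.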
The container technology Docker uses and Kubernetes are both open-source software projects. Open-source software allows developers from many companies to collaborate on a single piece of software. Kubernetes itself has contributors from companies such as Microsoft, Google, Red Hat, VMware, and many others.
The three major public cloud platforms—Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP)—all offer a managed Kubernetes service. They attract a lot of interest in the market since the virtually unlimited compute power and the ease of use of these managed services make it easy to build and deploy large-scale applications.
Azure Kubernetes Service (AKS) is Azure's managed service for Kubernetes. It reduces the complexity of building and managing Kubernetes clusters. In this book, you will learn how to use AKS to run your applications. Each chapter will introduce new concepts, which you will apply through the many examples in this book.
As a user, however, it is still very useful to understand the technologies that underpin AKS. We will explore these foundations in this chapter. You will learn about Linux processes and how they are related to Docker and containers. You will see how various processes fit nicely into containers and how containers fit nicely into Kubernetes.
This chapter introduces fundamental Docker concepts so that you can begin your Kubernetes journey. This chapter also briefly introduces the basics that will help you build containers, implement clusters, perform container orchestration, and troubleshoot applications on AKS. Having cursory knowledge of what's in this chapter will demystify much of the work needed to build your authenticated, encrypted, and highly scalable applications on AKS. Over the next few chapters, you will gradually build scalable and secure applications.
The following topics will be covered in this chapter:
  • The software evolution that brought us here
  • The fundamentals of containers
  • The fundamentals of Kubernetes
  • The fundamentals of AKS
The aim of this chapter is to introduce the essentials rather than to provide a thorough information source describing Docker and Kubernetes. To begin with, we'll first take a look at how software has evolved to get us to where we are now.

The software evolution that brought us here

There are two major software development evolutions that enabled the popularity of containers and Kubernetes. One is the adoption of a microservices architectural style. Microservices allow an application to be built from a collection of small services that each serve a specific function. The other evolution that enabled containers and Kubernetes is DevOps. DevOps is a set of cultural practices that allows people, processes, and tools to build and release software faster, more frequently, and more reliably.
Although you can use both containers and Kubernetes without using either microservices or DevOps, the technologies are most widely adopted for deploying microservices using DevOps methodologies.
In this section, we'll discuss both evolutions, starting with microservices.

Microservices

Software development has drastically evolved over time. Initially, software was developed and run on a single system, typically a mainframe. A client could connect to the mainframe through a terminal, and only through that terminal. This changed when computer networks became common and the client-server programming model emerged. A client could connect remotely to a server and even run part of the application on their own system while connecting to the server to retrieve the data the application required.
The client-server programming model has evolved toward distributed systems. Distributed systems are different from the traditional client-server model as they have multiple different applications running on multiple different systems, all interconnected.
Nowadays, a microservices architecture is common when developing distributed systems. A microservices-based application consists of a group of services that work together to form the application, while the individual services themselves can be built, tested, deployed, and scaled independently of each other. The style has many benefits but also has several disadvantages.
A key part of a microservices architecture is the fact that each individual service serves one, and only one, core function: a single, bounded business function. Different services work together to form the complete application, communicating over the network, commonly through HTTP REST APIs or gRPC:
[Image: evolution from mainframe to client-server, to distributed systems, to microservices]
Figure 1.1: A standard microservices architecture
This architectural approach is commonly adopted by applications that run using containers and Kubernetes. Containers are used as the packaging format for the individual services, while Kubernetes is the orchestrator that deploys and manages the different services running together.
Before we dive into container and Kubernetes specifics, let's first explore the benefits and downsides of adopting microservices.

Advantages of running microservices

There are several advantages to running a microservices-based application. The first is the fact that each service is independent of the other services. The services are designed to be small enough (hence micro) to handle the needs of a business domain. As they are small, they can be made self-contained and independently testable, and so are independently releasable.
This leads to the benefit that each microservice is independently scalable as well. If a certain part of the application is getting more demand, that part of the application can be scaled independently from the rest of the application.
The fact that services are independently scalable also means that they are independently deployable. There are multiple deployment strategies when it comes to microservices. The most popular are rolling deployments and blue/green deployments.
With a rolling deployment, a new version of the service is deployed to only a part of the application. This new version is carefully monitored and gradually receives more traffic as long as the service remains healthy. If something goes wrong, the previous version is still running, and traffic can easily be cut back over to it.
With a blue/green deployment, you deploy the new version of the service in isolation. Once the new version of the service is deployed and tested, you cut over 100% of the production traffic to the new version. This allows for a clean transition between service versions.
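As an illustration of the first strategy, Kubernetes Deployments support rolling updates out of the box. The fields below are standard Deployment settings, but the specific values are only assumptions for this sketch, not recommendations from this book; they slot into the spec of a Deployment like the one sketched earlier:

```yaml
# Hypothetical rolling update settings: replace Pods gradually so most
# replicas of the old version keep serving traffic during the rollout.
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica may be down at a time
      maxSurge: 1         # at most one extra replica is created temporarily
```

Blue/green, by contrast, is commonly implemented by running the new version as a separate deployment and switching the traffic routing (for example, a Service's label selector) over to it once it has been verified.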
Another benefit of the microservices architecture is that each service can be written in a different programming language. This is described as polyglot—the ability to understand and use multiple languages. For example, the front-end service can be developed in a popular JavaScript framework, the back end can be developed in C#, and the machine learning algorithm can be developed in Python. This allows you to select the right language for the right service and allows developers to use the languages they are most familiar with.

Disadvantages of running microservices

There's a flip side to every coin, and the same is true for microservices. While there are multiple advantages to a microservices-based architecture, this architecture has its downsides as well.
Microservices designs and architectures require a high degree of software development maturity in order to be implemented correctly. Architects who understand the domain very well must ensure that each service is bounded and that different services are cohesive. Since services are independent of each other and versioned independently, the software contract between these different services is important to get right.
Another common issue with a microservices design is the added complexity when it comes to monitoring and troubleshooting such an application. Since different services make up a single application, and those services run on multiple servers, both logging and tracing such an application are complicated endeavors.
Linked to the disadvantages mentioned before is the fact that, in microservices, you typically need to build more fault tolerance into your application. Due to the dynamic nature of the different services in an application, faults are more likely to happen. In order to guarantee application availability, it is important to build fault tolerance into the different microservices that make up an application. Implementing patterns such as retry logic or circuit breakers is critical to avoid a single fault causing application downtime.
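Retry logic and circuit breakers live in the application code itself (or in a service mesh), but Kubernetes can add a platform-level layer of fault tolerance through health probes and automatic restarts. Here is a minimal sketch, using hypothetical endpoints and timings, of the probes you could attach to a container in a Pod or Deployment spec:

```yaml
# Hypothetical liveness and readiness probes: Kubernetes restarts the
# container when liveness fails and stops routing traffic to it while
# readiness fails.
containers:
- name: api                           # placeholder name
  image: example.azurecr.io/api:1.0   # placeholder container image
  livenessProbe:
    httpGet:
      path: /healthz                  # assumed health endpoint
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 15
  readinessProbe:
    httpGet:
      path: /ready                    # assumed readiness endpoint
      port: 8080
    periodSeconds: 5
    failureThreshold: 3
```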
In this section, you learned about microservices, their benefits, and their disadvantages. Often linked to microservices, but a separate topic, is the DevOps movement. We will explore what DevOps means in the next section.

DevOps

DevOps literally means the combination of development and operations. More specifically, DevOps is the union of people, processes, and tools to deliver software faster, more frequently, and more reliably. DevOps is more about a set of cultural practices than about any...

Table of contents

  1. Hands-on Kubernetes on Azure, Third Edition
  2. Preface
  3. Foreword
  4. Section 1: The Basics
  5. 1. Introduction to containers and Kubernetes
  6. 2. Getting started with Azure Kubernetes Service
  7. Section 2: Deploying on AKS
  8. 3. Application deployment on AKS
  9. 4. Building scalable applications
  10. 5. Handling common failures in AKS
  11. 6. Securing your application with HTTPS
  12. 7. Monitoring the AKS cluster and the application
  13. Section 3: Securing your AKS cluster and workloads
  14. 8. Role-based access control in AKS
  15. 9. Azure Active Directory pod-managed identities in AKS
  16. 10. Storing secrets in AKS
  17. 11. Network security in AKS
  18. Section 4: Integrating with Azure managed services
  19. 12. Connecting an application to an Azure database
  20. 13. Azure Security Center for Kubernetes
  21. 14. Serverless functions
  22. 15. Continuous integration and continuous deployment for AKS
  23. Index