Windows Containers for IT Pros: Transitioning Existing Applications to Containers for On-premises, Cloud, or Hybrid
Ebook · 329 pages · 3 hours


About this ebook

This book has everything you need to know about Windows Containers, from an IT pro and ops perspective.

Containers are the next big thing in IT infrastructure. More and more, we see companies relying on Kubernetes implementations to run their workloads on-premises, in the cloud, or even in hybrid deployments. IT pros and ops teams are now faced with the challenge of getting up to speed on container architecture, knowing how it differs from virtual machines (VMs), and the best practices for managing their applications in containers.

Windows Containers for IT Pros explores all of that from the IT pro's perspective. You will approach learning about containers through the lens of an author who is accustomed to deploying VMs. You will learn about the differences, parallel practices, and use cases, how to get started, and how to go deep into day-2 operations.


What You Will Learn

  • Architect and deploy Windows Containers leveraging existing skills
  • Containerize existing applications
  • Know best practices for managing resources in Windows Containers
  • Get comfortable moving containers to the cloud with Azure
  • Understand the options for using containers on Azure

Who This Book Is For 
Windows IT pros and technical professionals deploying Windows Server and server applications today, such as .NET, ASP.NET, IIS, and more. This book assumes little to no experience with scripting as readers deploy their workloads via one of the Windows UIs (Hyper-V, Server Manager, Windows Admin Center, etc.). Knowledge of VMs and infrastructure, such as clustered operating systems, is recommended but not required.


Language: English
Publisher: Apress
Release date: Feb 23, 2021
ISBN: 9781484266861
    Book preview

    Windows Containers for IT Pros - Vinicius Ramos Apolinario

    © Vinicius Ramos Apolinario 2021

    V. Ramos Apolinario, Windows Containers for IT Pros, https://doi.org/10.1007/978-1-4842-6686-1_1

    1. Introduction to containers

    Vinicius Ramos Apolinario, Newcastle, WA, USA

    Let me tell you a story:

    John is an IT admin currently working for a medium-sized company. He has been managing physical hosts and virtual machines (VMs) for a long time – even before joining this company. John has worked with Active Directory, mail servers such as Microsoft Exchange, file servers, and many other infrastructure and networking components. One thing he learned over the years is the importance of keeping servers with the right configuration and only approved software installed. This has saved him in the past in audits and performance measurements, but also in keeping servers secure. Recently, John was put on a project alongside a development team to build a new module for their enterprise resource planning (ERP) system. His role in this project? Ensure servers are up and running with high availability (HA) and the new module is running as expected. After a couple of weeks of meetings, development, and testing by the development and quality assurance (QA) teams, John was handed the first version of the new module, along with a Word document describing how to deploy it. John follows the instructions in the Word doc – how to configure the operating system (OS), folder structure, and network – and then, finally, installs the .exe file. After installing, John sees a new icon on the desktop, which he then opens – and guess what? One of the services in the application fails. Is this story familiar to you? Has this ever happened to you? If the answer is yes, this book is for you.

    Where did containers start?

    When you hear about containers today, you will hear about many positive characteristics: they start faster, they allow for a better approach to DevOps practices, they have a small footprint compared to VMs, and so on. However, the reason containers were created was to solve one fundamental problem in the IT industry – which is exactly the preceding scenario with John. In theory, applications should work the same way regardless of where they are deployed. In the preceding story, can you find out what is wrong and why the application did not work? If John followed all the steps in the document provided by the development team, why did the application fail? In the majority of cases, the answer is that some application requirement was missing. It might be that John missed a step from the document, or that the developer forgot to add a step to the document. In some cases, it might even be because the development environment had other software installed that the application depended on, and the developer never noticed. This situation is more common than it sounds.

    In a nutshell, a container is a package of an application along with all the other requirements for that application to work. When running, a container has its own isolated view of the file system and registry, isolated not only from the host but also from other containers. That means an application inside the container will always perform the same way. As you will learn in this book, containers are much more than that, but you should never forget why they were created in the first place. Fun fact: The reason this technology received the name containers is that shipping containers solve essentially the same problem. In the past, shipping companies found themselves in a difficult situation when they started to get orders for all kinds of goods to be shipped across the globe. To solve that problem, the industry created a standard shipping container size into which any type of goods can be loaded on one side and unloaded on the other – with one standard for the shipping container and for the vessel transporting it.

    Although containers really took off in the past few years, the technology that enables them has been around for a long time. Linux has had it for years, and even Microsoft investigated it in the past. However, this technology only started to be adopted when Docker came in and connected a few dots. We will look in more detail at what Docker is later, but here are the main things they did to make containers successful:

    A standard for creating and packaging container images as well as pulling and pushing them (we will explain more about images later in this chapter)

    A specification for the container runtime so different OSs could run the same container image standard via one API entry point

    An open repository for container images to which users could push images and from which they could pull them

    A container runtime that works on different OSs, allowing developers to write an application once and have it run the same way in different environments
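
    To make these pieces concrete, here is roughly what pulling, running, and pushing an image looks like from the Docker command line. This is a minimal sketch: the base image below is a real Microsoft-published image, but the tag should match your host OS version, and the registry and image name in the push example are hypothetical.

    # Pull a Windows base container image from the Microsoft Container Registry:
    docker pull mcr.microsoft.com/windows/servercore:ltsc2019
    # Run a container from that image and execute a simple command inside it:
    docker run --rm mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c echo Hello from a container
    # Push an image you built to a repository (registry and image name are hypothetical):
    docker push myregistry.example.com/myapp:1.0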

    In 2015, Docker took another step that was extremely important to the adoption and growth of the container ecosystem by establishing the Open Container Initiative (OCI). This initiative brought together many developers and companies to create an ecosystem of services for organizations interested in running their applications on containers. You can find more information on the OCI at their website: www.opencontainers.org/.

    If we look at the IT landscape today, containers are the foundation and enabler of much of what is being developed: DevOps tooling, serverless, cloud computing, and so much more. If you were to ask me, "Can you summarize containers in one paragraph?", this would be it: containers allow developers to package an application and its dependencies into one standard unit that you can deploy and have the application execute and behave the same way regardless of where you deploy it. In addition, because they have a small form factor and you don't have to boot an entire OS, containers offer better performance when compared to VMs. Finally, they allow for better DevOps practices, since you can reuse the same recipe for creating container images with the instructions on how to build the application.

    With all that, there's still one issue that hasn't really been addressed by the market yet: containers (and all the tooling around them) have been created by developers for developers, and most of the tools out there are focused on Linux and open source. In this book, I intend to give you an overview of Windows containers from the infrastructure and operations perspective – someone like John. Although we might need to explain a few developer-related topics, we will cover what containers are through the eyes of someone not focused on the code. So, let's dive into that!

    How are containers different from VMs?

    You might be thinking that some of the benefits I described earlier are also true for VMs and wondering why containers are so special. In fact, applications packaged in a VM will (almost) always perform the same way – they have their own view of the file system and registry. VMs are completely isolated from the host and from other VMs and even have their own kernel. Well, that's exactly the problem with VMs. When you spin up a VM, the VM itself thinks it is running on dedicated hardware with its own processor, memory, disk, networking, and so on. And it runs its own OS with its own kernel. In terms of isolation, that is a high standard. In terms of management, not so much.

    The problem with VMs is that you have to manage each VM as an individual instance. For each VM instance, you have to install an OS, update it, back it up, and configure its settings before you can even start configuring your application. And managing a VM is a constant exercise. In contrast, containers offer a more streamlined approach to how the application is instantiated. A container doesn't have a dedicated OS – instead, it shares the kernel of its container host. The isolation I mentioned before is then achieved by a combination of techniques. Each OS (Linux and Windows) implements this in a different way, but the method is called process isolation.

    Process isolation means the container runs in a layer above the kernel. This layer above the kernel is usually referred to as user mode. Containers have their own isolated user mode and consequently their own view of the file system and registry.
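
    For illustration, on a Windows container host running Docker, you can request this mode explicitly with the --isolation flag. A minimal sketch, assuming Docker is installed and using an example image tag (with process isolation, the container image version must match the host OS version):

    # Run a Server Core container that shares the host kernel (process isolation):
    docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd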

    Figure 1-1. Traditional deployment

    In Figure 1-1, we have the applications being deployed in user mode, and the kernel is responsible for scheduling hardware access (along with many other responsibilities, of course). When you deploy a VM, you are virtualizing everything above the hardware. A new partition is created for the VM, thus achieving isolation, but with a full-blown deployment. If you think about it, it's actually ironic that VMs are being replaced by containers, since VMs were created to provide better hardware utilization.

    Figure 1-2. Containerized deployment

    In Figure 1-2, we have two applications deployed, each one in its own container. You can see that we're still running the applications in user mode and we still have one single kernel. However, now each application has its own boundary with process isolation – the container itself. Here are a few things to call out:

    If something happens to Container 1, Container 2 will continue to work without any interruption.

    Both Containers 1 and 2 share the same kernel, but they have their own view of the file system and registry. If a file is added to C:\AppFolder in Container 1, Container 2 won’t see that. In fact, not even the host will see that.

    Although both containers have their own view of the file system and registry, they still need the kernel on the host to run processes. Because of that, the processes in both Containers 1 and 2 are visible from the host. (We will explore isolation and what the host can see from each container in more detail later in this chapter.)
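
    A quick way to see this isolation in action is to create a file in one container and look for it from another. The sketch below uses hypothetical container names and assumes Docker on a Windows container host, with commands run from the host's PowerShell prompt:

    # Start two containers from the same image, kept alive by a long-running command:
    docker run -d --name container1 mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
    docker run -d --name container2 mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
    # Create C:\AppFolder\test.txt inside Container 1 (-Force also creates the folder):
    docker exec container1 powershell -Command 'New-Item -ItemType File -Path C:\AppFolder\test.txt -Force'
    # Container 2 has no view of that file, so this returns False:
    docker exec container2 powershell -Command 'Test-Path C:\AppFolder\test.txt'
    # But the containers' processes are visible from the host:
    Get-Process -Name ping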

    It all sounds great, and at this point, you are probably thinking, "I should run containers for my workloads." Well, it's not that simple.

    When to use containers and when to use VMs

    Let me open this section by telling you there is no definitive answer to that question. In fact, even today there is a lot of debate in the IT industry around which workloads should or should not be running in containers, whether the best architecture for applications is microservices or monolithic, and whether companies should adopt DevOps practices or keep their traditional deployments. Rather than give you my opinion on these topics, I'd like to equip you with the information to do what is best for your environment and/or scenario. First, let's start by looking at what can actually run in a container – spoiler alert, not everything runs in a container.

    Back in Figure 1-2, I mentioned containers share the kernel of the container host. That is true, but the container itself still runs a version of the OS. For Windows-based containers, that is a purpose-built version of Server Core, Nano Server, or Windows – none of them providing a graphical user interface (GUI). With that information alone, you can probably guess that applications relying on any GUI application programming interface (API) will fail. But it's more than that. The Server Core version, for example, is not the same OS you can install on a physical machine or VM. This image was tailor-made to run in containers and does not provide all the infrastructure roles and features that you are used to. For example, Active Directory, DNS, DHCP, and many other roles are not available to be installed. In fact, you can check which roles and features are not available in containers by running the following command:

    Get-WindowsFeature | Where-Object {$_.InstallState -eq 'Removed'}
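
    The command is meant to be run inside the container image you plan to use. One way to try it, assuming Docker is installed and using an example image tag, is to launch a throwaway Server Core container from the host's PowerShell prompt:

    # Run the feature check inside a temporary Server Core container:
    docker run --rm mcr.microsoft.com/windows/servercore:ltsc2019 powershell -Command 'Get-WindowsFeature | Where-Object {$_.InstallState -eq "Removed"}'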

    The preceding command takes us to the next question: if there is no GUI present in containers, how do I interact with them? Keep in mind that interacting with and managing a container are two different things. Interacting means running the container and executing commands in it. Managing it means checking its state, performance, and so on. We will cover management later, in Chapters 4 and 5. As for interacting with Windows containers, you would do that via PowerShell. Command Prompt is also available, but PowerShell ends up being the preferred way most people interact with Windows containers. However, it's important to make one crucial distinction here. As an IT admin, you are used to logging into a server/VM or connecting to it remotely via Remote Desktop Protocol (RDP). When doing that, you interact with the server/VM to perform whatever action is needed, such as checking logs, operating the server/VM, and so on. In the container world, you are not expected to interact directly with a container instance. Rather, the container should start with the desired state and configuration already in place. If something needs to be adjusted, you bring the container instance down, reconfigure the image from which the container is created, and then spin up a new instance. This approach is probably the main difference for IT admins moving from VMs to containers and is usually referred to as the cloud mindset. In the next section, we will explore an approach to running resources that is crucial to containers: defining a desired state for resources via script or code.
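
    For example, interacting with a running Windows container typically happens through docker exec. A minimal sketch, with a hypothetical container name:

    # Open an interactive PowerShell session inside a running container:
    docker exec -it mycontainer powershell
    # Or run a single command without entering the container:
    docker exec mycontainer powershell -Command 'Get-Service'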

    Going back to how we manage VMs, here's the process you have probably used to prepare a VM for later use: You go to Hyper-V Manager (or vCenter – whatever platform you're using), and you start the process of creating a new VM. Then you specify the VM hardware – processor, memory, networking, disk, and so on. You then start the VM and either boot from a virtual disk or from the network to start the Windows installation process. You configure Windows as usual and install any applications, and when you're done, you run a famous process called Sysprep. Sysprep is a native tool in Windows used to generalize the current installation so it can be used for new installations. Usually after running Sysprep, you turn the VM off and discard the VM configuration files, preserving its virtual disk only. This virtual disk can then be used to create new VMs without running into a situation in which you have duplicate server names or any other duplicated unique identifier. From there, the process can be fully automated, and spinning up new VMs can be completed using automation rules with PowerShell or any other tool. With containers, we do it completely differently.

    Earlier in this chapter, I mentioned one of Docker's contributions was to create a standard for creating and packaging container images. What that means is that containers are created based on a template called a container image. Those container images are not created by spinning up a new container instance and then generalizing it, like VMs are. Instead, container images are created based on a recipe that describes how the template (or container image) should be created. We will explore how this works in more detail in the next section.
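
    As a preview, the recipe is a text file called a Dockerfile. The sketch below writes a minimal Dockerfile from PowerShell and builds an image from it; the application files, start script, and image name are hypothetical:

    # Write a minimal Dockerfile (the recipe for the container image):
    Set-Content -Path Dockerfile -Value @(
        'FROM mcr.microsoft.com/windows/servercore:ltsc2019',
        'COPY app/ C:/app/',
        'CMD ["powershell", "C:/app/start.ps1"]'
    )
    # Build a container image from the recipe (image name and tag are examples):
    docker build -t myapp:1.0 .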

    Finally, just like the GUI APIs, other APIs have been removed from the available OS images. Most of these APIs have been removed to reduce the footprint of Windows containers. The side effect is that regular applications might not run at all simply because they can't interact with the OS itself. Unfortunately, to answer the question of whether an application can be containerized or not, you will have to try to run the application and check for yourself. We will cover more details on how to containerize existing applications in Chapter 3. Everything described in the preceding paragraphs assumes a Server Core–based container. While not the same as a regular Server Core installation, it is still an option for containerizing existing applications, particularly applications running on .NET Framework. For new applications based on .NET Core, Nano Server is a better option, as it provides support for it. However, Nano Server has its API surface further reduced, allowing only applications specifically developed for it.

    To close on making a decision between containers and VMs, the next thing you need to know about containers is that they are stateless in nature. What that means is that containers are not supposed to store state. To explain this concept, think about a self-contained application – an application that only requires a VM instance to run. Everything is stored in the VM. If you turn off the VM, the application stops, but if you turn the VM back on, the application will start working again. That seems obvious, but it is only possible because the storage of the VM is persistent. If the VM were to lose its content every time it powered off, it would be a nightmare, right? Well, yes, but persistence presents other issues. How do you scale a web application, for example, if all the data is contained in one single VM? The way to do that is by segregating your application into tiers. Traditionally, to achieve that in a web application, you would have at least two tiers: a web tier and a data tier. The web tier is the one dealing with user requests and in the Windows world would be running an Internet Information Services (IIS) instance. VMs in this tier would have stateless versions of the applications that can receive user requests and query or write to the data tier. Scaling this web tier requires that you bring up new VMs and add them to a Network Load Balancer (NLB). Because these VM instances are stateless, you can add and remove VM instances to satisfy the requests as you wish – as long as the NLB can probe these instances and know which are up and which are not. For the data tier, you would be running a database (DB) that is responsible for persistently storing the data. There are many DB products in the market, and it's not the goal
