How to orchestrate a container professionally

Do you want to offer your end users reliable containers? A container orchestrator is just what you need. The most common one is Kubernetes, the open-source orchestrator originally developed by Google. Since almost all public clouds support it, it’s often considered the de facto standard. Microsoft offers Kubernetes as a managed service through Azure Kubernetes Service (AKS).

1. How does container orchestration work?

Does the term ‘container orchestration’ sound alien to you? Don’t worry: the concept is simple. Container orchestration software starts and manages containers according to your specifications. For example, we want to do the following with our web application:

  • Always run at least two instances of the container to improve availability and performance.
  • Spread the containers across several servers so the application survives a server failure.
  • Make the containers reachable through a service. This gives us access to the web application.
  • Expose the service on a public IP address. This address is where our users will access the web application.
  • Make sure we can upgrade the application without interruptions.
  • Make sure we can roll back to the previous version if an upgrade goes wrong.

If we’re using Kubernetes as a container orchestrator, we define the above requirements as a desired state. Kubernetes then continuously monitors the cluster; whenever the current state doesn’t match the desired state, Kubernetes corrects it.
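
To make this concrete, here is a minimal sketch of what such a desired state looks like as a Kubernetes Deployment manifest. It covers the first two requirements (two replicas of our web application), using the image and port we’ll work with later in this post; the labels are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nasnet
spec:
  replicas: 2                 # always run two instances
  selector:
    matchLabels:
      app: nasnet
  template:
    metadata:
      labels:
        app: nasnet
    spec:
      containers:
      - name: nasnet
        image: xylos/nasnet   # the image from the previous blog post
        ports:
        - containerPort: 9090 # the port the web application listens on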

2. Getting started with Azure Kubernetes Service

If you want to deploy Kubernetes in your own data centre, you’ll need to know a thing or two about it. There are plenty of tools that simplify the deployment itself, but you’re responsible for maintenance, backup, disaster recovery and troubleshooting.

Microsoft offers Kubernetes as a service through Azure Kubernetes Service (AKS). This approach has some considerable advantages:

  • Easy installation via the Azure Portal or scripts (see the sketch after this list).
  • No master node management. Master nodes are Kubernetes’ beating heart. Microsoft manages these servers for you – and you don’t have to pay for them.
  • Controlled upgrade to new Kubernetes versions.
  • Simple agent node scaling. Agent nodes are the servers on which your containers run. You can use a slider to set their number. Do you want 100 of them? No problem.
  • Integration with the Azure virtual network and Azure load balancers.
  • Integration with Azure Active Directory.
  • Integration with Azure Monitor for logs and measurement data (such as disk, CPU and network).
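
As a sketch of the scripted installation (the resource group name, cluster name and node count below are placeholder values), creating a two-node AKS cluster with the Azure CLI and pointing kubectl at it looks like this:

az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster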

After you’ve rolled out Kubernetes, you can execute some of the above tasks via the Azure Portal or scripts. Adjusting the scale, for example, takes a single command, as shown below.
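
A sketch of scaling the cluster to three agent nodes with the Azure CLI (the resource group and cluster names are the same placeholders as above):

az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3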

3. How to roll out containers

In this blog post, we’ll use AKS to roll out our web application by entering commands in the Azure Cloud Shell. You can also roll out containers using configuration files and a tool like Azure DevOps; we’ll explain that method in later blog posts in this series.

We’ll start our containers with the following command:

kubectl run nasnet --image=xylos/nasnet --port=9090 --replicas=2

This command uses kubectl, the Kubernetes command-line client. It creates two containers that run our web application. The application is available on TCP port 9090, and the containers use the xylos/nasnet image we created in the previous blog post.
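
A note for readers on newer clusters: in recent versions of kubectl, kubectl run only creates a single pod and the --replicas flag has been removed. The equivalent of the command above would then be:

kubectl create deployment nasnet --image=xylos/nasnet --port=9090 --replicas=2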

Use the command below to see the containers:

kubectl get pods

Kubernetes works with pods. A pod can contain one or several containers. In our example, each container runs in a separate pod, which is exactly the way it should be in this case. The command lists each pod with its status.
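
The output looks roughly like this (the generated pod names and ages will differ in your cluster):

NAME                      READY   STATUS    RESTARTS   AGE
nasnet-5b8cf6d4d9-4qxkz   1/1     Running   0          1m
nasnet-5b8cf6d4d9-t7jmp   1/1     Running   0          1m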

Apart from the pods, kubectl run has created other objects as well. One of these objects is a deployment. Deployments allow us to change the application in a controlled way – to install a new version, for example.

Because the pods aren’t available through an external IP address yet, we’ll use the deployment to create a service and then link an Azure Load Balancer to this service. We’ll need the following command:

kubectl expose deploy/nasnet --port=80 --target-port=9090 --type=LoadBalancer

This command creates a service that makes the containers in the nasnet deployment available; it doesn’t matter whether that involves one container or a hundred. Because of type=LoadBalancer, the service is exposed on a public IP address via an Azure Load Balancer: external port 80 forwards to container port 9090. To view the IP address, use the following command:

kubectl get service nasnet

Once Azure has provisioned the load balancer, the EXTERNAL-IP column in the output shows the public IP address.
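
Illustrative output (the cluster IP, node port and age will differ in your cluster; the external IP is the one from our example):

NAME     TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
nasnet   LoadBalancer   10.0.145.12   52.166.68.169   80:30861/TCP   2m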

To verify that everything works, browse to http://52.166.68.169. You should see the web application.

If your application is successful, you can easily scale up the number of containers – from two to four, for example, as shown below.
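
Scaling is a single kubectl command; the standard form to go to four replicas is:

kubectl scale deploy/nasnet --replicas=4

Run kubectl get pods again and you’ll see four pods, two of them newly created.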

As an administrator, you won’t have to adjust the service or load balancer yourself; Kubernetes does it for you in the background. This example clearly shows the advantage of a container orchestrator.

Conclusion

A container orchestrator offers a powerful toolkit to roll out containers in production. With a few simple commands, we made our web application available and scaled the solution up from two to four containers. We don’t recommend manual commands in practice, though: they invite errors, and manual actions make it impractical to roll out updates several times a day. In the next post, we’ll show how to use Azure DevOps to roll out containers automatically.

Want to know how to create and use a container? Be sure to read the other blog posts in this series: ‘6 steps to the cloud’.
