Virtualisation is a hot topic in the current IT landscape. As operating systems and applications are no longer installed directly on physical servers, virtual machines are quickly gaining ground. Solutions like Microsoft Hyper-V, VMware vSphere and KVM make it easy to carry out this transition in your own data centre as well as in the cloud.
Technology never stops progressing. Virtual machines make it easy to virtualise an entire operating system (e.g. Linux), but they are less suited to packaging applications. A virtual machine contains an entire operating system, which means a considerable amount of overhead. If your application consists of multiple smaller components (or microservices), things only get worse. Containers offer an alternative that's better suited to packaging and deploying applications, even when you're using microservices.
At Xylos, our experts use containers for internal applications as well as externally available applications, such as OASE.
A container is an efficient way to package software in a standard format. Apart from the application code itself, a container includes all tools, system libraries, runtimes and settings you need to run the application. Nothing more, nothing less. The developer has full control, so the container only contains the necessary elements. As a result, containers are usually a lot more compact than virtual machines, they start up quicker and they can be replaced with newer versions more easily.
The container market is growing fast: it's estimated to reach 2.7 billion dollars in 2020, which is 3.5 times its worth in 2016.
Even though container technology has been around for a long time, it didn’t really break through until the introduction of Docker in 2013. Linux distributors and public cloud providers quickly picked up on Docker, and even though it only worked on Linux in the beginning, a collaboration with Microsoft quickly led to it becoming available for Windows. Docker is a so-called ‘container runtime’. There are plenty of alternatives today, but Docker is still the most popular one.
In this blog post, we’ll keep it simple and explain how to use a Docker container. Want to find out how to build a container? We’ll tell you all about it in the next blog post.
Imagine a developer has built an application that lets users recognise images. It’s a web application with an upload function. After uploading a picture, the user sees the following result:
This application was built with Go and uses TensorFlow, an open-source machine learning framework by Google. However, since the application is packaged in a container, these implementation details hardly matter: the container can be started on any system that supports Docker containers. You can even run it on your own Windows device if you install the necessary software, such as Docker Desktop. After installing Docker Desktop on your device, use the following command:
docker run -p 9090:9090 -d gbaeke/nasnet
This command makes the web application available on port 9090 (http://localhost:9090) via the -p 9090:9090 option, runs the container in the background (-d) and uses the container image named gbaeke/nasnet. Note that this container image is stored in a container registry, a location where you can store your own images. The container registry used here is Docker Hub, and the image is publicly available.
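Once the container is running, a few standard Docker CLI commands let you inspect and manage it. A minimal sketch: the container name nasnet below is our own choice (passed via --name, since docker run otherwise assigns a random name):

```shell
# Start the container in the background with an explicit name
docker run -p 9090:9090 -d --name nasnet gbaeke/nasnet

# List running containers and verify the port mapping (0.0.0.0:9090->9090/tcp)
docker ps

# Inspect the application's log output
docker logs nasnet

# Stop and remove the container when you're done
docker stop nasnet
docker rm nasnet
```

Because everything the application needs is inside the image, these same commands work identically on Windows, Linux or macOS.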
You can run Docker containers with every public cloud provider. Microsoft Azure offers the following possibilities, among others:
Our web application can easily be started in Azure Container Instances (ACI) through the Azure portal:
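The same deployment can also be scripted with the Azure CLI instead of the portal. A sketch, assuming you have an Azure subscription; the resource group name, location and DNS label below are placeholders you would pick yourself:

```shell
# Create a resource group to hold the container instance
az group create --name rg-containers --location westeurope

# Start the container in Azure Container Instances with a public DNS name
az container create \
  --resource-group rg-containers \
  --name nasnet \
  --image gbaeke/nasnet \
  --ports 9090 \
  --dns-name-label nasnet-demo \
  --ip-address Public

# Retrieve the public FQDN; the app is then reachable on port 9090
az container show --resource-group rg-containers --name nasnet \
  --query ipAddress.fqdn --output tsv
```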
The application is now available via a public name.
Containers make it easy to package an application in a standard format. In this post, we used an existing Docker container image; such an image is easy to use on any system that supports Docker, like your own PC or the Azure cloud. In the next blog post, we'll show you how to build a container image.
Want to know more about containers? Be sure to attend our “The Azure Migration Fundamentals” event on April 4th.