April 4, 2017
The WORA (Write Once, Run Anywhere) concept has always been one of the core strengths of Java. It makes it possible to run Java on any operating system. So, why would you need technologies like Docker and Kubernetes when building a microservice?
In practice, it turns out that the WORA concept is only half true. You can run Java apps on the JVM (Java Virtual Machine), but incompatibilities may surface around the JVM installation itself: the app might require a specific JVM version, specific ports to be available, or certain environment variables to be set. Some applications even depend on non-Java components, such as a database. This makes it difficult to run applications consistently on various machines and to move them into production, especially once large clusters and deployment automation come into play. That’s where Docker enters the picture.
Docker lets you package any type of application or component and work with every container in the same way, regardless of the technology used inside it. As a result, developers and operations get a single interface for working with different components. However, Docker by itself doesn’t help with running clusters; it only focuses on running containers on a single server. To start and monitor containers on a cluster of machines, you need additional tools. Luckily, the Docker ecosystem offers popular tools such as Kubernetes and Mesos for running containers on clusters.
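To make the packaging step concrete, here is a minimal Dockerfile sketch for a self-contained Java application. The jar name, base image, and port are assumptions for illustration, not a prescribed setup:

```dockerfile
# Minimal sketch: package a Java app into a Docker image.
# Assumes the build produces a runnable fat jar at target/app.jar (hypothetical).
FROM openjdk:8-jre-alpine                  # base image pins a specific JVM version
COPY target/app.jar /app.jar               # ship the app and all its Java dependencies
EXPOSE 8080                                # document the port the app listens on (assumed)
ENV JAVA_OPTS=""                           # environment configuration travels with the image
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar /app.jar"]
```

Because the JVM version, ports, and environment variables are baked into the image, the container runs the same way on any machine with Docker installed, which is exactly the consistency the WORA concept alone doesn’t guarantee.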
Kubernetes, the most active open source project in this area, allows you to deploy containers replicated over a number of machines, while handling all the essential orchestration: starting the containers on multiple machines and monitoring them for failures. Kubernetes also includes a command-line tool that lets you start deployments and retrieve information about the cluster. Most importantly, it provides a REST API that can be used to integrate with load balancers, build servers, and so on.
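As a sketch of how this replication is expressed, a Kubernetes Deployment manifest declares the desired number of container replicas, and Kubernetes starts them across the cluster and restarts any that fail. The service name, labels, and image below are hypothetical:

```yaml
# Hypothetical Deployment: run three replicas of a containerized Java service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service          # hypothetical service name
spec:
  replicas: 3                   # Kubernetes keeps three containers running,
  selector:                     # rescheduling them on failure
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders             # label used to match pods to this Deployment
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:1.0   # hypothetical image
        ports:
        - containerPort: 8080   # port the Java app listens on (assumed)
```

Applying this manifest with the `kubectl` command-line tool (`kubectl apply -f`) hands the starting and monitoring work over to the cluster.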
When it comes to developing microservices, Docker- and Kubernetes-based deployments make the operational side considerably easier. Even when the services in a microservice architecture are built with different technologies, languages, or Java frameworks, they are all deployed in the same way at the infrastructure level: all the infrastructure cares about is containers. And even if your app doesn’t follow the microservice approach, you still keep the freedom to make your own technology choices, since the deployment environment doesn’t limit them.
Moreover, while working with microservices, you should be able to scale each service individually. This raises another challenge: knowing a service’s IP address and port when its instances move between machines. Service discovery is therefore an essential aspect of a microservice architecture, and Kubernetes solves this problem easily.
A service in Kubernetes is a proxy on top of containers that are replicated over a number of machines. Clients only need to know the service’s stable IP address, while the underlying containers use dynamic IPs that may change after a deployment, scaling, or failure.
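A minimal Service manifest illustrates this: it gives clients one stable address and routes traffic to whichever containers currently carry a matching label. The names, label, and ports here are hypothetical:

```yaml
# Hypothetical Service: a stable front for a set of replicated containers.
apiVersion: v1
kind: Service
metadata:
  name: orders-service     # clients resolve this name; its IP stays stable
spec:
  selector:
    app: orders            # route to any pod carrying this label (assumed label)
  ports:
  - port: 80               # stable port clients connect to
    targetPort: 8080       # port the containers actually listen on (assumed)
```

Inside the cluster, other services can simply call `orders-service` by name; Kubernetes keeps track of which machines the underlying containers landed on.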
For Java teams, Docker offers great benefits when coupled with tools like Kubernetes: together they can take deployment automation and distributed architecture to a whole new level. Feel free to leave your thoughts about this post in the comments section.