TL;DR: GitHub repository
If you have ever worked on, for example, a distributed microservice system, you know that managing it, or even simply running it, is not simple anymore. To get your app running on your local machine you need to set up several things: each service, probably some message queues, databases, etc. So you need to prepare the environment for your system at least twice: once for your local machine and once for production, which sounds like a perfect case for Docker. In this example we will dockerize a simple ‘microservice’ application. Diagram:
The producer produces messages, and there are 2 consumers, each in a different consumer group. These 3 are separate projects built with sbt. As you see, Kafka 0.10 acts as the message broker, but each service might use a different version of the client API. This is fine since Kafka is backward compatible, and because we intend to run the 3 separate applications on 3 different JVMs, we have no dependency conflicts.
Prerequisites:

- Docker installed
Running the example
Clone the repository:
and inside the repository directory run:
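The original command isn't reproduced here, but given that the whole post is built around Docker Compose, it is presumably the standard:

```shell
docker-compose up
```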
which boots up the whole application inside Docker containers. The logs should look similar to this:
If you are familiar with Docker (if not, check this quick intro), you know that in order to build a Docker image you need a Dockerfile. Depending on what image you need to build, you write a different Dockerfile, which is a set of instructions for Docker to create an image that can later be run in a container. In the previous post we created a small Dockerfile which set up the environment required by Go to run a very simple HTTP server. In this Kafka example we need to build at least 5 images: 2 for the consumers, 1 for the producer, one for Kafka and one for ZooKeeper. So what steps should the Dockerfile specify for, say, the producer?
- First of all, we need some Unix OS.
- Then we need a JVM installed.
- And finally we need a command to run our compiled sbt project.
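Written by hand, a minimal Dockerfile covering those three steps might look like this (the jar name and paths are assumptions for illustration):

```dockerfile
# Base JDK image: a Unix OS with a JVM covers the first two steps
FROM java:8

# Copy the fat jar produced by sbt-assembly into the image
ADD target/scala-2.11/producer-assembly-0.1.jar /app/producer.jar

# Finally, the command that runs our compiled sbt project
ENTRYPOINT ["java", "-jar", "/app/producer.jar"]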
We could of course create this Dockerfile manually, but in this post we will use the existing sbt-docker plugin, which will help us with this task.
Installing sbt-docker
sbt-docker is an sbt plugin, so you add it like every other plugin; the instructions are in the readme.
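In short, it boils down to one line in `project/plugins.sbt` (the version below is only an example; check the readme for the current one):

```scala
addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.1")
```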
Go to build.sbt to see some of the dockerfile settings:
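The full listing is in the repository; a minimal sketch of such sbt-docker settings (the helper name, script paths and base image are assumptions) could look like this:

```scala
// Shared docker settings; sbt-docker's `dockerfile in docker` key
// describes the Dockerfile the plugin will generate for us.
def dockerSettings(debugPort: Option[Int] = None) = Seq(
  dockerfile in docker := {
    val artifact = assembly.value                     // fat jar from sbt-assembly
    val artifactTargetPath = s"/app/${artifact.name}"
    new Dockerfile {
      from("java")                                    // base image with a JVM
      add(artifact, artifactTargetPath)               // put the jar in the image
      add(baseDirectory.value / "docker-scripts", "/app/docker-scripts")
      // entrypoint.sh receives: app name, app version, optional debug port
      entryPoint("/app/docker-scripts/entrypoint.sh",
        name.value, version.value,
        debugPort.map(_.toString).getOrElse(""))
    }
  }
)
```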
These settings build the Dockerfile for us. The main idea is to add the jar with our sbt application (in this example we use sbt-assembly to create the jar) to the base Docker image “java”. As the entrypoint I specified a script which boots up our JVM application. You can preview the contents of entrypoint.sh to get an idea of why it’s done this way; I will explain more in the next paragraph, about docker-compose. The entrypoint script takes 3 arguments:
- the name of the application
- the version of the application
- an optional debug port, which will be used to expose a port for debugging
We can additionally specify the name of the Docker image:
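With sbt-docker this is done through the `imageNames` key; a sketch:

```scala
// Tag each image with the project's name and version
imageNames in docker := Seq(
  ImageName(
    repository = name.value.toLowerCase,
    tag = Some(version.value)
  )
)
```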
And as you see, these settings are used by all projects in this example: the 2 consumers and the producer.
Running the plugin’s `docker` task (`sbt docker`) creates the 3 images with the specified options.
IMPORTANT NOTE: I’ve encountered some issues when running this command, related to insufficient permissions on the bash scripts in /docker-scripts. To solve it, set correct permissions on those scripts:
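The exact command is not shown here; something along these lines (the path is an assumption) does it:

```shell
chmod +x docker-scripts/*.sh
```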
And do this for every application.
Now we need to somehow glue together those 3 images (plus 2 more, for Kafka and ZooKeeper), because we want our application to be multi-container. Docker Compose is a tool for achieving this goal: it lets us create many containers from images in one place, make one container depend on another, and so on. Our example will hopefully be simple enough to understand without going to the docs. Let’s look at docker-compose.yml:
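The full file is in the repository; a trimmed-down sketch of what such a docker-compose.yml might look like (image names, tags and ports are assumptions):

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper    # assumed public ZooKeeper image
    ports:
      - "2181:2181"

  kafka:
    image: wurstmeister/kafka        # assumed Kafka 0.10-era image
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

  exampleproducer:
    image: exampleproducer:0.1       # images built by `sbt docker`
    depends_on:
      - kafka

  exampleconsumer:
    image: exampleconsumer:0.1
    depends_on:
      - kafka
    ports:
      - "8082:8082"

  exampleconsumerold:
    image: exampleconsumerold:0.1
    depends_on:
      - kafka
```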
As you see, the compose file is in YAML key-value format. When specifying an image to run, we need to give the name of the image. If Docker doesn’t find this image on our local machine, it will go and search Docker Hub.
We need to expose some ports, as in the consumer example:
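A hypothetical sketch of the relevant fragment:

```yaml
exampleconsumer:
  ports:
    - "8082:8082"
```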
which says exactly: map port 8082 on my machine to port 8082 inside the Docker container.
We want to link it to the kafka image declared at the top of docker-compose.yml and make the consumer container depend on it.
The depends_on option expresses dependencies between services: it specifies which services need to be started first. In our example Kafka will be started before the producer and consumer services, and ZooKeeper will be started before Kafka.
There is an important caveat here: depends_on doesn’t guarantee that a service’s dependency is “ready”; it only ensures that the dependency starts. This brings us to another topic, which is…
Controlling startup order in docker-compose
We need to make sure that Kafka and ZooKeeper are running before our consumer starts, and to do this we will use a script which waits for Kafka and ZooKeeper to come up.
In this example I’ve used the wait-for-it script, which pings the specified port and waits until the service is “ready”. In our simple case it works fine, but you might write your own script to wait for a particular service to be ready.
The important thing is that we want to run this script, and get acknowledgment that the ‘parent’ containers are up, before we run our application. To see how, look at one of the entrypoint.sh files:
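The real script is in the repository; a sketch of the idea (hostnames, ports and paths are assumptions):

```shell
#!/bin/bash
# Sketch of entrypoint.sh: wait for dependencies, then boot the JVM app.
APP_NAME=$1      # 1st argument: application name
APP_VERSION=$2   # 2nd argument: application version
DEBUG_PORT=$3    # 3rd argument (optional): remote-debug port

# Block until the dependencies accept connections.
/app/docker-scripts/wait-for-it.sh zookeeper:2181 --timeout=60
/app/docker-scripts/wait-for-it.sh kafka:9092 --timeout=60

JAVA_OPTS=""
if [ -n "$DEBUG_PORT" ]; then
  # Attach the JDWP debug agent only when a debug port was given.
  JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=$DEBUG_PORT"
fi

exec java $JAVA_OPTS -jar "/app/$APP_NAME-assembly-$APP_VERSION.jar"
```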
This script is the first thing our container invokes during startup: it waits for Kafka and ZooKeeper to start and then runs the application. There is a condition on the debug port because we want to run the app in debug mode only when the user specifies a port, and otherwise not.
You might ask why I did it this way. Why not specify, in the docker settings in build.sbt, a command instead of an entrypoint, like this:
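That is, a Dockerfile built with `cmd` instead of `entryPoint`, roughly (a sketch; paths are assumptions):

```scala
// Hypothetical alternative: bake the run command in with `cmd`
new Dockerfile {
  from("java")
  add(assembly.value, "/app/app.jar")
  cmd("java", "-jar", "/app/app.jar")
}
```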
And then in docker-compose.yml specify an entrypoint pointing at the waiting script; after the waiting script finishes, the command from the Dockerfile would be executed. The reason is that if you do it this way, the entrypoint from docker-compose “wipes out” the cmd parameters specified in the Dockerfile. To read more about this behaviour, check this discussion.
The official docs about startup order and these waiting-script checks are here.
Debugging with IntelliJ
As you can see, the docker settings method accepts a debug port, which will be exposed for IntelliJ IDEA.
Let’s say you want to debug exampleConsumerOld on port 5005:
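A hypothetical sketch, assuming the docker settings are wrapped in a helper that takes the debug port:

```scala
lazy val exampleConsumerOld = (project in file("exampleConsumerOld"))
  .settings(dockerSettings(debugPort = Some(5005)))
```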
Next you need to expose port 5005 in the exampleConsumerOld container:
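In docker-compose.yml that is one more entry under ports (the service name is an assumption):

```yaml
exampleconsumerold:
  ports:
    - "5005:5005"   # JVM remote-debug port for IntelliJ
```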
And then you can run a remote debug configuration in IntelliJ. Put a few breakpoints to check that it works.
This simple example shows how to use docker-compose and build a multi-container Docker environment using an sbt plugin for Docker. Docker principles are relatively easy to understand, so it’s worth using even for quick prototyping like in the example above. Also, starting many completely separate modules with one simple command, with a guarantee that they will work the same no matter which environment they run in, is a paradise for a developer. I hope you’ve found something useful here, and don’t hesitate to leave a comment if you found something wrong or unclear in this post. Thanks for reading.