Docker Architecture

Docker allows us to package an application with all of its dependencies into a standardized unit for software development.

Here we are going to see the architecture of Docker. Please refer to the Docker architecture diagram in the official Docker documentation.

Docker uses a client-server architecture. The Docker client is responsible for taking requests from the user and passing them to the Docker daemon, and the Docker daemon does all the heavy-lifting tasks such as building and running the containers.
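You can see this client/server split from the command line itself: the docker CLI is the client, and every command it runs is sent to the daemon.

# The docker CLI (client) forwards these requests to the Docker daemon
docker version   # prints separate "Client" and "Server" (daemon) sections
docker info      # daemon-side details such as running containers and images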

Inside Docker, we have the below components:

Docker Images

Docker registries

Docker Containers

A Docker image is nothing but an operating system image with the web server and our application installed. It is used to create Docker containers. A Docker image can be built from a Dockerfile, and inside the Dockerfile we can reference other images as well. The Docker image is the build component of Docker.
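As a minimal sketch of such a Dockerfile (the base image, package, and paths below are placeholders, not a prescription):

# Start from a base OS image, add a web server, and install our application
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nginx
COPY ./app /var/www/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Running docker build -t myapp . against this Dockerfile produces an image from which containers can be created.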

Docker registries are used to hold the Docker images. These are public or private stores from which you download images and to which you upload them. The public Docker registry is provided by Docker Hub. We can also maintain a private registry, which will reside inside a company’s firewall.
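The same CLI verbs work against both kinds of store; in the sketch below, registry.example.com is a hypothetical private registry host:

# Download an image from the public registry (Docker Hub)
docker pull nginx:latest

# Re-tag it and upload it to a private registry behind the firewall
docker tag nginx:latest registry.example.com/web/nginx:latest
docker push registry.example.com/web/nginx:latest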

A Docker container holds everything needed to run an application. It is created from a Docker image and can be started, stopped, killed, and removed. Each container is isolated from the other containers, which also keeps it secure. We can use Docker Compose to manage a multi-container application effectively.
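As a rough sketch of that, a docker-compose.yml along the lines below (the service and image names are hypothetical) describes a two-container application that Compose can start and stop as one unit:

# docker-compose.yml - hypothetical two-container application
version: "3"
services:
  web:
    image: mycompany/webapp:latest   # our application image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15

docker-compose up -d starts both containers together, and docker-compose down stops and removes them.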

When you run a Docker command with the run option, it does the below things (see the example after the list):

  1. It pulls the image from Docker Hub or a private registry, if the image is not already available locally
  2. Then it creates a container using that image
  3. Then it allocates a filesystem and a network/bridge interface
  4. Then it sets up the IP address. You can run docker inspect to view the IP address and other details of a container
  5. It runs the task which we specified; here it is the run task
  6. Finally, it connects to the application and transfers the application output to log and error files
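For example, a single run command triggers all of the steps above, and inspect and logs expose the results (the container name web is arbitrary):

# Pulls nginx:latest if it is not available locally, creates the container,
# wires up the bridge network, and runs it in the background
docker run -d --name web nginx:latest

# View all details of the container, including its IP address
docker inspect web

# Or extract just the IP address on the default bridge network
docker inspect -f '{{ .NetworkSettings.IPAddress }}' web

# Follow the output and error streams captured from the application
docker logs -f web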

Kafka – Multiple producer threads

In my application, I use Kafka for logging the user events. The user events are collected in XML format and sent to Kafka, and from Kafka they are consumed by a Flume agent.

In my API, we created a producer thread for each event, and after the message was sent to Kafka, that producer was closed. This worked, but while testing these changes in our load environment we ran into an issue: the Kafka server could not allocate the resources for the Kafka producer threads, and it also complained that there were too many open files.

To avoid this issue, we have to either increase the open file limit or change our API to create only one Kafka producer instance that is responsible for producing all the messages.

As the Kafka producer instance is thread safe, the second solution is the correct fit for this issue.
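A minimal sketch of the second solution is below, assuming String-serialized messages; the broker address and topic name (localhost:9092, user-events) are placeholders. Since KafkaProducer is thread safe, every request thread can share this single instance:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventLogger {

    // One producer shared by all threads; KafkaProducer is thread safe,
    // so we no longer pay the socket/file-handle cost per event
    private static final Producer<String, String> PRODUCER = createProducer();

    private static Producer<String, String> createProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    // Called from many threads; send() is asynchronous and thread safe
    public static void logEvent(String eventXml) {
        PRODUCER.send(new ProducerRecord<>("user-events", eventXml));
    }

    // Close once at application shutdown, not after every message
    public static void shutdown() {
        PRODUCER.close();
    }
}

The producer is now closed once at shutdown instead of after every message, which is what was exhausting the open file limit earlier.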

HATEOAS

HATEOAS stands for “Hypermedia As The Engine Of Application State”.

It’s a constraint of the REST architecture. By using it, we can add details of our API, such as links to related services, in the actual response. The consumer can then easily navigate through the response and identify the underlying services or other dependent services.

Refer to the below example:


{
	"name": "Bala",
	"age": "45",
	"links": [{
		"rel": "self",
		"href": "http://localhost:8080/api/customers/1"
	}, {
		"rel": "address",
		"href": "http://localhost:8080/api/customers/1/address"
	}]
}

Assume that you have received the above response while accessing the service http://localhost:8080/api/customers/1.

Here, the links element provides information about the underlying services. To access the address information of this customer, we should access the http://localhost:8080/api/customers/1/address service. This is an example of a HATEOAS response.
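For completeness, here is a minimal sketch of how such a response can be produced on the server side, assuming Spring HATEOAS; the Customer and Address types and their data are hypothetical. (Depending on configuration, Spring HATEOAS may render the links as a links array as above or in HAL’s _links form.)

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

import org.springframework.hateoas.EntityModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    // Hypothetical resource types, for illustration only
    record Customer(String name, String age) {}
    record Address(String street, String city) {}

    @GetMapping("/api/customers/{id}")
    public EntityModel<Customer> getCustomer(@PathVariable long id) {
        Customer customer = new Customer("Bala", "45"); // hypothetical lookup
        return EntityModel.of(customer,
                // "self" link points back to this resource
                linkTo(methodOn(CustomerController.class).getCustomer(id)).withSelfRel(),
                // "address" link tells the client where the dependent service lives
                linkTo(methodOn(CustomerController.class).getAddress(id)).withRel("address"));
    }

    @GetMapping("/api/customers/{id}/address")
    public Address getAddress(@PathVariable long id) {
        return new Address("1 Main Street", "Chennai"); // hypothetical data
    }
}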