REST application to send messages to Kafka

I have created a simple REST application to send messages to Kafka.

Please refer to the code on GitHub.

I did not cover the Kafka installation; the example below covers only the Kafka message producer part.

Rest Assured – Example

Rest Assured is a nice API for testing REST web services. It is easy to learn and simple to implement. I can say that it is based on the Given-When-Then approach (Gherkin language).

Just include the dependency below in the Maven pom.xml.


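For example, the Jayway coordinates matching the imports used below look like this (the version shown is one known release; adjust to the latest as needed):

<dependency>
    <groupId>com.jayway.restassured</groupId>
    <artifactId>rest-assured</artifactId>
    <version>2.9.0</version>
    <scope>test</scope>
</dependency>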
Here is a simple example:

import com.jayway.restassured.RestAssured;
import com.jayway.restassured.builder.RequestSpecBuilder;
import com.jayway.restassured.response.Response;
import com.jayway.restassured.specification.RequestSpecification;
import org.junit.Before;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class CustomerRestAssuredTest {

    private final static Logger LOGGER = LoggerFactory.getLogger(CustomerRestAssuredTest.class);

    @Before
    public void setUp() {
        // Replace with the domain name of the service
        RestAssured.baseURI = "http://localhost";
        // Port number
        RestAssured.port = 8080;
        // Service base path
        RestAssured.basePath = "/service/";
    }

    @Test
    public void testGetCustomers() {
        Response response = RestAssured.when().get("customers");
        assertEquals(200, response.getStatusCode());
        assertTrue(response.getBody().asString().contains("CONTENT_TO_TEST_AGAINST"));
    }

    @Test
    public void testGetCustomerById() {
        // Query parameters are set before firing the request
        Response response = RestAssured.given().param("id", "cust_123").when().get("customers");
        assertEquals(200, response.getStatusCode());
        assertTrue(response.getBody().asString().contains("CONTENT_TO_TEST_AGAINST"));
    }

    @Test
    public void testSaveCustomerDetails() {
        // Create the request builder
        RequestSpecBuilder builder = new RequestSpecBuilder();
        // Set the body content (replace with the actual JSON payload for the service)
        builder.setBody("{\"id\":\"cust_123\",\"name\":\"John\"}");
        // Set the request content type
        builder.setContentType("application/json; charset=UTF-8");
        // Build the spec
        RequestSpecification requestSpec = builder.build();
        Response response = RestAssured.given().spec(requestSpec).when().post("customers/save");
        assertEquals(200, response.getStatusCode());
        assertTrue(response.getBody().asString().contains("SUCCESS"));
    }
}

Assume that we have a service called customers that returns the list of customers. The service URL is http://localhost:8080/service/customers

As shown in the code above, inside the setUp method we just have to initialize the base URI, port, and base path.

After that, in the actual test method, we call RestAssured.when() followed by the HTTP method and the service name. In this case it is a GET service named customers, and we do not pass any parameters to it. That's it.
It returns a com.jayway.restassured.response.Response, and we have to parse it, or get the response body content and validate it.

This is just a simple example. For more information and examples, please refer to the Rest Assured documentation.

Apache Spark Cluster Architecture

Please refer to the Spark cluster diagram below.



Here we are just going to see how a Spark cluster works. Basically, the cluster manager is responsible for managing all the worker nodes and allocating resources upon request from the driver program.

The flow works like this:
1. The driver submits a request to the cluster manager to run the jobs. In the case of a standalone cluster, it submits the request to the Master.
2. The Master/cluster manager allocates the resources (worker nodes) to the driver.
3. The driver program then contacts each worker node directly; each node has an Executor, which is responsible for doing the tasks.
4. The driver sends the application code to each executor in the form of a JAR.
5. Finally, the SparkContext sends tasks to each executor to run.
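The flow above is typically kicked off with spark-submit. Here is a minimal sketch for standalone mode; master-host, com.example.MyApp, and my-app.jar are placeholder names:

# Submit the application JAR to a standalone Master;
# the driver then receives worker resources and ships
# the JAR and tasks to the executors
spark-submit \
  --class com.example.MyApp \
  --master spark://master-host:7077 \
  --deploy-mode client \
  my-app.jar

In client mode the driver runs on the submitting machine and talks to the Master for resources, which matches steps 1 to 5 above.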


Create a Fat JAR

To create a fat JAR that contains all the dependent classes and JARs, please use the approach below.





Add the sbt-assembly plugin in project/plugins.sbt as below:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.1")

Change the version as appropriate and run sbt assembly to create the fat JAR.
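Optionally, build.sbt can set the entry point and resolve duplicate files during assembly. A minimal sketch for sbt-assembly 0.14.x follows; the main class name com.example.Main is a placeholder:

// Entry point written into the fat JAR's manifest
mainClass in assembly := Some("com.example.Main")

// Resolve duplicate files pulled in from multiple dependencies
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

With this in place, sbt assembly produces a single runnable JAR under the target directory.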

Apache Spark

Apache Spark is a cluster computing engine, and it is a good fit for handling iterative tasks.

Apache Spark supports in-memory data computation, which means the data is moved into the node's memory and then the computation is done there. Because of this, Spark is much faster than Hadoop.

The fundamental data structure in Spark is the RDD (Resilient Distributed Dataset), which is an immutable distributed collection of objects.

Spark does not have its own specific storage layer like Hadoop does, so it can use any underlying data storage: cloud storage, HDFS, NFS, and so on.

Spark is not a replacement for Hadoop, but it is a better replacement for Hadoop MapReduce jobs.

A Spark job is very easy to create compared to Hadoop MapReduce.

Spark Streaming is a good alternative to Storm processing.

MLlib is also a good alternative to Mahout.