Testing the mail functionality with Fake SMTP Server

In this post, I am going to show how we can integrate the Fake SMTP server with a Java application.

Fake SMTP is a dummy SMTP server which is mainly used for testing emails. It’s useful for developers because it writes the email content into a local file directory, so we can check the file content and run validations against it.

The Dockerfile for Fake SMTP is given below.

mail-rest-docker/tree/master/fake-smtp


FROM java:8

RUN mkdir -p /opt && \
    wget -q http://nilhcem.github.com/FakeSMTP/downloads/fakeSMTP-latest.zip && \
    unzip fakeSMTP-latest.zip -d /opt && \
    rm fakeSMTP-latest.zip

VOLUME /output

RUN chmod +x /opt/fakeSMTP-2.0.jar

EXPOSE 25

CMD java -jar /opt/fakeSMTP-2.0.jar --start-server --background --output-dir /output --port 25

If you look at the last line of the file above, you can see that the email content is written under the /output folder, so we have to mount a local directory to it in the docker-compose file.

Next is the REST application code that sends an email to the Fake SMTP server.

mail-rest-docker/blob/master/src/main/java/com/resource/MailResource.java


package com.resource;

import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.util.Properties;

@Path("/sendMail")
public class MailResource {

    Properties properties = System.getProperties();

    public MailResource() {
        properties.put("mail.smtp.host", System.getenv("SMTP_HOST"));
        properties.put("mail.smtp.port", "25");
    }

    @GET
    @Produces(MediaType.TEXT_HTML)
    public Response sendSimpleHelloMail() throws WebApplicationException {

        String to = "test123@gmail.com";
        String from = "test123@gmail.com";
        Session session = Session.getDefaultInstance(properties);
        try {
            MimeMessage message = new MimeMessage(session);
            message.setFrom(new InternetAddress(from));
            message.addRecipient(Message.RecipientType.TO, new InternetAddress(to));
            message.setSubject("Subject");
            message.setContent("<h1>Hello</h1>", "text/html");
            Transport.send(message);
            System.out.println("Sent message successfully....");
        }
        catch (Exception ex) {
            ex.printStackTrace();
            throw new WebApplicationException(ex.getMessage());
        }
        return Response.ok().entity("Mail has been sent successfully").build();
    }
}

The SMTP_HOST environment variable should hold the Fake SMTP server host. In this case, we have to link the Fake SMTP service to the REST container and pass that service name. Refer to the docker-compose.yml file below to see how to do that.

mail-rest-docker/blob/master/docker-compose.yml


rest:
    image: mail-rest:latest
    ports:
        - 8080:8080
    environment:
        - SMTP_HOST=mail
    links:
        - mail:mail
mail:
    image: fake-smtp:latest
    volumes:
        - ./output:/output
    expose:
        - 25

Here we have mounted the local output directory to the container’s /output folder, so the mail content will be available under output. We have also linked the fake-smtp service and passed its name in the SMTP_HOST environment variable.
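
One caveat: if the application is started outside docker-compose, SMTP_HOST is not set, System.getenv returns null, and Properties.put would then throw a NullPointerException. A small defensive variant of the constructor (my suggestion, not in the repository) falls back to localhost:

    public MailResource() {
        // Fall back to localhost when SMTP_HOST is unset (e.g. local runs without docker-compose)
        String smtpHost = System.getenv("SMTP_HOST");
        properties.put("mail.smtp.host", smtpHost != null ? smtpHost : "localhost");
        properties.put("mail.smtp.port", "25");
    }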

Follow the steps below to run this application:

1. Clone this repository
2. Package the project by running ‘mvn package’
3. Run ‘docker images’ and confirm that the ‘mail-rest’ docker image is available.
4. Then go into the fake-smtp folder, build the image by running ‘docker build -t fake-smtp:latest .’ and confirm that the ‘fake-smtp’ docker image is available.
5. Then go to the project root folder (java-mail-rest) and run “docker-compose up -d”
6. Access the REST endpoint with http://localhost:8080/api/sendMail
7. Check the output folder and confirm that there is an .eml file created which has the email content (a small validation sketch follows this list)
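
Since Fake SMTP writes each message as an .eml file, the check in step 7 can also be automated. Below is a minimal, hypothetical Java sketch (not part of the repository) that picks the newest .eml file under ./output and verifies that the body we sent is present:

import java.io.File;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Comparator;

public class MailOutputCheck {

    public static void main(String... args) throws Exception {
        // List all .eml files written by Fake SMTP into the mounted output folder
        File[] emls = new File("output").listFiles((dir, name) -> name.endsWith(".eml"));
        if (emls == null || emls.length == 0) {
            throw new IllegalStateException("No .eml file found under ./output");
        }
        // Pick the most recently written file and verify the HTML body we sent
        File latest = Arrays.stream(emls).max(Comparator.comparingLong(File::lastModified)).get();
        String content = new String(Files.readAllBytes(latest.toPath()));
        if (!content.contains("<h1>Hello</h1>")) {
            throw new IllegalStateException("Mail body not found in " + latest.getName());
        }
        System.out.println("Validated " + latest.getName());
    }
}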

Here is the sample file content

mail-rest-docker/blob/master/output/220717072710377.eml


Date: Sat, 22 Jul 2017 19:27:10 +0000 (UTC)
From: test123@gmail.com
To: test123@gmail.com
Message-ID: <840194110.01500751630345.JavaMail.root@2c7d28e1a4a2>
Subject: Subject
MIME-Version: 1.0
Content-Type: text/html; charset=us-ascii
Content-Transfer-Encoding: 7bit

<h1>Hello</h1>

Refer to the code @ https://github.com/dkbalachandar/mail-rest-docker

How to use the Alex Collins docker-maven-plugin

The “alexec/docker-maven-plugin” is a Maven plugin used to build, test, and publish Docker images. Refer to this link for more information about the plugin: https://github.com/alexec/docker-maven-plugin.

Check out my sample project @ https://github.com/dkbalachandar/java-rest-docker. This is a simple Hello world REST application running on Docker.

In this post, I am going to show how to use the “alexec” docker-maven-plugin with an example. Let’s go into the details one by one.

1. The first step is to add the com.alexecollins.docker plugin to the build section of the pom.xml. Check the snippet below.

java-rest-docker/pom.xml:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com</groupId>
    <artifactId>hello-rest</artifactId>
    <version>1.0</version>
    <properties>
       <docker.image.name>hello-rest</docker.image.name>
    </properties>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <mainClass>com.Grizzly</mainClass>
                        </manifest>
                    </archive>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <classpathPrefix>lib/</classpathPrefix>
                            <mainClass>com.Grizzly</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
            <!-- Add the Alex Collins plugin-->
            <plugin>
                <groupId>com.alexecollins.docker</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>2.11.21</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>clean</goal>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.glassfish.grizzly</groupId>
            <artifactId>grizzly-http-server</artifactId>
            <version>2.3.22</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.grizzly</groupId>
            <artifactId>grizzly-http-servlet</artifactId>
            <version>2.3.24</version>
        </dependency>
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.containers</groupId>
            <artifactId>jersey-container-jetty-servlet</artifactId>
            <version>2.5</version>
        </dependency>
    </dependencies>
</project>

2. The next step is to create a folder named ‘docker’ under src/main. Then go into the ‘docker’ folder and create another folder named ‘helloRest’. You can give it any name you want.

3. The next step is to create a conf.yml and put all the configuration information in it. Refer to the snippet below.

java-rest-docker/src/main/docker/helloRest/conf.yml:


packaging:
  add:
    - target/${project.build.finalName}-jar-with-dependencies.jar
    - src/main/bin
tags:
    - ${project.artifactId}:latest
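
Note that ${project.build.finalName} defaults to ${artifactId}-${version}, so with this pom the jar added to the image is target/hello-rest-1.0-jar-with-dependencies.jar, which matches what the assembly plugin produces.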

I am using the project.artifactId as my image name (hello-rest). If your project’s artifactId contains any upper-case letters then this plugin won’t work, so always use lower-case letters.

Assume that the project artifactId is “hello-Rest”; when you try to build the project, you will get the error below.


[ERROR] Error during callback
com.alexecollins.docker.orchestration.OrchestrationException: com.github.dockerjava.api.exception.InternalServerErrorException: Error parsing reference: "hello-Rest:latest" is not a valid repository/tag

	at com.alexecollins.docker.orchestration.DockerOrchestrator.build(DockerOrchestrator.java:395)
	at com.alexecollins.docker.orchestration.DockerOrchestrator.build(DockerOrchestrator.java:323)
	at com.alexecollins.docker.orchestration.DockerOrchestrator.build(DockerOrchestrator.java:855)
	at com.alexecollins.docker.mojo.PackageMojo.doExecute(PackageMojo.java:16)
	at com.alexecollins.docker.mojo.AbstractDockerMojo.execute(AbstractDockerMojo.java:164)
	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
	at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
	at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
	at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
	at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
	at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: Error parsing reference: "hello-Rest:latest" is not a valid repository/tag



4. The next step is to create the Dockerfile. Check the snippet below, which has all the information.

java-rest-docker/src/main/docker/helloRest/Dockerfile



FROM java:8

WORKDIR /opt

ADD hello-rest-${project.version}-jar-with-dependencies.jar /opt/helloRest.jar
ADD bin /opt/bin

RUN chmod +x /opt/helloRest.jar
RUN chmod +x /opt/bin/run.sh

CMD ["/opt/bin/run.sh"]

EXPOSE 8080
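
Notice that the Dockerfile references ${project.version}; the plugin filters Maven properties into the Dockerfile before building, which is why the raw file can refer to them directly.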


5. Run the ‘mvn clean package’ command, check the build log, and confirm that the image ‘hello-rest’ is built with the configuration information from conf.yml and the Dockerfile.
6. Run the ‘docker images’ command to confirm the image availability.
7. Finally, run the docker image with the command ‘docker run -p 8080:8080 hello-rest’ and test it with the URL http://localhost:8080/api/greeting (a small client sketch follows).
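
If you’d rather verify the endpoint from code than from a browser, a tiny hypothetical Java client (not part of the project) can probe it:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GreetingCheck {

    public static void main(String... args) throws Exception {
        // Probe the REST endpoint exposed by the hello-rest container
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/api/greeting").openConnection();
        System.out.println("HTTP status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}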

docker-gc: Utility to do garbage collection

The “docker-gc” utility is used to clean up unused containers and images. By default, it removes all containers that exited more than an hour ago, as well as images that don’t belong to any remaining containers.

Refer to the GitHub repository (docker-gc).

We can run this utility either as a script or as a container. To run it as a container, use the command below. The container will start up, run a garbage collection pass, and then shut down.


docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc:ro spotify/docker-gc

There is another cron utility, docker-gc-cron, which internally uses docker-gc to clean up unused Docker containers and images. It is very useful for scheduling the cleanup process on Jenkins CI nodes.

docker-slim: Utility to reduce the size of fat Docker images

“docker-slim” is a magic diet pill for our containers. It uses static and dynamic analysis to create skinny variants of our fat images.

To use docker-slim, we have to download its binary from the docker-slim GitHub repository. Binaries are available for Linux and Mac. Once we download the binary, we add it to the PATH.

I have used the “helloworld-docker” image for this exercise. Refer to my GitHub project (java-rest-docker).

Assume that I have already built the docker image. Let’s run the “docker images” command to check its size. Below is the response.

Command: docker images


REPOSITORY               TAG                 IMAGE ID            CREATED          SIZE
hello-rest               latest              81a33e78995c        3 minutes ago       651.4 MB
java                     8                   d23bdf5b1b1b        4 months ago        643.2 MB

Let’s run the docker-slim utility now. The utility will run and create a new slim image.


  sudo docker-slim build --http-probe hello-rest

Then run the “docker images” command and check the output. In the output, I see a new image named “hello-rest.slim”, and its size is 193.6 MB, which is much better than the original image size. Make sure to run the newly created image and check whether it still works (see the note after the listing below).

Command: docker images


REPOSITORY               TAG                 IMAGE ID            CREATED          SIZE
hello-rest.slim          latest              1507f864ebbc        9 seconds ago       193.6 MB
hello-rest-docker        latest              81a33e78995c        3 minutes ago       651.4 MB
java                     8                   d23bdf5b1b1b        4 months ago        643.2 MB
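
To verify the slim image end to end, you can run it the same way as the original image, for example ‘docker run -p 8080:8080 hello-rest.slim’, and hit the REST endpoint again to confirm it still responds.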

Create Docker image with Maven build

There are lots of Maven Docker plugins available to integrate Docker with Maven.

In this example, I am going to show how to build a Docker image while building a Maven project.

Copy the snippet below into your pom.xml file, then create a Maven property “docker.image.name” with the appropriate Docker image name, and also make sure that the Dockerfile is available in the correct location.

Then run ‘mvn install’ and, once it’s done, run ‘docker images’ and check that the Docker image is available in the list of images.

pom.xml:


<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.4.0</version>
    <executions>
        <execution>
            <goals>
                <goal>java</goal>
            </goals>
        </execution>
        <execution>
            <id>build-image</id>
            <phase>install</phase>
            <goals>
                <goal>exec</goal>
            </goals>
            <configuration>
                <executable>docker</executable>
                <arguments>
                    <argument>build</argument>
                    <argument>-t=${docker.image.name}</argument>
                    <argument>.</argument>
                </arguments>
            </configuration>
        </execution>
    </executions>
</plugin>
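
In effect, this binds a plain ‘docker build -t ${docker.image.name} .’ invocation to the install phase, so the image is rebuilt from the project root every time ‘mvn install’ runs.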

Apache HBase – Java Client API with Docker HBase

HBase is the Hadoop database, a distributed, scalable, big data store. We can use HBase when we need random, real-time read/write access to our Big Data.

I have used the Standalone HBase and Docker HBase for this exercise.

The first step is to install Docker if you don’t have it, and then follow the steps below to install Docker HBase.

  1. Refer to this repository https://github.com/sel-fish/hbase.docker and follow the instructions to install Docker HBase.
  2. I have an Ubuntu VM, so I used my hostname instead of ‘myhbase’. If you used your hostname, you don’t need to update the /etc/hosts file, but make sure to check /etc/hosts and verify the entry below.

    
    <<MACHINE_IP_ADDRESS>> <<HOSTNAME>>
    
    
  3. My docker run command is given below.
    
    docker run -d -h $(hostname) -p 2181:2181 -p 60000:60000 -p 60010:60010 -p 60020:60020 -p 60030:60030 --name hbase debian-hbase
    
    
  4. Once you are done, check the links http://localhost:60010 (Master) and http://localhost:60030 (Region Server)

To access HBase from a Java client, add the hbase-client dependency to the pom.xml.

pom.xml


<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>1.3.0</version>
</dependency>

To access the HBase shell, follow the steps below:


1. Run 'docker exec -it hbase bash' to enter the container
2. Go to the '/opt/hbase/bin' folder
3. Run './hbase shell' and it will open up the HBase shell.

You can use the HBase shell available inside the docker container and run commands to perform all the operations (create table, list, put, and scan).


root@HOST-NAME:/opt/hbase/bin# ./hbase shell
2017-02-15 14:55:26,117 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2017-02-15 14:55:27,095 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.2.0-cdh5.7.0, r49168a0b3987d5d8b1f1b359417666f477a0618e, Wed Jul 20 23:13:03 EDT 2016

hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 3.0000 average load

hbase(main):002:0> list
TABLE                                                                                                                                                                                         
customer                                                                                                                                                                                      
1 row(s) in 0.0330 seconds

=> ["customer"]
hbase(main):003:0> create 'user','personal'
0 row(s) in 1.2540 seconds

=> Hbase::Table - user
hbase(main):004:0> list
TABLE                                                                                                                                                                                         
customer                                                                                                                                                                                      
user                                                                                                                                                                                          
2 row(s) in 0.0080 seconds

=> ["customer", "user"]
hbase(main):005:0> list 'user'
TABLE                                                                                                                                                                                         
user                                                                                                                                                                                          
1 row(s) in 0.0090 seconds

=> ["user"]
hbase(main):006:0> put 'user','row1','personal:name','bala'
0 row(s) in 0.1500 seconds

hbase(main):007:0> put 'user','row2','personal:name','chandar'
0 row(s) in 0.0110 seconds

hbase(main):008:0> scan 'user'
ROW                                              COLUMN+CELL                                                                                                                                  
 row1                                            column=personal:name, timestamp=1487170597246, value=bala                                                                                    
 row2                                            column=personal:name, timestamp=1487170608622, value=chandar                                                                                 
2 row(s) in 0.0700 seconds

hbase(main):009:0> get 'user' , 'row2'
COLUMN                                           CELL                                                                                                                                         
 personal:name                                   timestamp=1487170608622, value=chandar                                                                                                       
1 row(s) in 0.0110 seconds



The hbase-site.xml will look like this. It is available in the docker container under /opt/hbase/conf.

hbase-site.xml


<configuration>
  <property>
    <name>hbase.master.port</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
  <property>
    <name>hbase.regionserver.port</name>
    <value>60020</value>
  </property>
  <property>
    <name>hbase.regionserver.info.port</name>
    <value>60030</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.localcluster.port.ephemeral</name>
    <value>false</value>
  </property>
</configuration>
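
The Java examples below create their Configuration via HBaseConfiguration.create(), which picks up this hbase-site.xml from the classpath. If the file is not on the classpath, the same settings can be applied programmatically. Here is a minimal sketch, where HOSTNAME stands for your docker host’s name, mirroring the commented-out lines in the CreateTable example below:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectionDemo {

    public static void main(String... args) throws Exception {
        // HBaseConfiguration.create() reads hbase-site.xml from the classpath;
        // the set() calls below override it for a remote docker HBase.
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "HOSTNAME");
        config.set("hbase.zookeeper.property.clientPort", "2181");
        try (Connection connection = ConnectionFactory.createConnection(config)) {
            System.out.println("Connected to HBase: " + !connection.isClosed());
        }
    }
}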

Create Table



import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateTable {

    public static void main(String... args) throws Exception {
        System.out.println("Creating Htable starts");
        Configuration config = HBaseConfiguration.create();
        //config.set("hbase.zookeeper.quorum", "HOSTNAME");
        //config.set("hbase.zookeeper.property.clientPort","2181");
        Connection connection = ConnectionFactory.createConnection(config);
        Admin admin = connection.getAdmin();
        TableName tableName = TableName.valueOf("customer");
        if (!admin.tableExists(tableName)) {
            HTableDescriptor htable = new HTableDescriptor(tableName);
            htable.addFamily(new HColumnDescriptor("personal"));
            htable.addFamily(new HColumnDescriptor("address"));
            admin.createTable(htable);
        } else {
            System.out.println("customer Htable is exists");
        }
        admin.close();
        connection.close();
        System.out.println("Creating Htable Done");
    }
}

List Tables



import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListTable {

    public static void main(String... args) throws Exception {
        Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        Admin admin = connection.getAdmin();
        HTableDescriptor[] tableDescriptors = admin.listTables();
        for (HTableDescriptor tableDescriptor : tableDescriptors) {
            System.out.println("Table Name:"+ tableDescriptor.getNameAsString());
        }
        admin.close();
        connection.close();
    }
}


Delete Table



import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

import java.io.IOException;

public class DeleteTable {

    public static void main(String... args) {

        System.out.println("DeleteTable Starts");
        Connection connection = null;
        Admin admin = null;

        try {
            connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
            TableName tableName = TableName.valueOf("customer");
            admin = connection.getAdmin();
            admin.disableTable(tableName);
            admin.deleteTable(tableName);
            if(!admin.tableExists(tableName)){
                System.out.println("Table is deleted");
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (admin != null) admin.close();
                if (connection != null) connection.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        System.out.println("DeleteTable Done");
    }
}

Delete Data



import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteData {

    public static void main(String... args) throws Exception {
        System.out.println("DeleteData starts");
        Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        TableName tableName = TableName.valueOf("customer");
        Table table = connection.getTable(tableName);
        Delete delete = new Delete(Bytes.toBytes("row1"));
        table.delete(delete);
        Get get = new Get(Bytes.toBytes("row1"));
        Result result = table.get(get);
        System.out.println("result:"+result);
        if (result.value() == null) {
            System.out.println("Delete Data is successful");
        }
        table.close();
        connection.close();
    }

}
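
A side note on the check above: Result.value() returns the value of the first column only, so after deleting the whole row, result.isEmpty() would be the more direct test; both work for this single-row example.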

To populate the HBase table:


import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PopulateData {

    public static void main(String... args) throws Exception {

        Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());

        TableName tableName = TableName.valueOf("customer");
        Table table = connection.getTable(tableName);

        Put p = new Put(Bytes.toBytes("row1"));
        //Customer table has personal and address column families. So insert data for 'name' column in 'personal' cf
        // and 'city' for 'address' cf
        p.addColumn(Bytes.toBytes("personal"), Bytes.toBytes("name"), Bytes.toBytes("bala"));
        p.addColumn(Bytes.toBytes("address"), Bytes.toBytes("city"), Bytes.toBytes("new york"));
        table.put(p);
        Get get = new Get(Bytes.toBytes("row1"));
        Result result = table.get(get);
        byte[] name = result.getValue(Bytes.toBytes("personal"), Bytes.toBytes("name"));
        byte[] city = result.getValue(Bytes.toBytes("address"), Bytes.toBytes("city"));
        System.out.println("Name: " + Bytes.toString(name) + " City: " + Bytes.toString(city));
        table.close();
        connection.close();
    }
}

To scan the table:


import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

public class ScanTable {

    public static void main(String... args) {
        Connection connection = null;
        ResultScanner scanner = null;
        try {
            connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
            TableName tableName = TableName.valueOf("customer");
            Table table = connection.getTable(tableName);
            Scan scan = new Scan();
            // Scanning the required columns
            scan.addColumn(Bytes.toBytes("personal"), Bytes.toBytes("name"));

            scanner = table.getScanner(scan);

            // Reading values from scan result
            for (Result result = scanner.next(); result != null; result = scanner.next())
                System.out.println("Found row : " + result);
            //closing the scanner
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (scanner != null) scanner.close();
            if (connection != null) try {
                connection.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}


REST API to produce messages to Kafka using the Docker Maven Plugin

I have developed a simple REST API to send the incoming message to Apache Kafka.

I have used Docker Kafka (https://github.com/spotify/docker-kafka) and the Docker Maven Plugin(https://github.com/fabric8io/docker-maven-plugin) to do this.

So before going through this post, familiarize yourself with Docker and Docker Compose.

The Docker Maven Plugin provides a nice way to specify multiple images in the pom.xml and link them as necessary. We could also use Docker Compose for this, but I have used the plugin here.

    1. Clone the project (https://github.com/dkbalachandar/kafka-message-sender)
    2. Then go into kafka-message-sender folder
    3. Then enter ‘mvn clean install’
    4. Then enter ‘mvn docker:start’. Then enter ‘docker ps’ and make sure that two containers are running. The names of those containers are kafka and kafka-rest
    5. Then access http://localhost:8080/api/kafka/send/test (POST) and confirm that you see a “message has been sent” response in the browser
    6. Then enter the below command and make sure that whatever message you sent is available in Kafka via the command-line console consumer (you can also consume it via a Flume agent); a producer-side sketch follows the command.
docker exec -it kafka /opt/kafka_2.11-0.8.2.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
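
For reference, the send path inside the REST resource boils down to a plain Kafka producer. Here is a minimal, hypothetical sketch (not the repository’s actual code) using the kafka-clients producer API, assuming a broker on localhost:9092 and the ‘test’ topic used above:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaMessageSender {

    public static void main(String... args) {
        // Broker address is an assumption; docker-kafka exposes 9092 by default
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        try {
            // Send one message to the 'test' topic; the console consumer above will show it
            producer.send(new ProducerRecord<>("test", "hello from the REST API"));
        } finally {
            producer.close();
        }
    }
}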