Ruby Cucumber – Running a specific feature file

If you want to run a specific feature file, use the below command:

bundle exec rake cucumber FEATURE=features/FEATURE_FILE_NAME

To run a specific scenario by line number, use the below command:

bundle exec rake cucumber FEATURE=features/FEATURE_FILE_NAME:LINE_NUMBER


How to copy files between two CDH clusters

Sometimes, we may need to copy the content from one CDH5 cluster to another CDH5 cluster. We can make use of distcp to achieve that.


hadoop distcp -m 10 -prbugpcaxt hdfs://ACTIVE_NAME_NODE_OF_CLUSTER_A:8020/FILE_PATH hdfs://ACTIVE_NAME_NODE_OF_CLUSTER_B:8020/FILE_PATH

-m specifies the maximum number of simultaneous copies, i.e. the number of map tasks.

-p preserves file attributes. The flags are: r: replication number, b: block size, u: user, g: group, p: permission, c: checksum-type, a: ACL, x: XAttr, t: timestamp.


hadoop distcp -m 100 -pbugp hdfs:// hdfs://

If you face any memory issues like the one below during the map or reduce phase, pass the -Dmapreduce.map.memory.mb and/or -Dmapreduce.reduce.memory.mb arguments.

Container is running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical memory used; 6.1 GB of 2.1 GB virtual memory used. Killing container.

hadoop distcp -Dmapreduce.reduce.memory.mb=2000 -m 100 -pbugp hdfs:// hdfs://

Java 8 Lambda – Using Map

In this post, I am going to show some examples of using the Java 8 Stream map() method.

I have an ArrayList of strings and want to convert its contents to upper case. We can easily do that with map(), but I want to make sure that my code works fine for all types of input: a valid list, an empty list and a null list.

Refer to the below code to see how to do that.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class LambdaExample {

    public static void main(String[] args) {

        //Passing a valid list
        List<String> validList = Arrays.asList("Name1", "Name2");
        System.out.println("Converting valid list contents to upper case: " + convertAllToUpperCase(validList));
        //Passing an empty list
        System.out.println("Converting an empty list  to upper case: " + convertAllToUpperCase(Collections.emptyList()));
        //Passing null
        System.out.println("Converting null list to upper case: " + convertAllToUpperCase(null));
    }

    //Fall back to an empty list when the input is null, then map each element to upper case
    private static List<String> convertAllToUpperCase(List<String> list) {
        return Optional.ofNullable(list)
                .orElseGet(Collections::emptyList)
                .stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }
}

Here, I have used the “Optional” class to handle a null list. The code “Optional.ofNullable(list).orElseGet(Collections::emptyList)” returns an empty list when the input list is null.

The output of the above program will look like below,

Converting valid list contents to upper case: [NAME1, NAME2]
Converting an empty list  to upper case: []
Converting null list to upper case: []
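Note that map() does not modify the source list; it produces a new one. A minimal sketch to illustrate this (the class and variable names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapDoesNotMutate {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Name1", "Name2");
        // map() returns a new stream of transformed elements;
        // the source list is left untouched
        List<String> upper = names.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(names);  // [Name1, Name2]
        System.out.println(upper);  // [NAME1, NAME2]
    }
}
```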

Now let’s see another example to show how we can map one model class to another class via a Java 8 functional interface.

There are two model classes here: Person and PersonBean. I want to iterate through the Person list and create another list of PersonBean.

I have used the Function interface, which represents a function that accepts one argument and produces a result. Here the argument is a Person and the result is a PersonBean.

The first step is to define the function and then use it while mapping. Refer to the below code to see how to do that.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LambdaExample {

    //Maps a Person to a PersonBean
    Function<Person, PersonBean> personBeanFunction = person -> {
        PersonBean personBean = new PersonBean();
        personBean.age = person.age;
        personBean.firstName = person.givenName;
        personBean.lastName = person.familyName;
        return personBean;
    };

    public static void main(String[] args) {
        LambdaExample lambdaExample = new LambdaExample();
        List<Person> personList = new ArrayList<>();
        Person person = new Person();
        person.age = 45;
        person.familyName = "test";
        person.givenName = "test";
        personList.add(person);
        System.out.println("PersonList list:" + personList);
        System.out.println("Person Bean list:" + lambdaExample.convert(personList));
    }

    //Handle a null input list, then map each Person to a PersonBean using the function above
    private List<PersonBean> convert(List<Person> list) {
        return Optional.ofNullable(list).orElseGet(Collections::emptyList).stream()
                .map(personBeanFunction)
                .collect(Collectors.toList());
    }

    private static class Person {
        String familyName;
        String givenName;
        int age;

        public String toString() {
            final StringBuilder sb = new StringBuilder("Person{");
            sb.append("familyName='").append(familyName).append('\'');
            sb.append(", givenName='").append(givenName).append('\'');
            sb.append(", age=").append(age);
            sb.append('}');
            return sb.toString();
        }
    }

    private static class PersonBean {
        String firstName;
        String lastName;
        int age;

        public String toString() {
            final StringBuilder sb = new StringBuilder("PersonBean{");
            sb.append("firstName='").append(firstName).append('\'');
            sb.append(", lastName='").append(lastName).append('\'');
            sb.append(", age=").append(age);
            sb.append('}');
            return sb.toString();
        }
    }
}
The output of the above program will look like below,

PersonList list:[Person{familyName='test', givenName='test', age=45}]
Person Bean list:[PersonBean{firstName='test', lastName='test', age=45}]
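Since personBeanFunction is just a java.util.function.Function, it can also be composed with other functions via andThen/compose. A small, self-contained sketch using string functions (the names are illustrative):

```java
import java.util.function.Function;

public class ComposeExample {
    public static void main(String[] args) {
        Function<String, String> trim = String::trim;
        Function<String, String> upper = String::toUpperCase;
        // andThen runs trim first, then applies upper to its result
        Function<String, String> cleanUp = trim.andThen(upper);
        System.out.println(cleanUp.apply("  hello "));  // prints HELLO
    }
}
```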

HK2 injection in Standalone Java Application with Custom Binder

In my previous post, I gave an example of using the HK2 injection in a stand-alone application.

Please refer to it at the below link.
Hk2 Injection with hk2 inhabitant file

In the above example, we have to create an HK2 inhabitant file which contains the binding information, such as the contract and service names and their mapping. The overhead in this approach is generating the HK2 default file. We can automate the file creation in the Maven build with the hk2-inhabitant-generator plugin. But at some point, we may want to define the mapping explicitly, such as when using a service and contract from different libs; the hk2-inhabitant-generator may not add that contract/service mapping to the inhabitant file. To resolve this problem, we can define a custom binder which contains all the binding details.

Let’s see how we can use HK2 injection in a stand-alone application with a custom binder.

The below one is my POM file.

We have a standalone application which is used for adding, deleting and updating user details in a HashMap or a Google Guava cache. We have two interfaces (UserService for the service and UserDao for the DAO) and three implementation classes (UserServiceImpl, UserGuavaCacheImpl and UserLocalCacheDaoImpl).

UserGuavaCacheImpl is used to do CRUD operations on the Guava cache and UserLocalCacheDaoImpl is used to do CRUD operations on the HashMap.

We decide to use either the local cache or the Google Guava cache based on a system property. This is a simple example to show how we can use HK2 injection.
Refer to the below image to see my project structure.


All our interfaces should be annotated with @Contract and all the implementation classes should be annotated with @Service.

Refer to my service implementation below.

Refer to my various DAO implementations below.

Refer to my custom binder file, which has the binding information, below.


In the above file, I have used the “Named” annotation as we have two services for the UserDao interface. If we don’t provide that, then by default the service locator injects the first available implementation.
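Since the binder file itself is not shown above, here is a minimal sketch of what such a custom binder can look like, assuming the class and binding names used in this post (UserServiceImpl, UserGuavaCacheImpl, UserLocalCacheDaoImpl); it needs the hk2-utils dependency on the classpath:

import javax.inject.Singleton;

import org.glassfish.hk2.utilities.binding.AbstractBinder;

public class DependencyBinder extends AbstractBinder {

    @Override
    protected void configure() {
        // Bind the service implementation to its contract under a name
        bind(UserServiceImpl.class).to(UserService.class).named("empService").in(Singleton.class);
        // Two implementations of the same contract, distinguished by name
        bind(UserGuavaCacheImpl.class).to(UserDao.class).named("empGuavaCacheDao").in(Singleton.class);
        bind(UserLocalCacheDaoImpl.class).to(UserDao.class).named("empLocalCacheDao").in(Singleton.class);
    }
}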

Below is my main class, which invokes the user service to perform various CRUD operations with the User object.
The below code creates a ServiceLocator instance and then binds the created service locator with our custom binder.

ServiceLocator serviceLocator = ServiceLocatorFactory.getInstance().create("serviceLocator");
ServiceLocatorUtilities.bind(serviceLocator, new DependencyBinder());

We can use the ServiceLocator object to get an object instance like below:

UserService userService = serviceLocator.getService(UserService.class, "empService");

We have two DAO implementations and need to decide which one to use. The HK2 IterableProvider will give both implementation classes, so I have used a system property to decide the appropriate cache implementation. Refer to the below code from UserServiceImpl.

    public UserServiceImpl(IterableProvider<UserDao> iterableProviders) {
        String cache = System.getProperty("CACHE");
        // The property values checked here are illustrative; use the values your application expects
        if ("GUAVA".equalsIgnoreCase(cache)) {
            this.userDao = iterableProviders.named("empGuavaCacheDao").get();
        } else if ("LOCAL".equalsIgnoreCase(cache)) {
            this.userDao = iterableProviders.named("empLocalCacheDao").get();
        }
    }

The output of my program is as given below,

Add user details
The user details after it has been added User: User{id='0701f22b-10b0-4d6f-8c8d-410da89646f9', firstName='First Name', lastName='Last Name}
Now fetch the user details with ID
User :User{id='0701f22b-10b0-4d6f-8c8d-410da89646f9', firstName='First Name', lastName='Last Name}
Now update the user details
User After Updation:User{id='0701f22b-10b0-4d6f-8c8d-410da89646f9', firstName='Bala', lastName='Samy}
Now delete the user details
User After Deletion:null

Testing HK2 injection
Fetching the various DAO implementation
UserDao userDao = serviceLocator.getService(UserDao.class, "empGuavaCacheDao")
userDao instanceof UserGuavaCacheImpl - > Its an instance of UserGuavaCacheImpl 
userDao = serviceLocator.getService(UserDao.class, "empLocalCacheDao")
userDao instanceof UserLocalCacheDaoImpl - > Its an instance of UserLocalCacheDaoImpl 


Refer the code @hk2-java-custombinder



How to use Oracle XMLTable syntax

I have got a requirement to query against XMLTYPE content and fetch records from it.

XMLTable maps the result of an XQuery evaluation into relational rows and columns, so I am going to use it here.

Assume that we have a table called user_events which has two columns, event_data and event_date; event_data is an XMLTYPE column. Now I want to query the user_events table and find the list of users who used the system during January.

Here is the syntax.

select distinct userName from user_events e,
XMLTABLE ('/events/user'
        PASSING e.event_data
        COLUMNS userName  VARCHAR2(50) PATH 'userName', 
                userId VARCHAR2(10) PATH 'userId')
where event_date >= '01-JAN-18' AND event_date <= '31-JAN-18'
order by userName asc; 
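Note that comparing event_date against strings like '01-JAN-18' relies on the session's NLS_DATE_FORMAT. A safer variant of the same query uses explicit TO_DATE conversions (this sketch assumes each event_data document contains /events/user elements, as the column paths above suggest):

select distinct userName from user_events e,
XMLTABLE ('/events/user'
        PASSING e.event_data
        COLUMNS userName VARCHAR2(50) PATH 'userName',
                userId   VARCHAR2(10) PATH 'userId')
where event_date >= TO_DATE('01-JAN-18', 'DD-MON-RR')
  and event_date <  TO_DATE('01-FEB-18', 'DD-MON-RR')
order by userName asc;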

Docker installation in Ubuntu

The below steps are for installing a specific version of docker-ce in Ubuntu.

1. Run the below command to download the package.


2. Run the below command to install it.
sudo dpkg -i docker-engine_17.05.0~ce~rc3-0~ubuntu-trusty_amd64.deb

The below steps are for installing the docker-engine package in Ubuntu.

1. Run the below command to find out the available versions.
sudo apt-cache showpkg docker-engine

2. Then select the specific version and run the below command to install docker-engine. The below one installs version 1.13.1 of docker-engine.
sudo apt-get install docker-engine=1.13.1-0~ubuntu-trusty

How to use Maven Shade Plugin

Maven Shade Plugin provides the capability to package the artifact in an uber-jar, including its dependencies and to shade(rename) the packages of some of the dependencies.

This plugin takes all the dependencies, extracts their content and puts them as-is into one big (uber) jar. This is good for a runnable jar, but if you plan to use this jar as a Maven dependency, there may be conflicts with the client jar. So we need to use the relocations option to relocate the conflicting classes to a different package.

In this post, I will show how to use this plugin with an example.

The below example is for creating the final Uber jar without relocations.

Create Uber Jar without relocations

Consider that we have a web app which exposes a REST web service and running on a grizzly container. Now, let’s use the shade plugin to create the fat jar.

I am not going to give the code examples here as this post explains about the shade plugin only.

Refer the complete code

The below plugin configuration has to be added in the maven pom file.

Refer to the configuration section, where we have to specify various options. They are given below:

  • mainClass – The entry point of the application
  • createSourcesJar – Create the sources jar as well
  • shadedClassifierName – The name of the classifier used in case the shaded artifact is attached
  • shadedArtifactAttached – Defines whether the shaded artifact should be attached as a classifier to the original artifact. If it is false, the original jar is replaced by the shaded jar

					<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
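The pom snippet is not fully visible above, so here is a minimal sketch of the plugin configuration with the options listed (the version and the main class name are illustrative and should be adjusted for your project):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.1.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <createSourcesJar>true</createSourcesJar>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <shadedClassifierName>shaded</shadedClassifierName>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.example.App</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>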

Then execute the ‘mvn package’ command and check the target folder. You will see the shaded jar; its name ends with the shaded suffix. Refer to the below screenshot.


When you extract the contents of the shaded jar, you will see the below.


Create Uber Jar with relocations

Now we are going to see how we can relocate some of the classes inside the uber jar. Let’s use the same example and add the below relocations inside the configuration element.

		<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">

Check the relocations element and its child elements. Here, I am planning to rename/move the org.glassfish.grizzly.utils package to org.shaded.glassfish.grizzly.utils and exclude the package org.glassfish.grizzly.utils.conditions.
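Based on the description above, the relocations block can be sketched as follows (it goes inside the plugin's configuration element):

<relocations>
    <relocation>
        <pattern>org.glassfish.grizzly.utils</pattern>
        <shadedPattern>org.shaded.glassfish.grizzly.utils</shadedPattern>
        <excludes>
            <exclude>org.glassfish.grizzly.utils.conditions.*</exclude>
        </excludes>
    </relocation>
</relocations>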

Then run mvn package again to create the shaded jar and extract its contents. It will look like below.


As you can see, the package org.glassfish.grizzly.utils has been moved to org.shaded.glassfish.grizzly.utils, but the inner package org.glassfish.grizzly.utils.conditions has been kept in its original location.

While relocating the packages, this plugin also updates the affected bytecode, so you will not see any issues while running the uber jar.