Java 8 Lambda – Using Map

In this post, I am going to show some examples of using the Java 8 Stream map() method.

I have an ArrayList of strings and want to change the contents of that list to upper case. We can easily do that with map(), but I want to make sure the code works for all kinds of input: a valid list, an empty list and a null list.

Refer to the below code to see how to do that.


import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class LambdaExample {

    public static void main(String[] args) {

        //Passing valid list
        List<String> validList = Arrays.asList("Name1", "Name2");
        System.out.println("Converting valid list contents to upper case: " + convertAllToUpperCase(validList));
        //Passing an empty list
        System.out.println("Converting an empty list to upper case: " + convertAllToUpperCase(Collections.emptyList()));
        //Passing null
        System.out.println("Converting null list to upper case: " + convertAllToUpperCase(null));
    }

    private static List<String> convertAllToUpperCase(List<String> list) {
        return Optional.ofNullable(list)
                .orElseGet(Collections::emptyList).stream().map(String::toUpperCase)
                .collect(Collectors.toList());
    }

}


Here, I have used the “Optional” class to handle a null list. The code “Optional.ofNullable(list).orElseGet(Collections::emptyList)” returns an empty list when the input list is null.

The output of the above program will look like below,


Converting valid list contents to upper case: [NAME1, NAME2]
Converting an empty list to upper case: []
Converting null list to upper case: []
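
As a side note, the same null guard can be written without Optional. Below is a minimal sketch (the class name NullSafeMapExample is mine, and it assumes Java 9+ for Objects.requireNonNullElse and List.of):

```java
import java.util.Collections;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class NullSafeMapExample {

    // Fall back to an empty list before streaming, without using Optional.
    static List<String> convertAllToUpperCase(List<String> list) {
        return Objects.requireNonNullElse(list, Collections.<String>emptyList())
                .stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(convertAllToUpperCase(List.of("Name1", "Name2"))); // [NAME1, NAME2]
        System.out.println(convertAllToUpperCase(null)); // []
    }
}
```

Both approaches behave the same for valid, empty and null inputs; Optional simply expresses the fallback as a fluent chain.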

Now let's see another example to show how we can map one model class to another class via a Java 8 functional interface.

There are two model classes here: Person and PersonBean. I want to iterate through the Person list and create another list of PersonBean.

I have used the Function interface, which represents a function that accepts one argument and produces a result. Here, the argument is a Person and the result is a PersonBean.

The first step is to define the function and then use that function while mapping. Refer to the below code to see how to do that.


import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LambdaExample {

    Function<Person, PersonBean> personBeanFunction = person -> {
        PersonBean personBean = new PersonBean();
        personBean.age = person.age;
        personBean.firstName = person.givenName;
        personBean.lastName = person.familyName;
        return personBean;
    };

    public static void main(String[] args) {
        LambdaExample lambdaExample = new LambdaExample();
        List<Person> personList = new ArrayList<>();
        Person person = new Person();
        person.age = 45;
        person.familyName = "test";
        person.givenName = "test";
        personList.add(person);
        System.out.println("PersonList list:" + personList);
        System.out.println("Person Bean list:" + lambdaExample.convert(personList));
    }

    private List<PersonBean> convert(List<Person> list) {
        return Optional.ofNullable(list).orElseGet(Collections::emptyList).stream()
                .map(personBeanFunction).collect(Collectors.toList());
    }

    private static class Person {
        String familyName;
        String givenName;
        int age;

        @Override
        public String toString() {
            final StringBuilder sb = new StringBuilder("Person{");
            sb.append("familyName='").append(familyName).append('\'');
            sb.append(", givenName='").append(givenName).append('\'');
            sb.append(", age=").append(age);
            sb.append('}');
            return sb.toString();
        }
    }

    private static class PersonBean {
        String firstName;
        String lastName;
        int age;

        @Override
        public String toString() {
            final StringBuilder sb = new StringBuilder("PersonBean{");
            sb.append("firstName='").append(firstName).append('\'');
            sb.append(", lastName='").append(lastName).append('\'');
            sb.append(", age=").append(age);
            sb.append('}');
            return sb.toString();
        }
    }
}

The output of the above program will look like below,


PersonList list:[Person{familyName='test', givenName='test', age=45}]
Person Bean list:[PersonBean{firstName='test', lastName='test', age=45}]
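
Since personBeanFunction is just a java.util.function.Function, mappers like it can also be composed with andThen before being handed to map(). Here is a small self-contained sketch (the names are mine, not from the example above):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FunctionComposeExample {

    // Trim each string, then take its length, via a composed Function.
    static List<Integer> trimmedLengths(List<String> values) {
        Function<String, String> trim = String::trim;
        Function<String, Integer> length = String::length;
        // andThen runs trim first, then length on the trimmed result
        Function<String, Integer> trimmedLength = trim.andThen(length);
        return values.stream().map(trimmedLength).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(trimmedLengths(List.of("  hello  ", "hi "))); // [5, 2]
    }
}
```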


HK2 injection in Standalone Java Application with Custom Binder

In my previous post, I gave an example of using HK2 injection in a stand-alone application.

Please refer to it at the below link.
Hk2 Injection with hk2 inhabitant file

In that example, we had to create an HK2 inhabitant file which contains the binding information, such as the contract and service names and their mapping. The overhead in this approach is generating the HK2 default inhabitant file. We can automate the file creation in the Maven build with the hk2-inhabitant-generator plugin. But at some point, we may want to define the mapping explicitly, for example when using a service and a contract from different libraries; the hk2-inhabitant-generator may not add that contract/service mapping to the inhabitant file. To resolve this problem, we need to define a custom binder which contains all the binding details.

Let’s see how we can use HK2 injection in a stand-alone application with a custom binder.

Below is my POM file.
hk2-injection-pom

We have a stand-alone application which is used for adding, deleting and updating user details in a HashMap or a Google Guava Cache. We have two interfaces (UserService for the service and UserDao for the DAO) and three implementation classes (UserServiceImpl, UserGuavaCacheImpl and UserLocalCacheDaoImpl).

UserGuavaCacheImpl is used to do CRUD operations on a Guava Cache, and UserLocalCacheDaoImpl is used to do CRUD operations on a HashMap.

We decide to use either the local cache or the Google Guava cache based on a system property. This is a simple example to show how we can use HK2 injection.
Refer to the below image for my project structure.

hk2-project-structure

All our interfaces should be annotated with @Contract and all the implementation classes should be annotated with @Service.

Below is my service implementation.
hk2-userservice

Below are my various DAO implementations.

Below is my custom binder file, which has the binding information.

hk2-dependencybinder

In the above file, I have used the “Named” annotation, as we have two services for the UserDao interface. If we don’t provide it, then by default the service locator injects the first available implementation.
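
Since the binder itself only appears as an image here, below is a rough sketch of what such a custom binder could look like using HK2's AbstractBinder (the class and service names are assumptions taken from this post, so treat this as illustrative, not the exact file):

```java
import org.glassfish.hk2.utilities.binding.AbstractBinder;

public class DependencyBinder extends AbstractBinder {

    @Override
    protected void configure() {
        // Bind each implementation to its contract; the names let the
        // service locator disambiguate between the two UserDao services.
        bind(UserServiceImpl.class).to(UserService.class).named("empService");
        bind(UserGuavaCacheImpl.class).to(UserDao.class).named("empGuavaCacheDao");
        bind(UserLocalCacheDaoImpl.class).to(UserDao.class).named("empLocalCacheDao");
    }
}
```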

Below is my main class, which invokes the UserService to perform various CRUD operations with a User object.
hk2-Application
The below code creates a ServiceLocator instance and then binds the created service locator with our custom binder.


ServiceLocator serviceLocator = ServiceLocatorFactory.getInstance().create("serviceLocator");
ServiceLocatorUtilities.bind(serviceLocator, new DependencyBinder());

We can use the ServiceLocator object to get an object instance like below.


UserService userService = serviceLocator.getService(UserService.class, "empService");

We have two DAO implementations, and we need to decide which one to use. The HK2 IterableProvider will give us both implementation classes, so I have used a system property to pick the appropriate cache implementation. Refer to the below code from UserServiceImpl.


@Inject
public UserServiceImpl(IterableProvider<UserDao> iterableProviders) {
    String cache = System.getProperty("CACHE");
    if (CacheDetails.GUAVA.name().equals(cache)) {
        this.userDao = iterableProviders.named("empGuavaCacheDao").get();
    } else if (CacheDetails.LOCAL.name().equals(cache)) {
        this.userDao = iterableProviders.named("empLocalCacheDao").get();
    }
}
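
The CacheDetails enum referenced above is not shown in this post; based on the GUAVA.name() and LOCAL.name() calls, it is presumably something like this (an assumption on my part):

```java
// Identifiers for the two cache-backed DAO implementations
public enum CacheDetails {
    GUAVA,
    LOCAL
}
```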

 
The output of my program is as given below,


Add user details
The user details after it has been added User: User{id='0701f22b-10b0-4d6f-8c8d-410da89646f9', firstName='First Name', lastName='Last Name}
Now fetch the user details with ID
User :User{id='0701f22b-10b0-4d6f-8c8d-410da89646f9', firstName='First Name', lastName='Last Name}
Now update the user details
User After Updation:User{id='0701f22b-10b0-4d6f-8c8d-410da89646f9', firstName='Bala', lastName='Samy}
Now delete the user details
User After Deletion:null


Testing HK2 injection

Fetching the various DAO implementations:

UserDao userDao = serviceLocator.getService(UserDao.class, "empGuavaCacheDao");
userDao instanceof UserGuavaCacheImpl -> it's an instance of UserGuavaCacheImpl
userDao = serviceLocator.getService(UserDao.class, "empLocalCacheDao");
userDao instanceof UserLocalCacheDaoImpl -> it's an instance of UserLocalCacheDaoImpl


 

Refer to the code @hk2-java-custombinder

How to use Oracle XMLTable syntax

I got a requirement to query against an XMLTYPE column and fetch records from it.

XMLTable maps the result of an XQuery evaluation into relational rows and columns, so I am going to use it here.

Assume that we have a table called user_events which has two columns, event_data and event_date, where event_data is an XMLTYPE column. Now I want to query the user_events table and find the list of users who used the system during January.

Here is the syntax.


select distinct x.userName from user_events e,
XMLTABLE ('/events/user'
        PASSING e.event_data
        COLUMNS userName VARCHAR2(50) PATH 'userName',
                userId VARCHAR2(10) PATH 'userId') x
where e.event_date >= '01-JAN-18' AND e.event_date <= '31-JAN-18'
order by x.userName asc;
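
For clarity, the query assumes each event_data document is shaped roughly like the fragment below. This structure is an assumption inferred from the XPath expressions in the query, not taken from the original data:

```xml
<events>
  <user>
    <userName>jdoe</userName>
    <userId>1001</userId>
  </user>
</events>
```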


Docker installation in Ubuntu

The below steps are for installing a specific version of docker-ce in Ubuntu.


1. Run the below command to download the package.

wget https://apt.dockerproject.org/repo/pool/testing/d/docker-engine/docker-engine_17.05.0~ce~rc3-0~ubuntu-trusty_amd64.deb

2. Run the below command to install it.
sudo dpkg -i docker-engine_17.05.0~ce~rc3-0~ubuntu-trusty_amd64.deb


The below steps are for installing the Docker engine in Ubuntu.


1. Run the below command to find out the available versions.
sudo apt-cache showpkg docker-engine

2. Then select the specific version and run the below command to install docker-engine. The below one installs version 1.13.1 of docker-engine.
sudo apt-get install docker-engine=1.13.1-0~ubuntu-trusty


How to use Maven Shade Plugin

Maven Shade Plugin provides the capability to package the artifact in an uber jar, including its dependencies, and to shade (rename) the packages of some of those dependencies.

This plugin takes all the dependencies, extracts their content and puts them as-is into one big JAR (an uber jar). This is good for a runnable jar, but if you are planning to use this jar as a Maven dependency, then there may be conflicts with the client jar. In that case, we need to use the relocations option to relocate the conflicting classes to a different package.

In this post, I will show how to use this plugin with an example.

The below example is for creating the final Uber jar without relocations.

Create Uber Jar without relocations

Consider that we have a web app which exposes a REST web service and running on a grizzly container. Now, let’s use the shade plugin to create the fat jar.

I am not going to give the code examples here, as this post is about the shade plugin only.

Refer to the complete code at https://github.com/dkbalachandar/greeting-rest

The below plugin configuration has to be added to the Maven POM file.

Refer to the configuration section, where we specify various options. They are given below,

  • mainClass – The entry point of the application
  • createSourcesJar – Create the sources jar as well
  • shadedClassifierName – The name of the classifier used in case the shaded artifact is attached
  • shadedArtifactAttached – Defines whether the shaded artifact should be attached as a classifier to the original artifact. If it is false, then the original jar will be replaced by the shaded jar

    <plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-shade-plugin</artifactId>
	<version>3.1.0</version>
	<executions>
		<execution>
			<phase>package</phase>
			<goals>
				<goal>shade</goal>
			</goals>
			<configuration>
				<createSourcesJar>true</createSourcesJar>
				<shadedClassifierName>shaded</shadedClassifierName>
				<shadedArtifactAttached>true</shadedArtifactAttached>
				<transformers>
					<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
						<mainClass>com.Grizzly</mainClass>
					</transformer>
				</transformers>
				<filters>
					<filter>
						<artifact>*:*</artifact>
						<excludes>
							<exclude>META-INF/*.SF</exclude>
							<exclude>META-INF/*.DSA</exclude>
							<exclude>META-INF/*.RSA</exclude>
						</excludes>
					</filter>
				</filters>                          
			</configuration>
		</execution>
	</executions>
</plugin>

Then execute the ‘mvn package’ command and check the target folder. You will see the shaded jar, which ends with the ‘shaded’ suffix. Refer to the below screenshot.

shaded-jar

When you extract the contents of the shaded jar, you will see the below.

shaded-jar-extraction

Create Uber Jar with relocations

Now we are going to see how we can relocate some of the classes inside the uber jar. Let's use the same example and add the below relocations inside the configuration element.


  <configuration>
	<createSourcesJar>true</createSourcesJar>
	<shadedClassifierName>shaded</shadedClassifierName>
	<shadedArtifactAttached>true</shadedArtifactAttached>
	<transformers>
		<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
			<mainClass>com.Grizzly</mainClass>
		</transformer>
	</transformers>
	<filters>
		<filter>
			<artifact>*:*</artifact>
			<excludes>
				<exclude>META-INF/*.SF</exclude>
				<exclude>META-INF/*.DSA</exclude>
				<exclude>META-INF/*.RSA</exclude>
			</excludes>
		</filter>
	</filters>
	<relocations>
		<relocation>
			<pattern>org.glassfish.grizzly.utils</pattern>
			<shadedPattern>org.shaded.glassfish.grizzly.utils</shadedPattern>
			<excludes>
				<exclude>org.glassfish.grizzly.utils.conditions.*</exclude>
			</excludes>
		</relocation>
	</relocations>
</configuration>

Check the relocations element and its child elements. Here, I am renaming/moving the org.glassfish.grizzly.utils package to org.shaded.glassfish.grizzly.utils and excluding the package org.glassfish.grizzly.utils.conditions.

Then run mvn package again to create the shaded jar and extract its contents. It will look like below.

shaded-jar-relocations

As you can see, the package org.glassfish.grizzly.utils has been moved to org.shaded.glassfish.grizzly.utils, but the inner package org.glassfish.grizzly.utils.conditions has been kept in its original location.

While relocating the packages, this plugin also updates the affected bytecode, so you will not see any issues while running the uber jar.

Ruby Rails test error “warning: constant :: Fixnum is deprecated”

I got the below error while trying to run a Ruby test case in an Ubuntu VM.

Error Trace:

/home/bala/.rvm/gems/ruby-2.4.1/gems/watir-webdriver-0.9.3/lib/watir-webdriver/elements/html_elements.rb:17: warning: constant ::Fixnum is deprecated

Follow the below steps to get rid of the above error.

1. Uninstall Rails completely. Execute the below commands one by one.


   gem uninstall rails
   gem uninstall railties

2. Then run the below commands to install Rails.


gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
\curl -sSL https://get.rvm.io | bash -s stable --rails

3. Update the watir version in my Gemfile to 6.1 from 5.0.0

4. Then run the below command and then run your test case.


  bundle install

Exclude fields from Serialization with GSON API

GSON is a Java serialization/deserialization library to convert Java Objects into JSON and back.

In this post, we are going to see how we can exclude some fields from serialization with the GSON API.

Refer to the below model class.

Employee.java



import org.apache.commons.lang3.builder.ToStringBuilder;

public class Employee {

    private String firstName;
    private String lastName;
    private int salary;

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public int getSalary() {
        return salary;
    }

    public void setSalary(int salary) {
        this.salary = salary;
    }

    @Override
    public String toString() {
        return ToStringBuilder.reflectionToString(this);
    }
}

The above Employee class has three fields: first name, last name and salary. Consider that we want to exclude the salary field while serializing an Employee object.

To do so, we have to create a class which implements the ExclusionStrategy interface and provide our logic there.

SalaryExclusionStrategy.java


import com.google.gson.ExclusionStrategy;
import com.google.gson.FieldAttributes;

public class SalaryExclusionStrategy implements ExclusionStrategy {

    @Override
    public boolean shouldSkipField(FieldAttributes f) {
        return "salary".equalsIgnoreCase(f.getName());
    }

    @Override
    public boolean shouldSkipClass(Class<?> clazz) {
        return false;
    }
}


We need to instruct the GSON parser to ignore the salary field. So while creating the Gson object, we pass an instance of SalaryExclusionStrategy, which contains the logic to ignore the salary field, to the addSerializationExclusionStrategy method.

Refer to the below code to see how we can do that.


import com.google.gson.FieldNamingPolicy;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class GsonMain {

    private static final Gson GSON = new GsonBuilder()
            .setFieldNamingStrategy(FieldNamingPolicy.IDENTITY)
            .setPrettyPrinting()
            .addSerializationExclusionStrategy(new SalaryExclusionStrategy())
            .create();

    public static void main(String[] args) {
        Employee employee = new Employee();
        employee.setFirstName("Bala");
        employee.setLastName("Samy");
        System.out.println(GSON.toJson(employee));
    }
}


Refer to the output below.


{
  "firstName": "Bala",
  "lastName": "Samy"
}
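
One final note, worth verifying against the Gson documentation for your version: addSerializationExclusionStrategy only affects serialization. To apply the same strategy in both directions (serialization and deserialization), GsonBuilder also offers setExclusionStrategies, roughly like this:

```java
// Applies the strategy to both serialization and deserialization
private static final Gson GSON = new GsonBuilder()
        .setExclusionStrategies(new SalaryExclusionStrategy())
        .create();
```

Also, Gson skips static and transient fields by default, so marking salary as transient is a simpler alternative when you control the model class.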