Transfer Encoding and Content Length

Chunked transfer encoding is a data transfer mechanism in which the server sends the response to the client as a series of chunks. The servlet container decides whether to use Content-Length or chunked encoding based on the response content size: if the response content exceeds the response buffer size, it uses chunked encoding; otherwise it calculates the content length and adds it to the response header. So if the response content is large, the container typically uses chunked transfer encoding and you will not see a Content-Length header.
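
As a minimal sketch of this behavior (assuming a javax.servlet 3.x container; the servlet name and path are hypothetical), the servlet below makes it easy to observe. Requesting it with curl -v shows either a Content-Length header or Transfer-Encoding: chunked, depending on how much we write.

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Demo servlet to observe when the container picks Content-Length
// versus Transfer-Encoding: chunked.
@WebServlet("/chunking-demo")
public class ChunkingDemoServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        // The container buffers up to getBufferSize() bytes (commonly 8 KB).
        int bufferSize = resp.getBufferSize();
        // Write about twice the buffer size: the buffer overflows and is
        // flushed before the response is complete, so the container cannot
        // compute Content-Length and switches to chunked encoding.
        for (int i = 0; i < bufferSize * 2; i++) {
            out.write('x');
        }
        // Write fewer bytes than bufferSize instead, and the container can
        // compute Content-Length and skip chunking.
    }
}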

TCP dump and Wireshark

TCPDump is a tool for network monitoring and data acquisition, and it can be used for debugging network/server-related problems. Tcpdump prints out a description of the contents of packets on a network interface that match a Boolean expression, and it can write the contents to a file as well. We can also listen on a particular port number to monitor the data flow.

Run the below command to install TCPDump in Ubuntu
sudo apt-get install tcpdump

Assume that you want to capture the traffic coming from and to port number 80. Run the below command to take the dump:
sudo tcpdump -i any -w dump-file.pcap port 80

You should exit by entering Ctrl + C; otherwise it will keep running continuously.
The above command listens for the incoming and outgoing connections, captures them, and writes the data to the dump-file.pcap file.
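
For a quick look at the captured packets without leaving the terminal, tcpdump can also read the capture file back (you may need sudo to read it, depending on the file's owner):
sudo tcpdump -r dump-file.pcap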

Once you have the file, you can use Wireshark to view the data.

Run the below command to install Wireshark in Ubuntu
sudo apt-get install wireshark

Then view the output by running the below command
wireshark dump-file.pcap

It will open up the Wireshark UI where you can view all the traffic. Just right-click on any packet, choose “Follow TCP Stream”, and you can view the entire stream content.

Productivity Technique

While reading through a post on Quora, I found the technique below, and it is very useful for improving productivity at the workplace. We can also combine this technique with the Pomodoro technique to get the maximum result.

The steps are given below:

1. Take a sheet of paper and draw as many rectangles as you want; each rectangle represents a 25-minute block.
2. Write down what you want to achieve in each rectangle. Prioritize the tasks accordingly.
3. Tick the tasks off once they are completed.

How to follow this:

1. Take the first rectangle and take up its set of tasks one by one.
2. Run the timer and work on those tasks with the mindset of finishing them within 25 minutes.
3. Once the time is over, tick all the tasks that are completed and move the unfinished tasks to the next rectangle.
4. Take a break for 5 or 10 minutes.
5. Repeat steps 1 to 4 until you finish all the tasks.

PingAccess vs PingFederate

PingAccess is a policy server: it handles authorization requests, and we can implement all kinds of business logic in it to validate and authorize the requests.

PingFederate is a federation server: it knows how to authenticate the user and provide access to a particular resource.

PingAccess provides a way to manage our web applications and APIs in a secure manner. It can be used along with PingFederate; otherwise we would have to implement our own authentication and authorization logic. You can refer to the diagram available at this link: https://www.pingidentity.com/en/products/pingaccess.html

We have three protocols under the identity management category:

SAML – Security Assertion Markup Language
OpenID
OAuth (Open Authorization)

SAML facilitates both authentication and authorization, OpenID is used mainly for authentication, and OAuth is for authorization alone.

So PingAccess internally uses OpenID for authentication and also leverages the PingFederate server, which internally uses OAuth or SAML for authentication.

I have used CA SiteMinder SAML federation and also PingAccess in the past, and I would say that PingAccess is somewhat easier to use compared to SiteMinder.

Please feel free to add your comments if I misstated anything.

Excel – PingAccess hyperlink redirect issue

Most of you have probably hit an error if you ever embedded an authentication-enabled hyperlink in an Excel file (https://support.microsoft.com/en-us/kb/218153).

I ran into the same issue in my REST application. The application generates a report in Excel format which contains a hyperlink to view other information; that information is dynamically updated multiple times a day and is served by another REST service. So the user has to click on that link to view more content, and the user has to authenticate before proceeding. We use PingAccess to authenticate the user. Once the user clicks on that link, the user is shown a login page, and upon entering valid login credentials, he/she lands on the more-information service.

The flow will be like this:
Excel ==> Login Page ==> Target Service

As we know, Excel does not follow the browser redirect, hence it does not let me open the target page.

We followed the below approach to resolve this issue. I hope that this will also help others.

1. I created a redirect REST service which takes the target service URI. If you look carefully, you can see that the below service takes the serviceUri, substitutes that URL into the REDIRECT_CONTENT string, and returns that whole HTML page to the browser.

For example


import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("redirect")
public class RedirectService {

        // HTML page with a META refresh that sends the browser to the given
        // URL immediately (content="0;URL='...'"). The first %s is the page
        // title, the second is the redirect target.
        private static final String REDIRECT_CONTENT = "<html><head> <title>%s</title> <meta http-equiv=\"refresh\" content=\"0;URL='%s'\" /></head> <body> <p>Redirecting</p></body></html>";

        @GET
        @Produces("text/html")
        public Response redirect(@QueryParam("serviceUri") String serviceUri) {
            // Fill both placeholders (title and URL) with the target URI and
            // return the page; the browser performs the actual redirect.
            return Response.status(200).entity(String.format(REDIRECT_CONTENT, serviceUri, serviceUri)).build();
        }
}
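
To verify the service quickly (assuming the application is deployed under /service on localhost:8080, matching the hyperlink in step 2), a curl call returns the generated HTML:

curl "http://localhost:8080/service/redirect?serviceUri=/service/content/1234234"

<html><head> <title>/service/content/1234234</title> <meta http-equiv="refresh" content="0;URL='/service/content/1234234'" /></head> <body> <p>Redirecting</p></body></html>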


2. The next step is to embed the redirect service URL in the Excel report instead of the actual target service URL, and make sure that you pass the actual service URL as a query parameter. The hyperlink will look like the below:
http://localhost:8080/service/redirect?serviceUri=/service/content/1234234

Assume that 1234234 is a unique id; in this case, it is a content id.

3. Make some changes in PingAccess to disable authentication for this service (/service/redirect).

That’s it. We are done. When the user clicks on the link from Excel, it just opens up a browser window that shows nothing, but in the background the META refresh kicks in and redirects the user to /service/content/1234234. As this service is authentication-enabled, it shows the login page, and upon entering the login credentials it takes you to the target page.

Go – Simple web application on Docker

A simple web application that runs on port 8080 and prints “Hello” followed by the name you pass.

1. Create a hello.go file and copy the below code
hello.go:


package main

import (
    "fmt"
    "log"
    "net/http"
)

// helloHandler greets with whatever follows the leading "/" in the URL path.
func helloHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello %s", r.URL.Path[1:])
}

// queryHandler greets with the "name" query parameter, if one is given.
func queryHandler(w http.ResponseWriter, r *http.Request) {
    name := r.URL.Query().Get("name")
    if name == "" {
        fmt.Fprintf(w, "Hello")
    } else {
        fmt.Fprintf(w, "Hello %s", name)
    }
}

func main() {
    http.HandleFunc("/", helloHandler)
    http.HandleFunc("/query", queryHandler)
    // Exit with an error message if the server fails to start
    log.Fatal(http.ListenAndServe(":8080", nil))
}


2. Then create a Dockerfile and copy the below code
Dockerfile:


FROM golang:1.6-onbuild
EXPOSE 8080

3. Then type “docker build -t hello-go-lang .” to build the image
4. Finally, run it by typing “docker run -p 8080:8080 hello-go-lang”

Then, in the browser, open the below URLs and check the output:
http://localhost:8080/ – Hello
http://localhost:8080/test – Hello test
http://localhost:8080/query?name=test – Hello test
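
The same checks from the command line, if you prefer curl:

curl http://localhost:8080/
curl http://localhost:8080/test
curl "http://localhost:8080/query?name=test"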

Dockerfile for Apache2


FROM ubuntu:16.04

RUN apt-get update && apt-get install -y apache2 && rm -rf /var/www/html/*

ENV APACHE_RUN_USER=www-data \
    APACHE_RUN_GROUP=www-data \
    APACHE_LOG_DIR=/var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
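
To build and run this image (the tag apache2-demo is just an example):

docker build -t apache2-demo .
docker run -p 80:80 apache2-demo

Since the RUN step empties /var/www/html, http://localhost/ will likely show an empty directory listing until you add your own content (for example, with a COPY instruction).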


How to create a Kafka topic in Java – Kafka version 0.8.2.1

The below code is useful for creating a Kafka topic via Java code. Please note that KAFKA_TOPIC, KAFKA_ZOOKEEPER_HOSTS, and KAFKA_BROKER_HOSTS are to be supplied as environment variables.

Sample values are given below
KAFKA_TOPIC=kafka_topic
KAFKA_ZOOKEEPER_HOSTS=MACHINE1_DOMAIN_NAME:2181,MACHINE2_DOMAIN_NAME:2181
KAFKA_BROKER_HOSTS=MACHINE1_DOMAIN_NAME:9092,MACHINE2_DOMAIN_NAME:9092

Replace MACHINE1_DOMAIN_NAME and MACHINE2_DOMAIN_NAME with the appropriate domain names of your machines, i.e., the ZooKeeper/Kafka server hosts.

If you have only one server, then you can remove MACHINE2_DOMAIN_NAME from the KAFKA_ZOOKEEPER_HOSTS value. You can add as many hosts as you want, separated by commas.

Maven


<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.1</version>
    <scope>compile</scope>
    <exclusions>
        <exclusion>
            <artifactId>jmxri</artifactId>
            <groupId>com.sun.jmx</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jms</artifactId>
            <groupId>javax.jms</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jmxtools</artifactId>
            <groupId>com.sun.jdmk</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.101tec</groupId>
    <artifactId>zkclient</artifactId>
    <version>0.4</version>
</dependency>



import kafka.admin.AdminUtils;
import kafka.utils.ZKStringSerializer$;
import org.I0Itec.zkclient.ZkClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Properties;

public class KafkaTopicCreator {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaTopicCreator.class);

    public static void main(String[] args) {

        String topicName = System.getenv("KAFKA_TOPIC");
        String zookeeperHosts = System.getenv("KAFKA_ZOOKEEPER_HOSTS");
        String kafkaBrokerHosts = System.getenv("KAFKA_BROKER_HOSTS");
        int sessionTimeOut = 10000;
        int connectionTimeOut = 10000;
        LOGGER.info("zookeeperHosts:{}", zookeeperHosts);
        // ZKStringSerializer$ is the serializer Kafka itself uses when storing topic metadata in ZooKeeper
        ZkClient zkClient = new ZkClient(zookeeperHosts, sessionTimeOut, connectionTimeOut, ZKStringSerializer$.MODULE$);
        try {
            if (!AdminUtils.topicExists(zkClient, topicName)) {
                // One partition; one replica per broker listed in KAFKA_BROKER_HOSTS
                int replicationFactor = kafkaBrokerHosts.split(",").length;
                AdminUtils.createTopic(zkClient, topicName, 1, replicationFactor, new Properties());
            } else {
                LOGGER.info("{} is available hence no changes are done", topicName);
            }
            LOGGER.info("Topic Details:{}", AdminUtils.fetchTopicMetadataFromZk(topicName, zkClient));
        } finally {
            zkClient.close();
        }
    }
}
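
To double-check the topic afterwards, the kafka-topics.sh script that ships with Kafka can describe it (run from the Kafka installation directory, using the sample values above):

bin/kafka-topics.sh --describe --zookeeper MACHINE1_DOMAIN_NAME:2181,MACHINE2_DOMAIN_NAME:2181 --topic kafka_topic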

Riak HTTP API commands to create a search index and assign it to a bucket

The Riak HTTP port number is 8098, and below are the commands to create a search index and assign it to a bucket. I use curl to run these commands.

Bucket Name: bucket
Search index Name: bucketIndex

Create the search index: 
curl -XPUT http://localhost:8098/search/index/bucketIndex

Assign the search index to a bucket: 
curl -XPUT http://localhost:8098/buckets/bucket/props -H 'Content-Type: application/json' -d '{"props":{"search_index":"bucketIndex"}}'

Check the bucket properties: 
curl http://localhost:8098/types/default/buckets/bucket/props
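
Once objects are being indexed, you can also run a search query over HTTP; q=*:* below simply matches every document:

curl "http://localhost:8098/search/query/bucketIndex?wt=json&q=*:*"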

Riak – Code to sort custom objects using the Protobuf API

In Riak, we can store data like in a key-value DB and query it like in Solr.

Assume that we have an application which collects employee comments about a particular topic. An employee can add multiple comments. All these comments are stored in a Riak bucket along with the employee id, and you have a requirement to fetch all the comments made by an employee, ordered by timestamp.

The below program does exactly that. The steps in the program are given below. (The _s and _l suffixes on the field names map to string and long dynamic fields in Riak’s default Solr schema, which is what makes the objects searchable and sortable without a custom schema.)

1. Create a search index
2. Assign the bucket with the search index
3. Add the different comments
4. Then query with the search index and sort on the storedTime field
5. Once you get the Riak keys, then retrieve the comments one by one.

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.buckets.StoreBucketProperties;
import com.basho.riak.client.api.commands.kv.FetchValue;
import com.basho.riak.client.api.commands.kv.StoreValue;
import com.basho.riak.client.api.commands.search.Search;
import com.basho.riak.client.api.commands.search.StoreIndex;
import com.basho.riak.client.core.RiakCluster;
import com.basho.riak.client.core.RiakNode;
import com.basho.riak.client.core.operations.SearchOperation;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;
import com.basho.riak.client.core.query.search.YokozunaIndex;
import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;
import org.apache.commons.lang3.builder.ToStringBuilder;

import java.io.Serializable;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;


public class RiakMain {

    public static void main(String[] args) {
        RiakClient riakClient = null;
        Namespace namespace = new Namespace("riakbucket");
        try {
            //Get the riak client
            riakClient = getRiakClient();
            //Create the index
            StoreIndex index = new StoreIndex.Builder(new YokozunaIndex("bucket_index")).build();
            riakClient.execute(index);
            //Wait for the index creation to propagate, then assign the index
            //to the bucket named 'riakbucket'
            Thread.sleep(5000);
            StoreBucketProperties sbp = new StoreBucketProperties.Builder(namespace).
                    withAllowMulti(true).withSearchIndex("bucket_index").build();
            riakClient.execute(sbp);
            //Create the comments
            createTheComments(riakClient, namespace);
            //Retrieve the comments based on the timestamp. Note that I am using the
            //Riak default (generated) key for storing the comments; we may have a
            //different approach for handling this.
            //So fetch the matching keys first, then retrieve the data
            Search searchOp = new Search.Builder("bucket_index", "employeeId_s:emp1").sort("storedTime_l desc").
                    withStart(0).
                    withRows(1000).build();
            List<String> keys = new ArrayList<String>();
            SearchOperation.Response response = riakClient.execute(searchOp);
            List<Map<String, List>> results = response.getAllResults();
            for (Map<String, List> map : results) {
                // _yz_rk is the result field that holds each matching object's Riak key
                keys.addAll(map.get("_yz_rk"));
            }
            for (String key : keys) {
                getData(riakClient, namespace, key);
            }

        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        } finally {
            if (riakClient != null) {
                riakClient.shutdown();
            }

        }
    }

    private static RiakClient getRiakClient() throws UnknownHostException {

        //Change the host accordingly
        RiakNode node = new RiakNode.Builder().withRemoteAddress("localhost")
                .withRemotePort(8087).build();
        RiakCluster riakCluster = new RiakCluster.Builder(node).build();
        riakCluster.start();
        return new RiakClient(riakCluster);
    }

    private static void createTheComments(RiakClient riakClient, Namespace ns) throws ExecutionException, InterruptedException {
        CommentsRiakObject commentsRiakObject1 = new CommentsRiakObject();
        commentsRiakObject1.setEmployeeId_s("emp1");
        commentsRiakObject1.setComments_s("Comment 1");
        commentsRiakObject1.setStoredTime_l(System.currentTimeMillis());
        StoreValue storeValue1 = new StoreValue.Builder(commentsRiakObject1).withNamespace(ns).build();
        riakClient.execute(storeValue1);

        CommentsRiakObject commentsRiakObject2 = new CommentsRiakObject();
        commentsRiakObject2.setEmployeeId_s("emp2");
        commentsRiakObject2.setComments_s("Comment 2");
        commentsRiakObject2.setStoredTime_l(System.currentTimeMillis());
        StoreValue storeValue2 = new StoreValue.Builder(commentsRiakObject2).withNamespace(ns).build();
        riakClient.execute(storeValue2);

        CommentsRiakObject commentsRiakObject3 = new CommentsRiakObject();
        commentsRiakObject3.setEmployeeId_s("emp1");
        commentsRiakObject3.setComments_s("Comment 3");
        commentsRiakObject3.setStoredTime_l(System.currentTimeMillis());
        StoreValue storeValue3 = new StoreValue.Builder(commentsRiakObject3).withNamespace(ns).build();
        riakClient.execute(storeValue3);
    }

    private static void getData(RiakClient riakClient, Namespace namespace, String key) throws ExecutionException, InterruptedException {
        Location loc = new Location(namespace, key);
        FetchValue fetchOp = new FetchValue.Builder(loc).build();
        CommentsRiakObject commentsRiakObject = riakClient.execute(fetchOp).getValue(CommentsRiakObject.class);
        System.out.println(commentsRiakObject);
    }

    public static class CommentsRiakObject implements Serializable {

        private String employeeId_s;

        private String comments_s;

        private long storedTime_l;

        public String getEmployeeId_s() {
            return employeeId_s;
        }

        public void setEmployeeId_s(String employeeId_s) {
            this.employeeId_s = employeeId_s;
        }

        public String getComments_s() {
            return comments_s;
        }

        public void setComments_s(String comments_s) {
            this.comments_s = comments_s;
        }

        public long getStoredTime_l() {
            return storedTime_l;
        }

        public void setStoredTime_l(long storedTime_l) {
            this.storedTime_l = storedTime_l;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;

            if (o == null || getClass() != o.getClass()) return false;

            CommentsRiakObject that = (CommentsRiakObject) o;

            return new EqualsBuilder()
                    .append(storedTime_l, that.storedTime_l)
                    .append(employeeId_s, that.employeeId_s)
                    .append(comments_s, that.comments_s)
                    .isEquals();
        }

        @Override
        public int hashCode() {
            return new HashCodeBuilder(17, 37)
                    .append(employeeId_s)
                    .append(comments_s)
                    .append(storedTime_l)
                    .toHashCode();
        }

        @Override
        public String toString() {
            return new ToStringBuilder(this)
                    .append("employeeId_s", employeeId_s)
                    .append("comments_s", comments_s)
                    .append("storedTime_l", storedTime_l)
                    .toString();
        }
    }
}
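
With the sample data above, the search on employeeId_s:emp1 should print emp1’s two comments newest first (Comment 3, then Comment 1), because we sort on storedTime_l desc. If the search comes back empty, give Solr a moment to index the newly stored objects before querying.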