How to display traffic information with the Google Maps API

I have recently been going through the Google Maps API and learned how to display the traffic information of a city or a place on the map.

It's very easy to implement. If we want to show traffic information on our site, this API comes in handy.

Before using it on your site, you have to get a Google API key, as the Maps API requires a valid key.

I want to show the traffic information for Columbus, Ohio, so I have set the latitude and longitude accordingly.

HTML
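Below is a minimal sketch of the page: it centers the map on Columbus, Ohio (39.9612, -82.9988) and overlays a google.maps.TrafficLayer. YOUR_API_KEY is a placeholder for your own key.

<!DOCTYPE html>
<html>
  <head>
    <style>
      #map { height: 400px; width: 100%; }
    </style>
  </head>
  <body>
    <div id="map"></div>
    <script>
      function initMap() {
        // Center the map on Columbus, Ohio
        var map = new google.maps.Map(document.getElementById('map'), {
          zoom: 13,
          center: {lat: 39.9612, lng: -82.9988}
        });
        // Overlay live traffic information on the map
        var trafficLayer = new google.maps.TrafficLayer();
        trafficLayer.setMap(map);
      }
    </script>
    <!-- YOUR_API_KEY is a placeholder; replace it with your own key -->
    <script async defer
      src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap"></script>
  </body>
</html>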

Output:

[Screenshot: a Google map of Columbus, Ohio with the traffic layer enabled]

Apache Solr 6.3.0 – How to Load a CSV File and Search It

In this post, I am going to show three things:

  • How to install Apache Solr 6.3.0 on Ubuntu
  • How to load a CSV file containing US baby name information
  • How to query the data with the Solr API

To install Apache Solr, follow the steps below:


sudo wget http://apache.claz.org/lucene/solr/6.3.0/solr-6.3.0.tgz
sudo gunzip solr-6.3.0.tgz
sudo tar -xvf  solr-6.3.0.tar

Go into the solr-6.3.0 folder, open up a terminal, and type the command below to start the Solr server:


 bin/solr start

Check the Solr admin console at http://localhost:8983/solr.

The next step is to create a collection and load the CSV data.


bin/solr create -c  babynames

Once we create the collection, we have to specify the field definitions in the schema file. The schema file is available under the server/solr/babynames/conf/ folder and is named managed-schema. You can rename it to schema.xml, but I just keep it as is and add the fields below to that file. After editing, restart Solr (or reload the core) so the schema changes take effect.


  <field name="Count" type="int" indexed="true" stored="true"/>
  <field name="Gender" type="string" indexed="true" stored="true"/>
  <field name="Id" type="int" indexed="false" stored="false"/>
  <field name="Name" type="text_general" indexed="true" stored="true"/>
  <field name="Year" type="int" indexed="true" stored="true"/>

Then load the CSV file with the command below. I have used this file for the exercise: https://github.com/dkbalachandar/spark-scala-examples/blob/master/src/main/resources/NationalNames.csv


bin/post -c babynames NationalNames.csv

Finally, I query the data with the Solr REST API.

To search with Name: http://localhost:8983/solr/babynames/select?q=Name:%22Mary%22
To search with Gender: http://localhost:8983/solr/babynames/select?q=Gender:%22M%22
To search with a year range: http://localhost:8983/solr/babynames/select?q=*:*&fq=Year:%5B1880%20TO%201890%5D
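As a quick check, a small Node.js sketch like the one below (an illustration, not part of the post's original code) can run the first query and print the match count, assuming Solr is running locally on the default port:

var http = require('http');

// Query the babynames collection for documents whose Name matches "Mary"
http.get('http://localhost:8983/solr/babynames/select?q=Name:%22Mary%22&wt=json', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var result = JSON.parse(body);
    // Solr returns matching documents in response.docs, with the total in response.numFound
    console.log('numFound:', result.response.numFound);
    console.log(result.response.docs.slice(0, 3));
  });
});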

Refer to the screenshots below.

[Screenshots: name search and year-range search results]

Creating a Web App and RESTful Services Using the MEAN Stack with a Spark + Scala Back End

I have developed a MEAN stack application which shows the San Francisco food inspection details.
Source: the Food Inspections – LIVES Standard dataset

I have used Spark, Scala, MongoDB, Node.js, and AngularJS to do this.

My Spark job reads the input CSV data containing the food inspection details, processes it, and stores the data in MongoDB as collections. I have allFoodInspection and filterFoodInspection collections here. The first one has all the data; the second one has the business name, the unique risk category, and the number of risks committed.

My MEAN stack REST layer reads the data from MongoDB, processes it, and exposes it, and the web layer uses that data to display a table and draw a chart.
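As a rough sketch of that REST layer (a hypothetical /api/inspections route, with the 2.x Node MongoDB driver assumed; not the repo's actual code), an Express route can read the filterFoodInspection collection from the sfFood database and return it as JSON:

var express = require('express');
var MongoClient = require('mongodb').MongoClient;

var app = express();

// Hypothetical endpoint: expose the aggregated inspection data as JSON
app.get('/api/inspections', function (req, res) {
  // sfFood is the database the Spark job writes to
  MongoClient.connect('mongodb://localhost:27017/sfFood', function (err, db) {
    if (err) return res.status(500).send(err.message);
    db.collection('filterFoodInspection').find().toArray(function (err, docs) {
      db.close();
      if (err) return res.status(500).send(err.message);
      res.json(docs);
    });
  });
});

app.listen(8081);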

Let's see how we can execute this.

  1. Install Scala, SBT, and Spark on your machine. If you are using Ubuntu, refer to my other post for the steps: How to install Scala, SBT and Spark in Ubuntu
  2. Clone the git repository https://github.com/dkbalachandar/sf-food-inspection-spark.git, go inside the sf-food-inspection-spark folder, and run ‘sbt assembly’ to create a fat JAR with all the dependencies. Here I have used Spark 2.0.2 and Scala 2.11.8 (Spark 2.0.2 is compatible with Scala 2.11.x).
    If you don’t use compatible versions, you will end up with lots of errors.
  3. Copy ../sf-food-inspection-spark/target/scala-2.11/sf-food-inspection-spark-assembly-1.0.jar to the /usr/local/spark folder
  4. Download Food_Inspections_-_LIVES_Standard.csv from https://data.sfgov.org/browse?q=food+inspection and move it to the /usr/local/spark folder
  5. Install MongoDB with the steps below:
    
     sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
     echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
     sudo apt-get update
     sudo apt-get install -y mongodb-org
     sudo service mongod start
    
    

    Run the Spark job with the command below:

    
    bin/spark-submit --class com.spark.SFFoodInspectionAnalysis --master local sf-food-inspection-spark-assembly-1.0.jar file:///usr/local/spark/Food_Inspections_-_LIVES_Standard.csv 
    
    
  6. Then check MongoDB to make sure the data has been inserted and is available. Open up a terminal window, type ‘mongo’, and press Enter. It will open a shell window. Then use the commands below to verify the data:
    
      show dbs
      use sfFood
      show collections
      db.allFoodInspection.find()
      db.filterFoodInspection.find()
    
    
  7. Clone the git repository https://github.com/dkbalachandar/sf-food-inspection-web.git and go inside the sf-food-inspection-web folder, then run the commands below to build and run the application:
    
      npm install
      node server.js
    
    
  8. Open http://localhost:8081 and check the page. I have used the data to create a table and display a chart with the details.
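The web layer follows the same pattern. Roughly (a hypothetical AngularJS controller wired to the /api/inspections route sketched above, not the repo's actual code):

angular.module('foodApp', [])
  .controller('InspectionCtrl', function ($scope, $http) {
    // Fetch the aggregated inspection data from the REST layer
    $http.get('/api/inspections').then(function (response) {
      // The view binds $scope.inspections to the table and the chart
      $scope.inspections = response.data;
    });
  });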

[Screenshots taken from the application]

How to install Scala, SBT and Spark in Ubuntu

Install Scala 2.11.8 and SBT


sudo apt-get remove scala-library scala
sudo wget www.scala-lang.org/files/archive/scala-2.11.8.deb
sudo dpkg -i scala-2.11.8.deb
echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
sudo apt-get update
sudo apt-get install sbt


Install Spark 2.0.2


sudo wget http://d3kbcqa49mib13.cloudfront.net/spark-2.0.2-bin-hadoop2.7.tgz
sudo chmod -R 755 spark-2.0.2-bin-hadoop2.7.tgz
sudo gunzip spark-2.0.2-bin-hadoop2.7.tgz
sudo tar -xvf spark-2.0.2-bin-hadoop2.7.tar
sudo mv spark-2.0.2-bin-hadoop2.7 spark
sudo mv spark /usr/local/spark

Open the .bashrc file, add SPARK_HOME, and update the PATH:
sudo vi ~/.bashrc
  export SPARK_HOME="/usr/local/spark"
  export PATH=$PATH:$SPARK_HOME/bin

source ~/.bashrc

SaaS, PaaS, IaaS

In this post, I am going to explain my exposure to these cloud computing terms.

SaaS – Software as a service

– One of my clients, company X, uses a financial planning application developed by a different company, Y. The application is deployed on the client's network. The main drawback is that company Y has to closely follow up with company X to make any change to the application, and keeping it on the latest version is hard for them. So company Y wants to deploy the application on its own network and give company X's users access to it. In that arrangement, company Y provides Software as a Service to company X.

PaaS – Platform as a service

– Recently, I participated in a hackathon to develop a health application that had to be deployed on the Red Hat OpenShift platform. OpenShift provides numerous things to develop, host, and scale LAMP applications. Here we used their cloud environment to host our application, so OpenShift provides Platform as a Service to us for hosting the application.

IaaS – Infrastructure as a service

– It is instant computing infrastructure, provisioned and managed over the Internet. It helps us quickly scale up and down with demand and pay only for what we use. We use OpenStack for provisioning and managing new virtual machines.

Free Flowchart Drawing Tools

I wanted to create a simple flowchart diagram to depict an application flow. As I don't have Visio installed on my system, I looked for an alternative and found a few tools.

Lucidchart is a good tool, but the free version has limited features: you can create at most 3 active documents, and the maximum number of objects is 60. So if you want to create a small diagram, it's an excellent tool. Here is the link: https://www.lucidchart.com

Google Drawings is another tool, and it's free as well. Here is the link: https://docs.google.com/drawings

There is another excellent tool, draw.io. It's really good, and moreover it's free. As my flow diagram is a big one, I decided to use it. The only drawback is that you can't export all the pages at once the way you can in Lucidchart, so you have to export the pages one by one as images, add those images to an MS Word document, and then convert it to PDF.

If you don't want an online tool and really want an offline one, then yEd is a good choice.


Test Secure REST Services with a Chrome Browser Plugin

Most of us want to test out REST services via Advanced REST Client or Postman, or to debug an issue through them.

But if the REST services are secure and protected by Ping Access, SiteMinder, or any other such tool, then we will get a login page, and we would have to hardcode the browser cookies in the request to bypass it.

There is another way to do that.

If you are using Advanced REST Client, you can use the ARC cookie exchange extension. It lets ARC retrieve the browser cookies and send them with the request.

If you are using Postman, you can use Postman Interceptor. The Interceptor extension lets Postman use the browser cookies for each service call.