Scala – Class vs Object

A Scala class does not have static variables or methods. Instead, Scala provides singleton objects where we can define the equivalent members.

Please refer to the example below.


class Hw {
  def method(): Unit = {
    println("Class method")
  }
}

// Companion object: same name as the class, defined in the same source file
object Hw {
  def method(): Unit = {
    println("Object method")
  }
}

object HwTest {
  def main(args: Array[String]): Unit = {
    val hw = new Hw()   // instance method call
    hw.method()
    Hw.method()         // "static-like" call on the companion object
  }
}

Here we have a Hw class and an object with the same name, defined in the same file. In this scenario, the singleton object Hw is called a companion object. We can call its methods just as we would call static methods in Java. See the HwTest main method and its output below.

 

Output:

Class method
Object method

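As a side note (my own addition, not part of the original example), a companion object is also the usual home for factory methods such as apply, which again play the role that static factory methods play in Java:

class Greeter private (val name: String) {
  def greet(): Unit = println(s"Hello, $name")
}

object Greeter {
  // Factory method on the companion object; a companion can call the private constructor
  def apply(name: String): Greeter = new Greeter(name)
}

object GreeterTest {
  def main(args: Array[String]): Unit = {
    Greeter("Bala").greet()   // prints "Hello, Bala"
  }
}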

Interface Description Language

It’s a format used to describe the interface of an application (App1) in a language-independent way so that any other application (App2) can use this description to communicate with App1.

For example, assume an application (App1) is written in Java and exposes web services, and another application (App2) wants to consume those services. We describe App1’s services in an IDL so that they can be consumed easily by other applications such as App2.

A few IDLs are listed below; a small Avro IDL sketch follows the list.

Web Services Description Language
Web Application Description Language
RESTful Service Description Language
Protocol Buffer
Apache Thrift
Avro IDL
JSON-WSP
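For instance, here is a rough Avro IDL sketch (my own illustration, not from the original post) describing a User record similar to the schema used later in this post. App1 publishes this definition once, and App2 can generate its own classes from the very same file:

@namespace("com.avro.entity")
protocol UserProtocol {
  record User {
    union { int, null } id;
    string fName;
    string lName;
  }
}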

Flume Agent for consuming Kafka messages

Please follow the steps below to create a single-node Flume agent that consumes Kafka messages.

1. Download and install Flume
2. Check out the GitHub repo [https://github.com/dkbalachandar/flume-kafka-agent]
3. Copy flume-kafka-conf.properties [available in /src/main/resources] into the FLUME_INSTALLED_DIRECTORY/conf folder [update the ZooKeeper node and topic details; a sketch of this file is shown after the run command below]
4. Run mvn package and copy /target/flume-kafka-agent-1.0-SNAPSHOT-jar-with-dependencies.jar into the FLUME_INSTALLED_DIRECTORY/lib folder
5. Run the Flume agent with the command below

bin/flume-ng agent --conf conf --conf-file conf/flume-kafka-conf.properties --name a1 -Dflume.root.logger=INFO,console
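For reference, here is a minimal sketch of what flume-kafka-conf.properties could look like, assuming the Flume 1.6-style ZooKeeper-based Kafka source feeding a memory channel and a logger sink; the actual file in the repo may differ, so treat the host and topic values as placeholders:

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Kafka source: reads messages from the given topic via ZooKeeper
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.zookeeperConnect = localhost:2181
a1.sources.r1.topic = my-topic
a1.sources.r1.channels = c1

# Buffer events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Log each event to the console
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1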

Apache Avro – Example (Serialization and Deserialization)

I have used Protocol Buffers before and wanted to learn Avro. Here are a few points from my learning.

Avro is used extensively in the Hadoop ecosystem. The schema is defined in an .avsc file. Please refer to the example schema file below.


{
  "namespace": "com.avro.entity",
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "id", "type": ["int", "null"]},
    {"name": "fName", "type": "string"},
    {"name": "lName", "type": "string"}
  ]
}

Once the schema file is defined, Java classes can be generated from it, and those generated classes are then used in the application.

Alternatively, instead of generating classes, we can use the schema file directly in the application via Avro's generic API.
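As a rough sketch of the second approach (my own illustration using Avro's generic API from Scala; the file paths and object names here are assumptions, and the repo below is the fuller example):

import java.io.File

import org.apache.avro.Schema
import org.apache.avro.file.{DataFileReader, DataFileWriter}
import org.apache.avro.generic.{GenericData, GenericDatumReader, GenericDatumWriter, GenericRecord}

object AvroGenericExample {
  def main(args: Array[String]): Unit = {
    // Parse the schema from the .avsc file (path is an assumption)
    val schema = new Schema.Parser().parse(new File("src/main/resources/user.avsc"))

    // Build a record that matches the schema
    val user: GenericRecord = new GenericData.Record(schema)
    user.put("id", 1)
    user.put("fName", "John")
    user.put("lName", "Doe")

    // Serialize the record to an Avro data file
    val outFile = new File("users.avro")
    val writer = new DataFileWriter[GenericRecord](new GenericDatumWriter[GenericRecord](schema))
    writer.create(schema, outFile)
    writer.append(user)
    writer.close()

    // Deserialize the file and print each record
    val reader = new DataFileReader[GenericRecord](outFile, new GenericDatumReader[GenericRecord](schema))
    while (reader.hasNext) {
      println(reader.next())
    }
    reader.close()
  }
}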

Please refer to the GitHub repo below for a complete serialization and deserialization example of Avro content.

https://github.com/dkbalachandar/avro-example