
Wednesday 4 February 2015

Hadoop Monitoring Using Ganglia

This post is about monitoring Hadoop metrics, such as the HDFS, MapReduce, JVM, RPC and UGI metrics, using the Ganglia monitoring tool.

I assume that readers of this blog have prior knowledge of Ganglia and Hadoop.

To integrate Ganglia with Hadoop, you need to configure the hadoop metrics properties file located inside the Hadoop conf folder. In this configuration file you specify the address of the Ganglia server, the period for sending metrics data, and the Ganglia context class name.

The name and format of the Hadoop metrics properties file differ across Hadoop versions.
For Hadoop 0.20.x, 0.21.0 and 0.22.0, the file name is hadoop-metrics.properties.
For Hadoop 1.x.x and 2.x.x, the file name is hadoop-metrics2.properties.
The Ganglia context class name also differs across Ganglia versions; for detailed information, see the GangliaContext documentation.

Procedure for configuring the Hadoop metrics properties file:
1. Configuration for 2.x.x versions: in these Hadoop versions the metrics properties file is located inside the $HADOOP_HOME/etc/hadoop/ folder. Configure the hadoop-metrics2.properties file using the code shown below:

namenode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
namenode.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmetad_server_ip:8649

datanode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
datanode.sink.ganglia.period=10
datanode.sink.ganglia.servers=gmetad_server_ip:8649

resourcemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
resourcemanager.sink.ganglia.period=10
resourcemanager.sink.ganglia.servers=gmetad_server_ip:8649

nodemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
nodemanager.sink.ganglia.period=10
nodemanager.sink.ganglia.servers=gmetad_server_ip:8649



2. Configuration for 1.x.x versions: in these Hadoop versions the metrics properties file is located inside the $HADOOP_HOME/conf/ folder. Configure the hadoop-metrics2.properties file using the code shown below:

namenode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
namenode.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmetad_server_ip:8649

datanode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
datanode.sink.ganglia.period=10
datanode.sink.ganglia.servers=gmetad_server_ip:8649

jobtracker.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
jobtracker.sink.ganglia.period=10
jobtracker.sink.ganglia.servers=gmetad_server_ip:8649

tasktracker.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
tasktracker.sink.ganglia.period=10
tasktracker.sink.ganglia.servers=gmetad_server_ip:8649

maptask.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
maptask.sink.ganglia.period=10
maptask.sink.ganglia.servers=gmetad_server_ip:8649

reducetask.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
reducetask.sink.ganglia.period=10
reducetask.sink.ganglia.servers=gmetad_server_ip:8649


3. Configuration for 0.20.x, 0.21.0 and 0.22.0 versions: in these Hadoop versions the metrics properties file is located inside the $HADOOP_HOME/conf/ folder. Configure the hadoop-metrics.properties file using the code shown below:

dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=gmetad_server_ip:8649

mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=gmetad_server_ip:8649

jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=gmetad_server_ip:8649

rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rpc.period=10
rpc.servers=gmetad_server_ip:8649

ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
ugi.period=10
ugi.servers=gmetad_server_ip:8649



The above configuration is for the unicast mode of Ganglia. If you are running Ganglia in multicast mode, you need to use the multicast address in place of gmetad_server_ip in the configuration file. Once you have applied these changes, restart the gmetad and gmond services of Ganglia on the nodes, and also restart the Hadoop services if they are running. After the services restart, the Ganglia UI displays the Hadoop graphs. Initially the Ganglia UI does not show graphs for jobs; they appear only after a job has been submitted to Hadoop.
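For reference, the choice between unicast and multicast is made on the Ganglia side in gmond.conf. A minimal sketch of the send channel in each mode follows (gmetad_server_ip is a placeholder, and 239.2.11.71 is Ganglia's default multicast address):

```
/* unicast: every gmond sends metrics directly to one known collector */
udp_send_channel {
  host = gmetad_server_ip
  port = 8649
}

/* multicast (the Ganglia default): all nodes join a multicast group */
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8649
}
```

Whichever address appears here is the one the Hadoop sink configuration above should point at.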

Wednesday 30 July 2014

Books for Hadoop & Map Reduce


  • The Definitive Guide is in some ways the ‘Hadoop bible’ and can be an excellent reference when working on Hadoop, but do not expect it to provide a simple getting-started tutorial for writing a MapReduce job. This book is great for really understanding how everything works and how all the systems fit together.
  • This is the book if you need to know the ins and outs of prototyping, deploying, configuring, optimizing, and tweaking a production Hadoop system. Eric Sammer is a very knowledgeable engineer, so this book is chock full of goodies.
  • Design Patterns is a great resource to get some insight into how to do non-trivial things with Hadoop. This book goes into useful detail on how to design specific types of algorithms, outlines why they should be designed that way, and provides examples.
  • One of the few non-O’Reilly books in this list, Hadoop in Action is similar to the definitive guide in that it provides a good reference for what Hadoop is and how to use it. It seems like this book provides a more gentle introduction to Hadoop compared to the other books in this list.
  • A slightly more advanced guide to running Hadoop. It includes chapters that detail how to best move data around, how to think in Map Reduce, and (importantly) how to debug and optimize your jobs.
  • This Apress book claims it will guide you through initial Hadoop setup while helping you avoid many of the pitfalls that Hadoop novices usually encounter. Again, it is similar in content to Hadoop in Action and The Definitive Guide.
  • Another Hadoop intro book, Hadoop Essentials focuses on providing a more practical introduction to Hadoop, which seems ideal for a CS classroom setting.
  • A book which aims to provide real-world examples of common Hadoop problems. It also covers building integrated solutions using surrounding tools (Hive, Pig, Giraph, etc.).
  • The cookbook provides an introduction to installing / configuring Hadoop along with ‘more than 50 ready-to-use Hadoop MapReduce recipes’.
  • Released in July 2013, this book promises to guide readers through writing and testing Cascading-based workflows. This is one of the few books written about higher-level MapReduce frameworks, so I’m excited to give it a read.
  • A front to back guide to YARN, the next generation task management layer for Hadoop. This book is written (in part) by the YARN project founder, and the project lead.
  • This book is built around seven MapReduce ‘recipes’ to learn from. It aims to be a concise, practical guide to get you coding.

Bonus

  • Russell introduces his own version of an agile tool-set for data analysis and exploration. The book covers both investigative tools (like Apache Pig) and visualization tools (like D3). His pitch is pretty compelling.

Wednesday 10 June 2015

Hadoop commands



hadoop fs -ls /                  # list the HDFS root directory
hadoop fs -lsr /                 # list the root directory recursively

hadoop fs -ls                    # list the current user's HDFS home directory

hadoop fs -mkdir /kalyan         # create a directory in HDFS
hadoop fs -mkdir /kalyan1        # create a second directory
hadoop fs -ls /                  # verify that both directories exist

hadoop fs -put /etc/hosts /etc/hostname /etc/passwd /kalyan   # copy local files into HDFS
hadoop fs -ls /kalyan            # list the uploaded files

hadoop fs -cat /kalyan/passwd    # print a file's contents
hadoop fs -text /kalyan/hostname # print a file, decompressing it if needed

hadoop fs -cp /kalyan/hosts /kalyan1/hosts    # copy a file within HDFS
hadoop fs -cp /kalyan/hosts /kalyan1/hosts1   # copy it again under a new name

hadoop fs -mv /kalyan/hosts /kalyan1/myhosts  # move (rename) a file within HDFS

hadoop fs -rm /kalyan1/hosts     # delete a file
hadoop fs -rmr /kalyan1          # delete a directory recursively

hadoop fs -get /kalyan/passwd /home/hadoop/temp   # copy a file from HDFS to the local file system

hadoop fs -getmerge /kalyan /home/hadoop/temp/merge   # merge all files in an HDFS directory into one local file
hadoop fs -touchz /kalyan/test   # create an empty file in HDFS

hadoop fs -du /                  # show the size of each item under /
hadoop fs -dus /                 # show the total size of /
hadoop fs -du /kalyan
hadoop fs -dus /kalyan

hadoop fs -mkdir /demo
hadoop fs -chown -R test:testgrp /demo   # change owner and group recursively
hadoop fs -chmod 770 /demo               # rwx for owner and group only
hadoop fs -chmod 775 /demo               # additionally allow read/execute for others


Saturday 7 September 2013

Hadoop Cluster Interview Questions

Which are the three modes in which Hadoop can be run?
The three modes in which Hadoop can be run are:
1. standalone (local) mode
2. Pseudo-distributed mode
3. Fully distributed mode

What are the features of Stand alone (local) mode?
In stand-alone mode there are no daemons; everything runs in a single JVM. It has no DFS and uses the local file system. Stand-alone mode is suitable only for running MapReduce programs during development. It is one of the least used environments.

What are the features of Pseudo mode?
Pseudo mode is used both for development and in the QA environment. In the Pseudo mode all the daemons run on the same machine.

Can we call VMs pseudos?
No, VMs are not pseudos, because a VM is something different, and pseudo mode is very specific to Hadoop.

What are the features of Fully Distributed mode?
Fully distributed mode is used in the production environment, where we have ‘n’ number of machines forming a Hadoop cluster. Hadoop daemons run on a cluster of machines: there is one host on which the Namenode runs, other hosts on which datanodes run, and further machines on which task trackers run. We have separate masters and separate slaves in this distribution.

Does Hadoop follow the UNIX pattern?
Yes, Hadoop closely follows the UNIX pattern. Hadoop also has a ‘conf‘ directory, as in the case of UNIX.

In which directory is Hadoop installed?
Cloudera and Apache have the same directory structure. Hadoop is installed in
/usr/lib/hadoop/

What are the port numbers of the Namenode, job tracker and task tracker?
The web UI port number for the Namenode is 50070, for the job tracker 50030, and for the task tracker 50060.


What is the Hadoop core configuration?
Hadoop core is configured by two XML files:
1. hadoop-default.xml
2. hadoop-site.xml
These files are written in XML format. We have certain properties in these XML files, which consist of a name and a value.

What are the Hadoop configuration files at present?
There are three configuration files in Hadoop:
1. core-site.xml
2. hdfs-site.xml
3. mapred-site.xml
These files are located in the hadoop/conf/ subdirectory.
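As an illustration of the name/value property format these files share, a minimal core-site.xml might look like the following (the hostname and port are placeholders; fs.default.name is the classic property that points clients at the Namenode):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode_host:9000</value>
  </property>
</configuration>
```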

How to exit the Vi editor?
To exit the Vi Editor, press ESC and type :q and then press enter.

What is a spill factor with respect to the RAM?
The spill factor is the threshold size after which your data is moved out of memory to temporary files on disk; the Hadoop temp directory is used for this.

Is fs.mapr.working.dir a single directory?
Yes, fs.mapr.working.dir is just one directory.

Which are the three main hdfs-site.xml properties?
The three main hdfs-site.xml properties are:
1. dfs.name.dir, which gives you the location where the metadata will be stored and where DFS is located, on disk or on a remote host.
2. dfs.data.dir which gives you the location where the data is going to be stored.
3. fs.checkpoint.dir which is for secondary Namenode.
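A sketch of these three properties in hdfs-site.xml follows; the directory paths are placeholders to be replaced with your own local disk locations:

```xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop/data</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/var/lib/hadoop/checkpoint</value>
  </property>
</configuration>
```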

How to come out of the insert mode?
Press ESC to leave insert mode; then type :q (if you have not written anything) or :wq (if you have written anything in the file) and press ENTER.

What is Cloudera and why is it used?
Cloudera is a company that provides a distribution of Apache Hadoop. On the Cloudera VM, cloudera is also the user created by default. It is used for easing the deployment and use of Hadoop for data processing.

What happens if you get a ‘connection refused java exception’ when you type hadoop fsck /?
It could mean that the Namenode is not working on your VM.


What does ‘jps’ command do?
This command checks whether your Namenode, datanode, task tracker, job tracker, etc are working or not.

How can I restart Namenode?
1. Run stop-all.sh and then run start-all.sh, OR
2. Type sudo hdfs (press enter), su-hdfs (press enter), /etc/init.d/ha (press enter) and then /etc/init.d/hadoop-namenode start (press enter).

What is the full form of fsck?
Full form of fsck is File System Check.

How can we check whether the Namenode is working or not?
To check whether the Namenode is working or not, use the command /etc/init.d/hadoop-namenode status, or simply run jps.

What does the mapred.job.tracker property do?
The mapred.job.tracker property specifies which of your nodes acts as the job tracker.
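For illustration, this property lives in mapred-site.xml; a minimal sketch follows (the hostname is a placeholder, and 8021 is a commonly used job tracker RPC port):

```xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker_host:8021</value>
  </property>
</configuration>
```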

What does /etc/init.d do?
/etc/init.d is where daemons (services) are placed, and where you can see the status of these daemons. It is very Linux specific and has nothing to do with Hadoop.

How can we look for the Namenode in the browser?
To look at the Namenode in the browser, you don’t use localhost:8021; the port number for the Namenode web UI is 50070.

How to change from SU to Cloudera?
To change from SU(super user) to Cloudera just type exit.

Which files are used by the startup and shutdown commands?
Slaves and Masters are used by the startup and the shutdown commands.

What do slaves consist of?
Slaves consist of a list of hosts, one per line, that host datanode and task tracker servers.

What do masters consist of?
Masters contain a list of hosts, one per line, that are to host secondary namenode servers.

What does hadoop-env.sh do?
hadoop-env.sh provides the environment for Hadoop to run. JAVA_HOME is set over here.

Can we have multiple entries in the master files?
Yes, we can have multiple entries in the Master files.

Where is hadoop-env.sh file present?
hadoop-env.sh file is present in the conf location.

In HADOOP_PID_DIR, what does PID stand for?
PID stands for ‘Process ID’.

What does /var/hadoop/pids do?
It stores the PID.

What does hadoop-metrics.properties file do?
hadoop-metrics.properties is used for ‘Reporting‘ purposes. It controls the reporting for Hadoop. The default status is ‘not to report‘.

What are the network requirements for Hadoop?
Hadoop core uses the secure shell (SSH) to launch the server processes on the slave nodes. It requires a password-less SSH connection between the master and all the slaves and the secondary machines.
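A minimal sketch of enabling password-less SSH, shown here for localhost as in pseudo-distributed mode. For a real cluster you would instead run ssh-copy-id for each slave (hadoop@slave1 below is a placeholder account and hostname), which does the same authorized_keys append on the remote host:

```shell
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate a key pair with an empty passphrase if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
# Authorize the public key locally; for remote slaves you would run:
#   ssh-copy-id hadoop@slave1
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After this, ssh localhost (or ssh to each slave) should log in without prompting for a password.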

Why do we need password-less SSH in a fully distributed environment?
We need password-less SSH in a fully distributed environment because when the cluster is live and running, the communication is very frequent. The job tracker should be able to send a task to a task tracker quickly.

Does this lead to security issues?
No, not at all. A Hadoop cluster is an isolated cluster, and generally it has nothing to do with the internet. It has a different kind of configuration, so we needn’t worry about that kind of security breach, for instance, someone hacking in over the internet. Hadoop has a very secure way to connect to other machines to fetch and process data.

On which port does SSH work?
SSH works on Port No. 22, though it can be configured. 22 is the default Port number.

Can you tell us more about SSH?
SSH is nothing but secure shell communication; it is a protocol that works on port 22, and when you do an SSH, what you normally require is a password.

Why is a password needed in SSH localhost?
A password is required in SSH for security, and in situations where password-less communication is not set up.

Do we need to give a password, even if the key is added in SSH?
Yes, password is still required even if the key is added in SSH.

What if a Namenode has no data?
If a Namenode has no data it is not a Namenode. Practically, Namenode will have some data.

What happens to job tracker when Namenode is down?
When Namenode is down, your cluster is OFF, this is because Namenode is the single point of failure in HDFS.

What happens to a Namenode, when job tracker is down?
When a job tracker is down, it will not be functional but Namenode will be present. So, cluster is accessible if Namenode is working, even if the job tracker is not working.

Can you give us some more details about SSH communication between Masters and the Slaves?
SSH is a password-less secure communication where data packets are sent across to the slaves in a particular format. SSH is used not only between masters and slaves but also between any two hosts.

What is formatting of the DFS?
Just like a disk on Windows, the DFS is formatted for proper structuring. This is not usually done on a running cluster, as it formats the Namenode too.

Does the HDFS client decide the input split, or the Namenode?
No, the client does not decide. The input split is already specified in one of the configuration settings.

In Cloudera there is already a cluster, but if I want to form a cluster on Ubuntu can we do it?
Yes, you can go ahead with this! There are installation steps for creating a new cluster. You can uninstall your present cluster and install the new cluster.

Can we create a Hadoop cluster from scratch?
Yes we can do that also once we are familiar with the Hadoop environment.

Can we use Windows for Hadoop?
Actually, Red Hat Linux or Ubuntu are the best Operating Systems for Hadoop. Windows is not used frequently for installing Hadoop as there are many support problems attached with Windows. Thus, Windows is not a preferred environment for Hadoop.


