
Friday 18 March 2016

Hadoop-2.6.0 Cluster Commissioning and Decommissioning Nodes

To add new nodes to the cluster:

1. Add the network addresses of the new nodes to the include file.

hdfs-site.xml
<property>
<name>dfs.hosts</name>
<value>/<hadoop-home>/conf/include</value>
</property>


yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value>/<hadoop-home>/conf/include</value>
</property>


DataNodes that are permitted to connect to the NameNode are specified in a
file whose name is set by the dfs.hosts property.

The include file resides on the NameNode's local filesystem, and it contains one line for each DataNode, identified by network address (as reported by the DataNode; you can see what this is by looking at the NameNode's web UI).
If you need to specify multiple network addresses for a DataNode, put them on one line, separated by whitespace.
e.g.:
slave01
slave02
slave03
.....


Similarly, NodeManagers that may connect to the ResourceManager are specified in a file whose name is specified by the yarn.resourcemanager.nodes.include-path property. 

In most cases, there is one shared file, referred to as the include file, that both dfs.hosts and  yarn.resourcemanager.nodes.include-path refer to, since nodes in the cluster run both DataNode and NodeManager daemons.

2. Update the NameNode with the new set of permitted DataNodes using this
command:
hdfs dfsadmin -refreshNodes

3. Update the ResourceManager with the new set of permitted NodeManagers using this command:
yarn rmadmin -refreshNodes

4. Update the slaves file with the new nodes, so that they are included in future
operations performed by the Hadoop control scripts.

5. Start the new DataNode and NodeManager daemons on each new node, as shown below.
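A minimal sketch, assuming the standard Hadoop 2.x daemon scripts are on each new node's path:
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager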

6. Check that the new DataNodes and NodeManagers appear in the web UI.
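You can also verify this from the command line; the following commands list the DataNodes known to the NameNode and the NodeManagers registered with the ResourceManager:
hdfs dfsadmin -report
yarn node -list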


To remove nodes from the cluster:

1. Add the network addresses of the nodes to be decommissioned to the exclude file. Do not update the include file at this point.

hdfs-site.xml
<property>
<name>dfs.hosts.exclude</name>
<value>/<hadoop-home>/conf/exclude</value>
</property>

yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value>/<hadoop-home>/conf/exclude</value>
</property>

The decommissioning process is controlled by an exclude file, which for HDFS is set by the dfs.hosts.exclude property and for YARN by the yarn.resourcemanager.nodes.exclude-path property. It is often the case that these properties refer to the same file. The exclude file lists the nodes that are not permitted to connect to the cluster.
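Like the include file, the exclude file is a plain text file with one node to decommission per line, for example:
slave02
slave03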

2. Update the NameNode with the new set of permitted DataNodes, using this
command:
hdfs dfsadmin -refreshNodes

3. Update the ResourceManager with the new set of permitted NodeManagers using this command:
yarn rmadmin -refreshNodes

4. Go to the web UI and check whether the admin state has changed to “Decommission In Progress” for the DataNodes being decommissioned. They will start copying their blocks to other DataNodes in the cluster.
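The same state is also visible from the command line; the Decommission Status field in the per-node output of the following command shows whether each DataNode is normal, decommissioning, or decommissioned:
hdfs dfsadmin -report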

5. When all the DataNodes report their state as “Decommissioned,” all the blocks have been replicated. Shut down the decommissioned nodes.
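One way to shut them down (a sketch, assuming the standard Hadoop 2.x daemon scripts are available on each decommissioned node) is to stop the daemons on each of those nodes:
hadoop-daemon.sh stop datanode
yarn-daemon.sh stop nodemanager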

6. Remove the nodes from the include file, and run:
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes


Note: For the Hadoop-1.2.1 steps, refer to the Hadoop-1.2.1 Cluster Commissioning and Decommissioning Nodes post.



Monday 12 October 2015

What is Hadoop Rack Awareness and How to configure it in a cluster

Details about Hadoop Rack Awareness

The Hadoop HDFS and Map/Reduce components are rack-aware.
The NameNode and the JobTracker obtain the rack id of the slaves in the cluster by invoking an API resolve in an administrator-configured module. The API resolves the slave's DNS name (or IP address) to a rack id. The module to use can be configured using the configuration item topology.node.switch.mapping.impl. The default implementation runs a script/command configured using topology.script.file.name.
If topology.script.file.name is not set, the rack id /default-rack is returned for any passed IP address. An additional configuration item on the Map/Reduce side is mapred.cache.task.levels, which determines the number of levels (in the network topology) of caches.
For example, if it has the default value of 2, two levels of caches will be constructed: one for hosts (host -> task mapping) and another for racks (rack -> task mapping).

What is Rack Awareness in Hadoop

For small clusters in which all servers are connected by a single switch, there are only two levels of locality: “on-machine” and “off-machine.” When loading data from a DataNode’s local drive into HDFS, the NameNode will schedule one copy to go into the local DataNode, and will pick two other machines at random from the cluster.
For larger Hadoop installations which span multiple racks, it is important to ensure that replicas of data exist on multiple racks. This way, the loss of a switch does not render portions of the data unavailable due to all replicas being underneath it.
HDFS can be made rack-aware by the use of a script which allows the master node to map the network topology of the cluster. While alternate configuration strategies can be used, the default implementation allows you to provide an executable script which returns the “rack address” of each of a list of IP addresses.
The network topology script receives as arguments one or more IP addresses of nodes in the cluster. It returns on stdout a list of rack names, one for each input. The input and output order must be consistent.
To set the rack mapping script, specify the key topology.script.file.name in conf/hadoop-site.xml. This provides a command to run to return a rack id; it must be an executable script or program. By default, Hadoop will attempt to send a set of IP addresses to the script as several separate command line arguments. You can control the maximum acceptable number of arguments with the topology.script.number.args key.
Rack ids in Hadoop are hierarchical and look like path names. By default, every node has a rack id of /default-rack. You can set rack ids for nodes to any arbitrary path, e.g., /foo/bar-rack. Path elements further to the left are higher up the tree. Thus a reasonable structure for a large installation may be /top-switch-name/rack-name.
Hadoop rack ids are not currently expressive enough to handle an unusual routing topology such as a 3-d torus; they assume that each node is connected to a single switch which in turn has a single upstream switch. This is not usually a problem, however. Actual packet routing will be directed using the topology discovered by or set in switches and routers. The Hadoop rack ids will be used to find “near” and “far” nodes for replica placement (and in 0.17, MapReduce task placement).
The following example script performs rack identification based on IP addresses given a hierarchical IP addressing scheme enforced by the network administrator. This may work directly for simple installations; more complex network configurations may require a file- or table-based lookup process. Care should be taken in that case to keep the table up-to-date as nodes are physically relocated, etc. This script requires that the maximum number of arguments be set to 1.
#!/bin/bash
# Set rack id based on IP address.
# Assumes the network administrator has complete control
# over IP addresses assigned to nodes and that they are
# in the 10.x.y.z address space. Assumes that
# IP addresses are distributed hierarchically, e.g.,
# 10.1.y.z is one data center segment and 10.2.y.z is another;
# 10.1.1.z is one rack, 10.1.2.z is another rack in
# the same segment, etc.
#
# This is invoked with an IP address as its only argument.

# get the IP address from the input
ipaddr=$1

# select "x.y" and convert it to "x/y"
segments=`echo $ipaddr | cut --delimiter=. --fields=2-3 --output-delimiter=/`
echo /${segments}
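For example, saving the script above as rack-topology.sh (the name is just for illustration) and invoking it with an address in data-center segment 10.1, rack 10.1.2 would give:
./rack-topology.sh 10.1.2.15
/1/2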



Configuring Rack Awareness in Hadoop

Hadoop divides data into multiple file blocks and stores them on different machines. If rack awareness is not configured, there is a possibility that Hadoop will place all the replicas of a block in the same rack, which results in loss of data when that rack fails.
Although rack failure is not as frequent as node failure, this can be avoided by explicitly configuring rack awareness in core-site.xml.
Rack awareness is configured using the property topology.script.file.name in core-site.xml.
If topology.script.file.name is not configured, /default-rack is returned for any IP address, i.e., all nodes are placed on the same rack.
Configuring rack awareness in Hadoop involves two steps:
Configure topology.node.switch.mapping.impl and topology.script.file.name in core-site.xml:
<property>
<name>topology.node.switch.mapping.impl</name>
<value>org.apache.hadoop.net.ScriptBasedMapping</value>
<description>The default implementation of the DNSToSwitchMapping. It invokes a script specified in topology.script.file.name to resolve node names. If the value for topology.script.file.name is not set, the default value of DEFAULT_RACK is returned for all node names.</description>
</property>

<property>
<name>topology.script.file.name</name>
<value>core/rack-awareness.sh</value>
</property>

Implement the rack-awareness.sh script as desired. A sample rack-awareness script and its topology data file are shown below.
1. Topology script file, named rack-awareness.sh

A sample Bash shell script:

#!/bin/bash
# Look up each host name or IP address passed as an argument in
# topology.data and print the corresponding rack id
# (or /default/rack if the node is not listed).

HADOOP_CONF=$HADOOP_HOME/conf 

while [ $# -gt 0 ] ; do
  nodeArg=$1
  exec< ${HADOOP_CONF}/topology.data 
  result="" 
  while read line ; do
    ar=( $line ) 
    if [ "${ar[0]}" = "$nodeArg" ] ; then
      result="${ar[1]}"
    fi
  done 
  shift 
  if [ -z "$result" ] ; then
    echo -n "/default/rack "
  else
    echo -n "$result "
  fi
done 

2. Topology data file, named topology.data

orienit.node1.com     /dc1/rack1
orienit.node2.com     /dc1/rack1
orienit.node3.com     /dc1/rack1
orienit.node11        /dc1/rack2
orienit.node12        /dc1/rack2
orienit.node13        /dc1/rack2
10.1.1.1              /dc1/rack3
10.1.1.2              /dc1/rack3
10.1.1.3              /dc1/rack3
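To sanity-check the mapping, you can invoke the script by hand with one or more node names or IP addresses (assuming it is saved as rack-awareness.sh next to topology.data; unknown.host stands for any node not listed in the file):
./rack-awareness.sh orienit.node1.com 10.1.1.2 unknown.host
/dc1/rack1 /dc1/rack3 /default/rack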

Sunday 11 October 2015

Setting up Apache ZooKeeper Cluster

This tutorial provides step-by-step instructions to configure and start up an Apache ZooKeeper 3.4.6 multi-node cluster.

Prerequisites for ZooKeeper

The first thing we need in order to install Apache ZooKeeper is multiple machines.

In this tutorial, we will be using the following virtual machines to install Apache ZooKeeper:

Parameter Name        Server 1        Server 2        Server 3
Host Name             node1           node2           node3
IP Address            192.168.0.1     192.168.0.2     192.168.0.3
Operating System      Ubuntu          Ubuntu          Ubuntu
No of CPU Cores       4               4               4
RAM                   6 GB            6 GB            6 GB

Apart from the above machines, please ensure that the following prerequisites are fulfilled so that you can follow this article without any issues:
  1. JDK 6 or higher installed on all the virtual machines
  2. JAVA_HOME variable set to the path where the JDK is installed
  3. Root access on all the virtual machines, as all the steps should ideally be performed by the root user
  4. Updated /etc/hosts file on all three servers with the details below
                   192.168.0.1   node1
                   192.168.0.2   node2
                   192.168.0.3   node3

Installing Apache ZooKeeper

The first step in installing Apache ZooKeeper is to download its binaries on all the servers. In this article, we will be installing Apache ZooKeeper 3.4.6 to set up the cluster; it can be downloaded from the Apache ZooKeeper download page.

Once the binaries have been downloaded on the servers, you can extract them to the directory where you would like ZooKeeper to be installed. We will refer to this directory as $ZOOKEEPER_HOME throughout this tutorial.
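For example, the download and extraction on each server might look like this (the archive URL and the /opt install directory are assumptions; adjust them to your environment):
cd /opt
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
tar -xzf zookeeper-3.4.6.tar.gz
export ZOOKEEPER_HOME=/opt/zookeeper-3.4.6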


Configuring Multi-node Cluster
Once Apache ZooKeeper has been extracted on all the servers, the next step is to configure them.
We don't need to mark any node as the leader during configuration, as the leader is automatically chosen by the ZooKeeper service. So, the configuration for all the nodes will be the same.

The first part of the configuration involves creating/updating a configuration file called zoo.cfg in the $ZOOKEEPER_HOME/conf directory with the following contents:


ZooKeeper Configuration - $ZOOKEEPER_HOME/conf/zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
# Where you would like ZooKeeper to save its data
dataDir=$ZOOKEEPER_HOME/data
# Where you would like ZooKeeper to log
dataLogDir=$ZOOKEEPER_HOME/logs
clientPort=2181
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
server.3=192.168.0.3:2888:3888

The first thing you need to do in the above zoo.cfg file is to replace the values of dataDir and dataLogDir with the directories where you would like ZooKeeper to save its data and logs, respectively (use absolute paths, since zoo.cfg does not expand environment variables). Now, let's talk about some of the important parts of the above configuration.

The clientPort property, as the name suggests, is the port that clients use to connect to the ZooKeeper service.
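For example, once the cluster is up, a client on any machine can connect to this port using the zkCli.sh shell that ships with ZooKeeper:
$ZOOKEEPER_HOME/bin/zkCli.sh -server 192.168.0.1:2181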

Next, let's talk about the last three entries, which are in the server.x=hostname:port1:port2 format.

Firstly, there are two port numbers, port1 (2888) and port2 (3888). The first is used by followers to connect to the leader, and the second is used for leader election.
Secondly, the x in server.x denotes the id of the node.

Each server.x row must have a unique id. Each server is assigned an id by creating a file named myid, one for each server, which resides in that server's data directory, as specified by the configuration file parameter dataDir.

The myid file consists of a single line containing only the text of that machine's id. So myid of server 1 would contain the text 1 and nothing else. 

The id must be unique within the ensemble and should have a value between 1 and 255.
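A quick way to create these files (a sketch, assuming dataDir points to the directory $ZOOKEEPER_HOME/data as in the zoo.cfg above) is to run the matching commands on each server:
# On 192.168.0.1 (server.1)
mkdir -p $ZOOKEEPER_HOME/data && echo "1" > $ZOOKEEPER_HOME/data/myid
# On 192.168.0.2 (server.2)
mkdir -p $ZOOKEEPER_HOME/data && echo "2" > $ZOOKEEPER_HOME/data/myid
# On 192.168.0.3 (server.3)
mkdir -p $ZOOKEEPER_HOME/data && echo "3" > $ZOOKEEPER_HOME/data/myid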


Starting Up Multi-node Cluster

Once you are all set up, the next step is to start the cluster.

On all the servers, go to the bin directory of Apache ZooKeeper and execute the following command:

$ZOOKEEPER_HOME/bin on all machines
./zkServer.sh start

You can execute the following command to check the status of Apache ZooKeeper:

$ZOOKEEPER_HOME/bin on all machines
./zkServer.sh status
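If everything is healthy, one server will report itself as the leader and the other two as followers; the status output ends with a Mode line, roughly like this:
Mode: leader      (on one of the servers)
Mode: follower    (on the other two servers)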


Stopping Multi-node Cluster

In order to stop Apache ZooKeeper, execute the following command on all the servers:

$ZOOKEEPER_HOME/bin on all machines
./zkServer.sh stop