Saturday 7 September 2013

Hadoop-1.2.1 Cluster Commissioning and Decommissioning Nodes

To add new nodes to the cluster:

1. Add the network addresses of the new nodes to the include file.

hdfs-site.xml
<property>
  <name>dfs.hosts</name>
  <value>/<hadoop-home>/conf/include</value>
</property>


mapred-site.xml
<property>
  <name>mapred.hosts</name>
  <value>/<hadoop-home>/conf/include</value>
</property>


DataNodes that are permitted to connect to the NameNode are listed in a
file whose name is set by the dfs.hosts property.

The include file resides on the NameNode's local filesystem, and it contains a line for each DataNode, specified by network address (as reported by the DataNode; you can see what this is by looking at the NameNode's web UI).

If you need to specify multiple network addresses for a DataNode, put them all on one line, separated by whitespace. A typical include file lists one node per line, e.g.:
slave01
slave02
slave03
.....
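
For the multiple-address case just mentioned, a single DataNode's entry might look like this (slave04 and 192.168.1.104 are made-up values, shown only as a sketch of the format):
slave04 192.168.1.104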


Similarly, TaskTrackers that may connect to the JobTracker are listed in a file whose name is set by the mapred.hosts property.

In most cases, there is one shared file, referred to as the include file, that both dfs.hosts and mapred.hosts refer to, since nodes in the cluster run both DataNode and TaskTracker daemons.

2. Update the NameNode with the new set of permitted DataNodes using this
command:
% hadoop dfsadmin -refreshNodes

3. Update the JobTracker with the new set of permitted TaskTrackers using this command:
% hadoop mradmin -refreshNodes

4. Update the slaves file with the new nodes, so that they are included in future
operations performed by the Hadoop control scripts.
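
For example, appending a new node's hostname to the slaves file could look like this (slave04 is a made-up hostname; the path follows the placeholder convention used above):
% echo slave04 >> /<hadoop-home>/conf/slaves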

5. Start the new DataNode and TaskTracker daemons. On each new node, run:
% hadoop-daemon.sh start datanode

% hadoop-daemon.sh start tasktracker

6. Check that the new DataNodes and TaskTrackers appear in the web UI.
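
In addition to the web UI, a quick command-line check (standard dfsadmin usage, not shown in the original post) lists the DataNodes currently registered with the NameNode:
% hadoop dfsadmin -report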


To remove nodes from the cluster:

1. Add the network addresses of the nodes to be decommissioned to the exclude file. Do not update the include file at this point.

hdfs-site.xml
<property>
  <name>dfs.hosts.exclude</name>
  <value>/<hadoop-home>/conf/exclude</value>
</property>

mapred-site.xml
<property>
  <name>mapred.hosts.exclude</name>
  <value>/<hadoop-home>/conf/exclude</value>
</property>

The decommissioning process is controlled by an exclude file, which for HDFS is set by the dfs.hosts.exclude property and for MapReduce by the mapred.hosts.exclude property. It is often the case that these properties refer to the same file. 

The exclude file lists the nodes that are not permitted to connect to the cluster.
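
As an illustration (slave03 is a made-up hostname standing in for a node being retired), the exclude file uses the same one-node-per-line format as the include file:
slave03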

2. Update the NameNode with the new set of permitted DataNodes, using this
command:
% hadoop dfsadmin -refreshNodes

3. Update the JobTracker with the new set of permitted TaskTrackers using this command:
% hadoop mradmin -refreshNodes

4. Go to the web UI and check whether the admin state has changed to “Decommission In Progress” for the DataNodes being decommissioned. They will start copying their blocks to other DataNodes in the cluster.

5. When all the DataNodes report their state as “Decommissioned,” all the blocks have been replicated. Shut down the decommissioned nodes.
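
A minimal command-line sketch of this check and shutdown, assuming standard Hadoop-1.x tooling (run the stop commands on each node being retired; dfsadmin -report includes a Decommission Status field for every DataNode):
% hadoop dfsadmin -report
% hadoop-daemon.sh stop datanode
% hadoop-daemon.sh stop tasktracker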

6. Remove the nodes from the include file (and from the slaves file), and run:
% hadoop dfsadmin -refreshNodes
% hadoop mradmin -refreshNodes
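
For example, deleting the retired node's entry from the include and slaves files could look like this (slave03 is the same made-up hostname as above; GNU sed's -i flag edits the files in place):
% sed -i '/^slave03$/d' /<hadoop-home>/conf/include /<hadoop-home>/conf/slaves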


Note: For Hadoop-2.6.0, see the companion post, Hadoop-2.6.0 Cluster Commissioning and Decommissioning Nodes.
