Document overview: Lecture 13
1. Redistributing the cluster roles
Three machines; the daemon roles are redistributed as follows:
host2 (NameNode, DataNode, TaskTracker)
host6 (SecondaryNameNode, DataNode, TaskTracker)
host8 (JobTracker, DataNode, TaskTracker)
Designate host6 as the SecondaryNameNode:
vi conf/masters    (list host6 in this file)
scp conf/masters host6:/home/hadoop/hadoop-
scp conf/masters host8:/home/hadoop/hadoop-
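A minimal sketch of the step above, assuming the Hadoop 1.x convention that conf/masters lists the SecondaryNameNode host, one per line (the destination paths keep the version elided, as in the source):

```shell
# Sketch: write the masters file as the step above describes.
mkdir -p conf                 # stand-in for the Hadoop conf directory
echo "host6" > conf/masters   # conf/masters names the SecondaryNameNode host
cat conf/masters              # prints: host6
# Then copy it to the other nodes (install path/version elided in the source):
# scp conf/masters host6:/home/hadoop/hadoop-<version>/conf/
# scp conf/masters host8:/home/hadoop/hadoop-<version>/conf/
```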
vi conf/hdfs-
<property>
  <name></name>
  <value>host2:50070</value>
</property>
<property>
  <name></name>
  <value>host6:50090</value>
</property>
scp conf/hdfs- host6:/home/hadoop/hadoop--
scp conf/hdfs- host8:/home/hadoop/hadoop--
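The property names in the hdfs- file above were lost in extraction. Given the values (50070 is the NameNode web UI port, 50090 the SecondaryNameNode web UI port), the Hadoop 1.x names are presumably dfs.http.address and dfs.secondary.http.address; a hedged reconstruction:

```xml
<!-- property names are assumptions; values are copied from the source -->
<property>
  <name>dfs.http.address</name>
  <value>host2:50070</value>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>host6:50090</value>
</property>
```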
Designate host8 as the JobTracker:
vi conf/mapred-
<property>
  <name></name>
  <value>host8:9001</value>
</property>
scp conf/mapred- host6:/home/hadoop/hadoop--
scp conf/mapred- host8:/home/hadoop/hadoop--
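Here the missing property name is presumably mapred.job.tracker, the JobTracker RPC address in Hadoop 1.x, and the truncated file is presumably conf/mapred-site.xml; a hedged reconstruction:

```xml
<!-- property name is an assumption; value is copied from the source -->
<property>
  <name>mapred.job.tracker</name>
  <value>host8:9001</value>
</property>
```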
vi conf/core-
<property>
  <name></name>
  <value>/home/hadoop/dfs/filesystem/namesecondary</value>
</property>
scp conf/core- host6:/home/hadoop/hadoop--
scp conf/core- host8:/home/hadoop/hadoop--
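The value above is a checkpoint directory, so the missing property name is presumably fs.checkpoint.dir, the directory where the Hadoop 1.x SecondaryNameNode stores merged images; a hedged reconstruction:

```xml
<!-- property name is an assumption; value is copied from the source -->
<property>
  <name>fs.checkpoint.dir</name>
  <value>/home/hadoop/dfs/filesystem/namesecondary</value>
</property>
```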
Configure host8:
The start- script is launched from host8 and must reach every node over passwordless SSH, so on host8 execute:
ssh-keygen -t rsa    (empty passphrase, default key path)
ssh-copy-id -i .ssh/ ******@host2
ssh-copy-id -i .ssh/ ******@host6
ssh-copy-id -i .ssh/ ******@host8
After this, host8 can log in to host2, host6, and itself over ssh without a password:
ssh host2
ssh host6
ssh host8
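The key-generation step can be reproduced locally without touching the real ~/.ssh, as a sketch; the truncated public-key path in the ssh-copy-id lines is presumably .ssh/id_rsa.pub (an assumption), and the masked user names stay elided:

```shell
# Sketch: generate a passwordless RSA key pair into a throwaway directory.
# -N "" gives the empty passphrase the step above asks for; -f sets the path.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$tmpdir/id_rsa" -q
ls "$tmpdir"    # id_rsa (private key) and id_rsa.pub (public key)
# On the real host8, the public key is then pushed to each node:
# ssh-copy-id -i "$tmpdir/id_rsa.pub" <user>@host2   # user masked in source
```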
Append to /home/hadoop/.bashrc:
export PATH=/home/hadoop/hadoop-:$PATH
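A hedged sketch of that PATH change: in a Hadoop 1.x layout the start-/stop scripts live under the install's bin/ subdirectory, so adding that to PATH lets the start- commands resolve by name. The install root below is an assumption (the version suffix is elided in the source):

```shell
# Assumed install root; the real version suffix is elided in the source.
HADOOP_HOME=/home/hadoop/hadoop-1.x
export PATH="$HADOOP_HOME/bin:$PATH"
echo "$PATH" | cut -d: -f1    # prints: /home/hadoop/hadoop-1.x/bin
```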
On host2, run start-
On host8, run start-
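The two truncated script names above are presumably the Hadoop 1.x start-dfs.sh and start-mapred.sh (an assumption); which host runs which follows from the role layout, sketched here as a run-book fragment:

```shell
# Assumption: Hadoop 1.x script names reconstructed, not from the source.
# start-dfs.sh runs on the NameNode host (host2); it also starts the
# DataNodes and, via conf/masters, the SecondaryNameNode on host6.
#   [host2]$ bin/start-dfs.sh
# start-mapred.sh runs on the JobTracker host (host8); it starts the
# TaskTrackers on all slave nodes.
#   [host8]$ bin/start-mapred.sh
```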
2. SecondaryNameNode
ssh host6
Stop the secondarynamenode:
hadoop-- stop secondarynamenode
Force a merge (checkpoint) of the fsimage and edits files:
hadoop- secondarynamenode -checkpoint force
Start the secondarynamenode:
hadoop-- start secondarynamenode
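The three truncated commands above, reconstructed under the assumption of the standard Hadoop 1.x script names (the per-daemon hadoop-daemon.sh control script and the hadoop launcher), all run on host6:

```shell
# Assumption: standard Hadoop 1.x command names; run on host6 (the SNN host).
bin/hadoop-daemon.sh stop secondarynamenode     # stop the periodic checkpointer
bin/hadoop secondarynamenode -checkpoint force  # merge fsimage + edits once, now
bin/hadoop-daemon.sh start secondarynamenode    # resume periodic checkpointing
```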
3、启