
Steps to install Hadoop 2.2.0 release on multi-node cluster

by igooo 2014. 5. 9.

For HBase, install Hadoop and ZooKeeper, then test the HBase integration.


Environment

  • OS : Ubuntu 12.04
  • Hadoop : Hadoop 2.2.0
  • ZooKeeper : ZooKeeper 3.4.5
  • HBase : HBase 0.98.0
  • Java : oracle java 7

The versions were chosen according to the HBase support matrix in the document below.

http://hbase.apache.org/book/configuration.html#hadoop



Java

$ sudo add-apt-repository ppa:webupd8team/java

$ sudo apt-get update

$ sudo apt-get install oracle-java7-installer

$ java -version

java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)



SSH

Make sure that the master is able to do a password-less ssh to all the slaves.

$ su - hduser
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
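The same public key must also be authorized on every slave; `ssh-copy-id` does the remote append (slave1/slave2 are example hostnames). The helper below is a small sketch for checking whether a key is already present in an authorized_keys file; `key_authorized` is a hypothetical name, not a standard tool.

```shell
# Distribute the public key to each slave (hostnames are examples):
#   for host in slave1 slave2; do ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@"$host"; done

# Sketch: check whether a public key line is already in an authorized_keys file.
key_authorized() {
  # $1 = public key file, $2 = authorized_keys file
  grep -qxF "$(cat "$1")" "$2"
}

# Example: key_authorized ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys && echo "already authorized"
```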



Edit ~/.bashrc

# java

export JAVA_HOME=/usr/lib/jvm/java-7-oracle


# hadoop

export HADOOP_HOME={hadoop install directory}

export PATH=$PATH:$HADOOP_HOME/bin

export PATH=$PATH:$HADOOP_HOME/sbin

export HADOOP_MAPRED_HOME=${HADOOP_HOME}

export HADOOP_COMMON_HOME=${HADOOP_HOME}

export HADOOP_HDFS_HOME=${HADOOP_HOME}

export YARN_HOME=${HADOOP_HOME}

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop


# Native Path

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
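After `source ~/.bashrc`, a quick sanity check catches typos in the variables above before Hadoop ever starts. This is a sketch; `check_hadoop_env` is a hypothetical helper, not part of Hadoop.

```shell
# Sketch: verify the environment variables above resolve to real paths.
check_hadoop_env() {
  [ -n "$JAVA_HOME" ]       || { echo "JAVA_HOME not set"; return 1; }
  [ -d "$HADOOP_HOME" ]     || { echo "HADOOP_HOME is not a directory"; return 1; }
  [ -d "$HADOOP_CONF_DIR" ] || { echo "HADOOP_CONF_DIR is not a directory"; return 1; }
  echo "hadoop environment OK"
}
```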



Hadoop

Download

$ wget http://mirror.apache-kr.org/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz

$ tar zxvf hadoop-2.2.0.tar.gz
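Mirrors occasionally serve truncated files, so it is worth confirming the archive is readable before extracting. A small sketch (`verify_tarball` is a hypothetical helper):

```shell
# Sketch: verify the downloaded tarball is intact before extracting it.
verify_tarball() {
  if tar tzf "$1" > /dev/null 2>&1; then
    echo "tarball OK"
  else
    echo "tarball corrupt"
    return 1
  fi
}

# Example: verify_tarball hadoop-2.2.0.tar.gz && tar zxvf hadoop-2.2.0.tar.gz
```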



Configuration

Set Java in hadoop-env.sh and yarn-env.sh:

export JAVA_HOME=/usr/lib/jvm/java-7-oracle

export HADOOP_HOME={hadoop install directory}

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"


Site-specific configuration

Edit etc/hadoop/core-site.xml, etc/hadoop/hdfs-site.xml, etc/hadoop/yarn-site.xml and etc/hadoop/mapred-site.xml.


core-site.xml

<configuration>

        <property>

                <name>fs.defaultFS</name>

                <value>hdfs://master:9000</value>

        </property>

        <property>

                <name>io.file.buffer.size</name>

                <value>131072</value>

        </property>

        <property>

                <name>hadoop.tmp.dir</name>

                <value>/home/hadoop/tmp</value>

        </property>

</configuration>
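Values in these files can be read back without starting Hadoop. A grep/sed sketch, assuming the `<value>` element sits on the line right after its `<name>` (as in a normally formatted file); `get_prop` is a hypothetical helper:

```shell
# Sketch: pull a property value out of a *-site.xml file.
get_prop() {
  # $1 = config file, $2 = property name
  grep -A1 "<name>$2</name>" "$1" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Example: get_prop etc/hadoop/core-site.xml fs.defaultFS   (expect hdfs://master:9000)
```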


hdfs-site.xml

<configuration>

        <property>

                <name>dfs.permissions</name>

                <value>false</value>

        </property>

        <property>

                <name>dfs.blocksize</name>

                <value>268435456</value>

        </property>

        <property>

                <name>dfs.namenode.handler.count</name>

                <value>100</value>

        </property>

        <property>

                <name>dfs.replication</name>

                <value>3</value>

        </property>

</configuration>
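Note that `dfs.blocksize` is given in bytes: 268435456 bytes is a 256 MB block, double the 128 MB default in Hadoop 2.2.0. The arithmetic in one line:

```shell
# dfs.blocksize in bytes -> megabytes
echo $(( 268435456 / 1024 / 1024 ))
```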


mapred-site.xml

<configuration>

        <property>

                <name>mapreduce.framework.name</name>

                <value>yarn</value>

        </property>

        <property>

                <name>mapreduce.map.memory.mb</name>

                <value>1536</value>

        </property>

        <property>

                <name>mapreduce.map.java.opts</name>

                <value>-Xmx1024M</value>

        </property>

        <property>

                <name>mapreduce.reduce.memory.mb</name>

                <value>3072</value>

        </property>

        <property>

                <name>mapreduce.reduce.java.opts</name>

                <value>-Xmx2560M</value>

        </property>

        <property>

                <name>mapreduce.task.io.sort.mb</name>

                <value>512</value>

        </property>

        <property>

                <name>mapreduce.task.io.sort.factor</name>

                <value>100</value>

        </property>

        <property>

                <name>mapreduce.reduce.shuffle.parallelcopies</name>

                <value>50</value>

        </property>

</configuration>
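Each task's JVM heap (`*.java.opts`) must fit inside its container allocation (`*.memory.mb`), with headroom left for non-heap memory. A quick sanity check of the numbers above (`heap_fits` is a hypothetical helper):

```shell
# Sketch: assert a task's JVM heap is smaller than its container.
heap_fits() {
  # $1 = container MB, $2 = heap MB
  [ "$2" -lt "$1" ] && echo "heap ${2}M fits in ${1}M container"
}

heap_fits 1536 1024    # map task: 1024M heap inside a 1536M container
```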


yarn-site.xml

<configuration>

       <property>

                <name>yarn.nodemanager.aux-services</name>

                <value>mapreduce_shuffle</value>

        </property>

        <property>

                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>

                <value>org.apache.hadoop.mapred.ShuffleHandler</value>

        </property>

        <property>

                <name>yarn.resourcemanager.resource-tracker.address</name>

                <value>master:8025</value>

        </property>

        <property>

                <name>yarn.resourcemanager.scheduler.address</name>

                <value>master:8030</value>

        </property>

        <property>

                <name>yarn.resourcemanager.address</name>

                <value>master:8035</value>

        </property>

</configuration>



Add slaves

Add the nodes to the slaves file:

slave1

slave2
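The file lives next to the other configs and holds one hostname per line; a here-document writes it in one step (slave1/slave2 are example hostnames, path relative to $HADOOP_HOME):

```shell
# Write the slaves file, one hostname per line.
mkdir -p etc/hadoop
cat > etc/hadoop/slaves <<'EOF'
slave1
slave2
EOF
```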



Format the namenode

$ bin/hdfs namenode -format



Start/Stop Hadoop

start hadoop

$ sbin/hadoop-daemon.sh start namenode

$ sbin/hadoop-daemons.sh start datanode

$ sbin/yarn-daemon.sh start resourcemanager

$ sbin/yarn-daemons.sh start nodemanager

$ sbin/mr-jobhistory-daemon.sh start historyserver


stop hadoop

$ sbin/mr-jobhistory-daemon.sh stop historyserver

$ sbin/yarn-daemons.sh stop nodemanager

$ sbin/yarn-daemon.sh stop resourcemanager

$ sbin/hadoop-daemons.sh stop datanode

$ sbin/hadoop-daemon.sh stop namenode



Check installation

on the master

$ jps

13934 DataNode

17412 ResourceManager

25521 NameNode

7432 JobHistoryServer

5093 Jps

14190 NodeManager


on the slaves

$ jps

11433 NodeManager

11175 DataNode

17350 Jps


If any daemon is missing, check its log under $HADOOP_HOME/logs.
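The per-node check can be scripted: grep a `jps` listing for the daemons each role requires (names as in the output above). A sketch; `check_daemons` is a hypothetical helper:

```shell
# Sketch: fail if any expected daemon is absent from a `jps` listing.
check_daemons() {
  local jps_out="$1"; shift
  local d
  for d in "$@"; do
    printf '%s\n' "$jps_out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}

# On the master:
#   check_daemons "$(jps)" NameNode DataNode ResourceManager NodeManager JobHistoryServer
# On a slave:
#   check_daemons "$(jps)" DataNode NodeManager
```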



Web interface

  • http://master:50070/dfshealth.jsp
  • http://master:8088/cluster
  • http://master:19888/jobhistory



Reference

http://hadoop.apache.org/

http://hidka.tistory.com/220

http://raseshmori.wordpress.com/2012/10/14/install-hadoop-nextgen-yarn-multi-node-cluster/



Next : Install ZooKeeper

