Hadoop Example Execution Manual
Table of Contents
1. Overview of the Execution Guide
  1.1 Before Using the Service
  1.2 Service Environment
  1.3 About the Service
  1.4 About This Manual
2. JAR-Based Hadoop Execution Guide
  2.1 JAR WordCount
  2.2 JAR TeraSort (CPU-Bound Work)
3. Streaming-Based Hadoop Execution Guide
  3.1 Streaming WordCount (Ruby)
1. Overview of the Execution Guide

1.1 Before Using the Service

This manual assumes that the reader is familiar with a basic Linux environment.
This manual assumes that the reader has used Hadoop before.
This manual assumes that the Hadoop (ucloud MapReduce) service has already been provisioned and is running normally.

1.2 Service Environment

The recommended specifications for running the service smoothly are as follows.

Service operating environment
Category | Specification
Operating system | CentOS 5.4 64-bit
Hadoop | Hadoop 1.0.3 (stable)
Minimum number of virtual machine instances | 2 (Master (1) + Slave (1))

1.3 About the Service

The ucloud MapReduce service provisions Hadoop, the essential platform for big-data analysis. It supports running MapReduce programs for big-data analysis in both JAR and Streaming form.

Advantages of the service:
A Hadoop cluster can be set up with a simple request, so you do not have to spend a long time building the Hadoop big-data analysis platform yourself.

Composition of the service:
JAR / Streaming execution engines on top of the Hadoop platform

Definitions of service-related terms: for the reader's convenience, this manual uses some terms with the specific meanings defined in the table below.

Term | Description
MapReduce | A software framework created by Google to support distributed computing and published in 2004. It was developed to support parallel processing of petabyte-scale data on clusters built from unreliable commodity machines. The framework is organized around the map and reduce functions commonly used in functional programming.
JAR | Short for Java Archive, an archive file format. A JAR bundles many Java class files together with metadata and other resource files. A MapReduce program can be written as a Java program, packaged as a JAR file, and executed.
HDFS | Hadoop Distributed File System. The distributed file system used to store the user's data.
Hadoop Streaming | Unlike the JAR approach, where the entire MapReduce flow is implemented in Java, Hadoop Streaming is a utility that lets the user supply a map binary and a reduce binary written in a variety of scripting languages and run them as a MapReduce job.
Hadoop | A free Java software framework that supports distributed applications running on large commodity clusters and capable of processing massive amounts of data. It was originally developed to support distributed processing for Nutch and is a subproject of Apache Lucene. It implements the Hadoop Distributed File System, a counterpart to the Google File System, together with MapReduce.

1.4 About This Manual

To help the reader, this manual keeps its notation as consistent as possible.

Notation
The following notation is used throughout this manual. In command descriptions, arguments in < > are required and arguments in [ ] are optional. For example:

Command for running a JAR-based MapReduce program
hadoop@master:~$ hadoop jar <JAR file name> [main class name] [additional arguments...]
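Filled in with the concrete values used later in section 2.1, the notation above expands as shown below; here /mnt/hadoop/hadoop-examples-1.0.3.jar is the examples JAR shipped with this deployment, wordcount fills the main-class slot, and the last two arguments are the HDFS input and output directories.

hadoop@master:~$ hadoop jar /mnt/hadoop/hadoop-examples-1.0.3.jar wordcount hdfs_wordcount_input hdfs_wordcount_output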
2. JAR-Based Hadoop Execution Guide

This section explains how to run JAR-based Hadoop examples on the ucloud MapReduce service.

2.1 JAR WordCount

This subsection explains how to store the input data you want to word-count in the HDFS provided by the ucloud MapReduce service and then run the MapReduce job.

Storing the input data in HDFS

First, connect to the master instance node over ssh in order to copy the input data to the master virtual machine instance.

Jaeui-iMac:~ wja300$ ssh hadoop@x.x.x.x(master IP)

After logging in, create a directory in the home directory to hold the files to be word-counted.

hadoop@master:~$ cd ~
hadoop@master:~$ mkdir wordcount_input

Copy the input data into the directory created above. (The input data can be fetched from Swift, Amazon S3, or any file server; copy whatever data you want to analyze. In this guide, the files under a particular directory on the master instance node's local file system are used as the input data. That is, as shown below, the files under that directory are treated as the input files and copied into the wordcount_input directory created above.)

hadoop@master:~$ cp /mnt/hadoop/conf/* wordcount_input/
hadoop@master:~$ ls -al wordcount_input/
total 84
drwxr-xr-x 2 hadoop hadoop 4096 2012-09-05 19:06 .
drwxr-xr-x 7 hadoop hadoop 4096 2012-09-05 19:00 ..
-rw-r--r-- 1 hadoop hadoop 7457 2012-09-05 19:06 capacity-scheduler.xml
-rw-r--r-- 1 hadoop hadoop 535 2012-09-05 19:06 configuration.xsl
-rw-r--r-- 1 hadoop hadoop 276 2012-09-05 19:06 core-site.xml
-rw-r--r-- 1 hadoop hadoop 327 2012-09-05 19:06 fair-scheduler.xml
-rw-r--r-- 1 hadoop hadoop 2282 2012-09-05 19:06 hadoop-env.sh
-rw-r--r-- 1 hadoop hadoop 1488 2012-09-05 19:06 hadoop-metrics2.properties
-rw-r--r-- 1 hadoop hadoop 4644 2012-09-05 19:06 hadoop-policy.xml
-rw-r--r-- 1 hadoop hadoop 258 2012-09-05 19:06 hdfs-site.xml
-rw-r--r-- 1 hadoop hadoop 4441 2012-09-05 19:06 log4j.properties
-rw-r--r-- 1 hadoop hadoop 2033 2012-09-05 19:06 mapred-queue-acls.xml
-rw-r--r-- 1 hadoop hadoop 271 2012-09-05 19:06 mapred-site.xml
-rw-r--r-- 1 hadoop hadoop 7 2012-09-05 19:06 masters
-rw-r--r-- 1 hadoop hadoop 14 2012-09-05 19:06 slaves
-rw-r--r-- 1 hadoop hadoop 1243 2012-09-05 19:06 ssl-client.xml.example
-rw-r--r-- 1 hadoop hadoop 1195 2012-09-05 19:06 ssl-server.xml.example
-rw-r--r-- 1 hadoop hadoop 382 2012-09-05 19:06 taskcontroller.cfg

Once the input data has been copied into the wordcount_input directory as above, copy it into HDFS with the Hadoop commands below. (The larger the input data, the longer the copy takes, so allow it time to finish.)

Command for putting files into HDFS
$ hadoop fs -put <input data directory or file> <destination directory (on HDFS)>

Command for listing files in HDFS
$ hadoop fs -ls

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -put /home/hadoop/wordcount_input/ hdfs_wordcount_input
hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -ls
Found 1 item
drwxr-xr-x - hadoop supergroup 0 2012-09-05 19:19 /user/hadoop/hdfs_wordcount_input

Running word count on the stored input data

Now that the input data has been copied into the HDFS provided by the ucloud MapReduce service, let's run word count on it. The steps below read the input data directory hdfs_wordcount_input, run word count as a MapReduce job, and store the result in the hdfs_wordcount_output directory.

Command for running a JAR-based MapReduce program
hadoop@master:~$ hadoop jar <JAR file name> [main class name] [additional arguments...]

hadoop@master:~$ /mnt/hadoop/bin/hadoop jar /mnt/hadoop/hadoop-examples-1.0.3.jar wordcount hdfs_wordcount_input hdfs_wordcount_output
12/09/05 19:25:57 INFO input.FileInputFormat: Total input paths to process : 16
12/09/05 19:25:57 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/09/05 19:25:57 WARN snappy.LoadSnappy: Snappy native library not loaded
12/09/05 19:25:57 INFO mapred.JobClient: Running job: job_201208290147_0011
12/09/05 19:25:58 INFO mapred.JobClient: map 0% reduce 0%
12/09/05 19:26:16 INFO mapred.JobClient: map 25% reduce 0%
12/09/05 19:26:25 INFO mapred.JobClient: map 50% reduce 0%
12/09/05 19:26:28 INFO mapred.JobClient: map 50% reduce 12%
12/09/05 19:26:34 INFO mapred.JobClient: map 75% reduce 12%
12/09/05 19:26:37 INFO mapred.JobClient: map 75% reduce 16%
12/09/05 19:26:43 INFO mapred.JobClient: map 100% reduce 25%
12/09/05 19:26:52 INFO mapred.JobClient: map 100% reduce 100%
12/09/05 19:26:57 INFO mapred.JobClient: Job complete: job_201208290147_0011
12/09/05 19:26:57 INFO mapred.JobClient: Counters: 29
12/09/05 19:26:57 INFO mapred.JobClient: Job Counters
12/09/05 19:26:57 INFO mapred.JobClient: Launched reduce tasks=1
12/09/05 19:26:57 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=122961
12/09/05 19:26:57 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/09/05 19:26:57 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/09/05 19:26:57 INFO mapred.JobClient: Launched map tasks=16
12/09/05 19:26:57 INFO mapred.JobClient: Data-local map tasks=16
12/09/05 19:26:57 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=35830
12/09/05 19:26:57 INFO mapred.JobClient: File Output Format Counters
12/09/05 19:26:57 INFO mapred.JobClient: Bytes Written=15492
12/09/05 19:26:57 INFO mapred.JobClient: FileSystemCounters
12/09/05 19:26:57 INFO mapred.JobClient: FILE_BYTES_READ=22805
12/09/05 19:26:57 INFO mapred.JobClient: HDFS_BYTES_READ=28991
12/09/05 19:26:57 INFO mapred.JobClient: FILE_BYTES_WRITTEN=413062
12/09/05 19:26:57 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=15492
12/09/05 19:26:57 INFO mapred.JobClient: File Input Format Counters
12/09/05 19:26:57 INFO mapred.JobClient: Bytes Read=26853
12/09/05 19:26:57 INFO mapred.JobClient: Map-Reduce Framework
12/09/05 19:26:57 INFO mapred.JobClient: Map output materialized bytes=22895
12/09/05 19:26:57 INFO mapred.JobClient: Map input records=761
12/09/05 19:26:57 INFO mapred.JobClient: Reduce shuffle bytes=22895
12/09/05 19:26:57 INFO mapred.JobClient: Spilled Records=2206
12/09/05 19:26:57 INFO mapred.JobClient: Map output bytes=35857
12/09/05 19:26:57 INFO mapred.JobClient: CPU time spent (ms)=5860
12/09/05 19:26:57 INFO mapred.JobClient: Total committed heap usage (bytes)=2580676608
12/09/05 19:26:57 INFO mapred.JobClient: Combine input records=2600
12/09/05 19:26:57 INFO mapred.JobClient: SPLIT_RAW_BYTES=2138
12/09/05 19:26:57 INFO mapred.JobClient: Reduce input records=1103
12/09/05 19:26:57 INFO mapred.JobClient: Reduce input groups=796
12/09/05 19:26:57 INFO mapred.JobClient: Combine output records=1103
12/09/05 19:26:57 INFO mapred.JobClient: Physical memory (bytes) snapshot=2796535808
12/09/05 19:26:57 INFO mapred.JobClient: Reduce output records=796
12/09/05 19:26:57 INFO mapred.JobClient: Virtual memory (bytes) snapshot=10167959552
12/09/05 19:26:57 INFO mapred.JobClient: Map output records=2600

The MapReduce job has run successfully. Now let's check the word count result.

Checking the word count result

Looking at the contents of hdfs_wordcount_output, the HDFS directory in which the MapReduce result was stored, as shown below, you can see that the words in the input data have been counted correctly.
Command for printing the contents of a file in HDFS
$ hadoop fs -cat <file to read (on HDFS)>

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -cat hdfs_wordcount_output/*
"". 4
"*" 10
"alice,bob 10
"console" 1
"hadoop.root.logger". 1
"jks". 4
# 95
#*.sink.ganglia.dmax=jvm.metrics.threadsblocked=70,jvm.metrics.memheapusedm=40 1
#*.sink.ganglia.slope=jvm.metrics.gccount=zero,jvm.metrics.memheapusedm=both 1
#Default 1
#Security 1
#datanode.sink.file.filename=datanode-metrics.out 1
#datanode.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 1
#jobtracker.sink.file.filename=jobtracker-metrics.out 1
#jobtracker.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 1
#log4j.appender.drfa.maxbackupindex=30 1
#log4j.appender.drfa.layout.conversionpattern=%d{iso8601} 1
#log4j.appender.rfa.file=${hadoop.log.dir}/${hadoop.log.file} 1
#log4j.appender.rfa.maxbackupindex=30 1
#log4j.appender.rfa.maxfilesize=1mb 1
#log4j.appender.rfa.layout.conversionpattern=%d{iso8601} 2
#log4j.appender.rfa.layout=org.apache.log4j.patternlayout 1
#log4j.appender.rfa=org.apache.log4j.rollingfileappender 1
#log4j.logger.org.apache.hadoop.fs.fsnamesystem=debug 1
#log4j.logger.org.apache.hadoop.mapred.jobtracker=debug 1
#log4j.logger.org.apache.hadoop.mapred.tasktracker=debug 1
#maptask.sink.file.filename=maptask-metrics.out 1
#maptask.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 1
#namenode.sink.file.filename=namenode-metrics.out 1
#namenode.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 1
#new 1
#reducetask.sink.file.filename=reducetask-metrics.out 1
#reducetask.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 1
#tasktracker.sink.file.filename=tasktracker-metrics.out 1
#tasktracker.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 1
$HADOOP_BALANCER_OPTS" 1
$HADOOP_DATANODE_OPTS" 1
$HADOOP_HOME/conf/slaves 1
$HADOOP_HOME/logs 1
$HADOOP_JOBTRACKER_OPTS" 1
$HADOOP_NAMENODE_OPTS" 1
$HADOOP_SECONDARYNAMENODE_OPTS" 1
.. (output truncated)

Saving the word count result as output data

Let's look at how to fetch the final word count result from HDFS.

Command for getting files from HDFS
$ hadoop fs -get <result directory (on HDFS)> <user-defined local result directory (on the master node)>

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -get hdfs_wordcount_output /home/hadoop/wordcount_output
hadoop@master:~$ ls -al /home/hadoop/wordcount_output/
total 28
drwxr-xr-x 3 hadoop hadoop 4096 2012-09-05 21:59 .
drwxr-xr-x 9 hadoop hadoop 4096 2012-09-05 21:59 ..
drwxr-xr-x 3 hadoop hadoop 4096 2012-09-05 21:59 _logs
-rw-r--r-- 1 hadoop hadoop 15492 2012-09-05 21:59 part-r-00000
-rw-r--r-- 1 hadoop hadoop 0 2012-09-05 21:59 _SUCCESS

The part-r-00000 file above contains the final word count result.
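A related convenience: when a job runs with more than one reduce task, the output directory holds one part-r-NNNNN file per reducer. If a single local result file is preferable to fetching the whole directory with -get, the fs shell's getmerge option concatenates the part files while copying them. A sketch, where the local file name wordcount_result.txt is only an illustration:

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -getmerge hdfs_wordcount_output /home/hadoop/wordcount_result.txt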
2.2 JAR TeraSort (CPU-Bound Work)

This subsection explains how to store the input data you want to sort with TeraSort in the HDFS provided by the ucloud MapReduce service and then run the MapReduce job.

Storing the input data in HDFS

First, connect to the master instance node over ssh in order to copy the TeraSort input data to the master virtual machine instance.

Jaeui-iMac:~ wja300$ ssh hadoop@x.x.x.x(master IP)

After logging in, create a directory in the home directory to hold the files to be sorted with TeraSort.

hadoop@master:~$ cd ~
hadoop@master:~$ mkdir terasort_input

Copy the input data into the directory created above. (The input data can be fetched from Swift, Amazon S3, or any file server; copy whatever data you want to analyze. In this guide, the files under a particular directory on the master instance node's local file system are used as the input data. That is, as shown below, the files under that directory are treated as the input files and copied into the terasort_input directory created above.)

hadoop@master:~$ cp /mnt/hadoop/conf/* terasort_input/
hadoop@master:~$ ls -al terasort_input/
total 84
drwxr-xr-x 2 hadoop hadoop 4096 2012-09-05 19:06 .
drwxr-xr-x 7 hadoop hadoop 4096 2012-09-05 19:00 ..
-rw-r--r-- 1 hadoop hadoop 7457 2012-09-05 19:06 capacity-scheduler.xml
-rw-r--r-- 1 hadoop hadoop 535 2012-09-05 19:06 configuration.xsl
-rw-r--r-- 1 hadoop hadoop 276 2012-09-05 19:06 core-site.xml
-rw-r--r-- 1 hadoop hadoop 327 2012-09-05 19:06 fair-scheduler.xml
-rw-r--r-- 1 hadoop hadoop 2282 2012-09-05 19:06 hadoop-env.sh
-rw-r--r-- 1 hadoop hadoop 1488 2012-09-05 19:06 hadoop-metrics2.properties
-rw-r--r-- 1 hadoop hadoop 4644 2012-09-05 19:06 hadoop-policy.xml
-rw-r--r-- 1 hadoop hadoop 258 2012-09-05 19:06 hdfs-site.xml
-rw-r--r-- 1 hadoop hadoop 4441 2012-09-05 19:06 log4j.properties
-rw-r--r-- 1 hadoop hadoop 2033 2012-09-05 19:06 mapred-queue-acls.xml
-rw-r--r-- 1 hadoop hadoop 271 2012-09-05 19:06 mapred-site.xml
-rw-r--r-- 1 hadoop hadoop 7 2012-09-05 19:06 masters
-rw-r--r-- 1 hadoop hadoop 14 2012-09-05 19:06 slaves
-rw-r--r-- 1 hadoop hadoop 1243 2012-09-05 19:06 ssl-client.xml.example
-rw-r--r-- 1 hadoop hadoop 1195 2012-09-05 19:06 ssl-server.xml.example
-rw-r--r-- 1 hadoop hadoop 382 2012-09-05 19:06 taskcontroller.cfg

Once the input data has been copied into the terasort_input directory as above, copy it into HDFS with the Hadoop commands below. (The larger the input data, the longer the copy takes, so allow it time to finish.)

Command for putting files into HDFS
$ hadoop fs -put <input data directory or file> <destination directory (on HDFS)>

Command for listing files in HDFS
$ hadoop fs -ls

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -put /home/hadoop/terasort_input/ hdfs_terasort_input
hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -ls
Found 1 item
drwxr-xr-x - hadoop supergroup 0 2012-09-05 21:32 /user/hadoop/hdfs_terasort_input

Running TeraSort on the stored input data
Now that the input data has been copied into the HDFS provided by the ucloud MapReduce service, let's run TeraSort on it. The steps below read the input data directory hdfs_terasort_input, run TeraSort as a MapReduce job, and store the result in the hdfs_terasort_output directory.

Command for running a JAR-based MapReduce program
hadoop@master:~$ hadoop jar <JAR file name> [main class name] [additional arguments...]

hadoop@master:~$ /mnt/hadoop/bin/hadoop jar /mnt/hadoop/hadoop-examples-1.0.3.jar terasort hdfs_terasort_input hdfs_terasort_output
12/09/05 21:39:41 INFO terasort.TeraSort: starting
12/09/05 21:39:41 INFO mapred.FileInputFormat: Total input paths to process : 16
12/09/05 21:39:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/09/05 21:39:41 WARN snappy.LoadSnappy: Snappy native library not loaded
12/09/05 21:39:42 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
12/09/05 21:39:42 INFO compress.CodecPool: Got brand-new compressor
Making 1 from 631 records
Step size is 631.0
12/09/05 21:39:42 INFO mapred.FileInputFormat: Total input paths to process : 16
12/09/05 21:39:42 INFO mapred.JobClient: Running job: job_201208290147_0012
12/09/05 21:39:43 INFO mapred.JobClient: map 0% reduce 0%
12/09/05 21:40:01 INFO mapred.JobClient: map 25% reduce 0%
12/09/05 21:40:10 INFO mapred.JobClient: map 50% reduce 0%
12/09/05 21:40:13 INFO mapred.JobClient: map 50% reduce 16%
12/09/05 21:40:19 INFO mapred.JobClient: map 75% reduce 16%
12/09/05 21:40:28 INFO mapred.JobClient: map 100% reduce 25%
12/09/05 21:40:37 INFO mapred.JobClient: map 100% reduce 100%
12/09/05 21:40:42 INFO mapred.JobClient: Job complete: job_201208290147_0012
12/09/05 21:40:42 INFO mapred.JobClient: Counters: 31
12/09/05 21:40:42 INFO mapred.JobClient: Job Counters
12/09/05 21:40:42 INFO mapred.JobClient: Launched reduce tasks=1
12/09/05 21:40:42 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=123530
12/09/05 21:40:42 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/09/05 21:40:42 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/09/05 21:40:42 INFO mapred.JobClient: Rack-local map tasks=2
12/09/05 21:40:42 INFO mapred.JobClient: Launched map tasks=16
12/09/05 21:40:42 INFO mapred.JobClient: Data-local map tasks=14
12/09/05 21:40:42 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=35926
12/09/05 21:40:42 INFO mapred.JobClient: File Input Format Counters
12/09/05 21:40:42 INFO mapred.JobClient: Bytes Read=26853
12/09/05 21:40:42 INFO mapred.JobClient: File Output Format Counters
12/09/05 21:40:42 INFO mapred.JobClient: Bytes Written=27614
12/09/05 21:40:42 INFO mapred.JobClient: FileSystemCounters
12/09/05 21:40:42 INFO mapred.JobClient: FILE_BYTES_READ=31208
12/09/05 21:40:42 INFO mapred.JobClient: HDFS_BYTES_READ=28783
12/09/05 21:40:42 INFO mapred.JobClient: FILE_BYTES_WRITTEN=437410
12/09/05 21:40:42 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=27614
12/09/05 21:40:42 INFO mapred.JobClient: Map-Reduce Framework
12/09/05 21:40:42 INFO mapred.JobClient: Map output materialized bytes=29234
12/09/05 21:40:42 INFO mapred.JobClient: Map input records=761
12/09/05 21:40:42 INFO mapred.JobClient: Reduce shuffle bytes=29234
12/09/05 21:40:42 INFO mapred.JobClient: Spilled Records=1522
12/09/05 21:40:42 INFO mapred.JobClient: Map output bytes=27615
12/09/05 21:40:42 INFO mapred.JobClient: Total committed heap usage (bytes)=2580676608
12/09/05 21:40:42 INFO mapred.JobClient: CPU time spent (ms)=6230
12/09/05 21:40:42 INFO mapred.JobClient: Map input bytes=26853
12/09/05 21:40:42 INFO mapred.JobClient: SPLIT_RAW_BYTES=1930
12/09/05 21:40:42 INFO mapred.JobClient: Combine input records=0
12/09/05 21:40:42 INFO mapred.JobClient: Reduce input records=761
12/09/05 21:40:42 INFO mapred.JobClient: Reduce input groups=212
12/09/05 21:40:42 INFO mapred.JobClient: Combine output records=0
12/09/05 21:40:42 INFO mapred.JobClient: Physical memory (bytes) snapshot=2810167296
12/09/05 21:40:42 INFO mapred.JobClient: Reduce output records=761
12/09/05 21:40:42 INFO mapred.JobClient: Virtual memory (bytes) snapshot=10108071936
12/09/05 21:40:42 INFO mapred.JobClient: Map output records=761
12/09/05 21:40:42 INFO terasort.TeraSort: done

The MapReduce job has run successfully. Now let's check the TeraSort result.

Checking the TeraSort result

The contents of hdfs_terasort_output, the HDFS directory in which the MapReduce result was stored, are as follows.

Command for printing the contents of a file in HDFS
$ hadoop fs -cat <file to read (on HDFS)>

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -cat hdfs_terasort_output/*
(blank lines appear first, because output lines are ordered by ascending ASCII value)
Default value of -1 implies a queue can use complete capacity of the cluster.
One important thing to note is that maximum-capacity is a percentage, so based on the cluster's capacity
This property could be to curtail certain jobs which are long running in nature from occupying more than a
absolute terms would increase accordingly.
certain percentage of the cluster, which in the absence of pre-emption, could lead to capacity guarantees of
other queues being affected.
the max capacity would change. So if large no of nodes or racks get added to the cluster, max Capacity in
account in scheduling decisions.
account in scheduling decisions by default in a job queue.
for the job queue at any given point of time by default.
to be available for jobs in this queue.
concurrently, by the CapacityScheduler.
<description>the amount of time in miliseconds which is used to poll
<description> Each queue enforces a limit on the percentage of resources
<description>the default maximum number of tasks, across all jobs in the
<description>the maximum number of tasks per-user, across all the of the
<description>the default multiple of queue-capacity which is used to
<description>the default multipe of (maximum-system-jobs * queue-capacity)
<description>the multipe of (maximum-system-jobs * queue-capacity) used to
<description>percentage of the number of slots in the cluster that are
<description>number of worker threads which would be used by
<description>the multiple of the queue capacity which can be configured to
<description>maximum number of jobs in the system which can be initialized,
<description>if true, priorities of jobs will be taken into
<description>the percentage of the resources limited to a particular user
<description>the maximum number of tasks, across all jobs in the queue,
<description>if true, priorities of jobs will be taken into
<description>
<description>the default maximum number of tasks per-user, across all the of
<description>acl for InterTrackerProtocol, used by the tasktrackers to
<description>acl for ClientProtocol, which is used by user code
<description>acl for AdminOperationsProtocol, used by the mradmins commands
<description>acl for RefreshAuthorizationPolicyProtocol, used by the
<description>acl for ClientDatanodeProtocol, the client-to-datanode protocol
.. (output truncated)
Saving the TeraSort result as output data

Let's look at how to fetch the final TeraSort result from HDFS.

Command for getting files from HDFS
$ hadoop fs -get <result directory (on HDFS)> <user-defined local result directory (on the master node)>

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -get hdfs_terasort_output /home/hadoop/terasort_output
hadoop@master:~$ ls -al /home/hadoop/terasort_output/
total 28
drwxr-xr-x 3 hadoop hadoop 4096 2012-09-05 21:59 .
drwxr-xr-x 9 hadoop hadoop 4096 2012-09-05 21:59 ..
drwxr-xr-x 3 hadoop hadoop 4096 2012-09-05 21:59 _logs
-rw-r--r-- 1 hadoop hadoop 15492 2012-09-05 21:59 part-r-00000
-rw-r--r-- 1 hadoop hadoop 0 2012-09-05 21:59 _SUCCESS

The part-r-00000 file above contains the final TeraSort result.
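Note that the configuration files used above are only a small, convenient demonstration input. TeraSort is normally benchmarked against synthetic 100-byte records produced by the teragen tool, which lives in the same examples JAR as terasort. Assuming the same paths as above, input in the standard TeraSort format could be generated roughly as follows (the row count of 100000, about 10 MB, and the directory name hdfs_teragen_input are only illustrations) and then passed to the terasort step in place of hdfs_terasort_input:

hadoop@master:~$ /mnt/hadoop/bin/hadoop jar /mnt/hadoop/hadoop-examples-1.0.3.jar teragen 100000 hdfs_teragen_input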
3. Streaming-Based Hadoop Execution Guide

3.1 Streaming WordCount (Ruby)

This subsection takes the word count example from the previous section, stores its input data in the HDFS provided by the ucloud MapReduce service, and runs it as a Streaming MapReduce job.

Streaming-based execution differs from the JAR-based execution covered in the previous section: instead of implementing the whole MapReduce job in Java, the user writes a Map executable and a Reduce executable, and the streaming utility plugs them into the middle of the MapReduce pipeline. These Map and Reduce executables can be written in a variety of programming languages, for example Python or Ruby. In this guide, the Map executable and the Reduce executable for word count are written in Ruby and then executed.

Storing the input data in HDFS

First, connect to the master instance node over ssh in order to copy the input data to be word-counted to the master virtual machine instance.

Jaeui-iMac:~ wja300$ ssh hadoop@x.x.x.x(master IP)
After logging in, create a directory in the home directory to hold the files to be word-counted.

hadoop@master:~$ cd ~
hadoop@master:~$ mkdir wordcount_input_streaming

Copy the input data into the directory created above. (The input data can be fetched from Swift, Amazon S3, or any file server; copy whatever data you want to analyze. In this guide, the files under a particular directory on the master instance node's local file system are used as the input data. That is, as shown below, the files under that directory are treated as the input files and copied into the wordcount_input_streaming directory created above.)

hadoop@master:~$ cp /mnt/hadoop/conf/* wordcount_input_streaming
hadoop@master:~$ ls -al wordcount_input_streaming
total 84
drwxr-xr-x 2 hadoop hadoop 4096 2012-09-05 19:06 .
drwxr-xr-x 7 hadoop hadoop 4096 2012-09-05 19:00 ..
-rw-r--r-- 1 hadoop hadoop 7457 2012-09-05 19:06 capacity-scheduler.xml
-rw-r--r-- 1 hadoop hadoop 535 2012-09-05 19:06 configuration.xsl
-rw-r--r-- 1 hadoop hadoop 276 2012-09-05 19:06 core-site.xml
-rw-r--r-- 1 hadoop hadoop 327 2012-09-05 19:06 fair-scheduler.xml
-rw-r--r-- 1 hadoop hadoop 2282 2012-09-05 19:06 hadoop-env.sh
-rw-r--r-- 1 hadoop hadoop 1488 2012-09-05 19:06 hadoop-metrics2.properties
-rw-r--r-- 1 hadoop hadoop 4644 2012-09-05 19:06 hadoop-policy.xml
-rw-r--r-- 1 hadoop hadoop 258 2012-09-05 19:06 hdfs-site.xml
-rw-r--r-- 1 hadoop hadoop 4441 2012-09-05 19:06 log4j.properties
-rw-r--r-- 1 hadoop hadoop 2033 2012-09-05 19:06 mapred-queue-acls.xml
-rw-r--r-- 1 hadoop hadoop 271 2012-09-05 19:06 mapred-site.xml
-rw-r--r-- 1 hadoop hadoop 7 2012-09-05 19:06 masters
-rw-r--r-- 1 hadoop hadoop 14 2012-09-05 19:06 slaves
-rw-r--r-- 1 hadoop hadoop 1243 2012-09-05 19:06 ssl-client.xml.example
-rw-r--r-- 1 hadoop hadoop 1195 2012-09-05 19:06 ssl-server.xml.example
-rw-r--r-- 1 hadoop hadoop 382 2012-09-05 19:06 taskcontroller.cfg

Once the input data has been copied into the wordcount_input_streaming directory as above, copy it into HDFS with the Hadoop commands below. (The larger the input data, the longer the copy takes, so allow it time to finish.)

Command for putting files into HDFS
$ hadoop fs -put <input data directory or file> <destination directory (on HDFS)>

Command for listing files in HDFS
$ hadoop fs -ls
hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -put /home/hadoop/wordcount_input_streaming/ hdfs_wordcount_input_streaming
hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -ls
Found 1 item
drwxr-xr-x - hadoop supergroup 0 2012-09-05 22:39 /user/hadoop/hdfs_wordcount_input_streaming

Running Streaming word count on the stored input data

Now that the input data has been copied into the HDFS provided by the ucloud MapReduce service, let's run word count on it. Before actually running the MapReduce job, write the Map executable and the Reduce executable described above in Ruby.

map.rb (Map executable for word count)
#!/usr/bin/env ruby
STDIN.each_line do |line|
  line.split.each do |word|
    puts "#{word}\t1"
  end
end

reduce.rb (Reduce executable for word count)
#!/usr/bin/env ruby
wordhash = {}
STDIN.each_line do |line|
  word, count = line.strip.split
  if wordhash.has_key?(word)
    wordhash[word] += count.to_i
  else
    wordhash[word] = count.to_i
  end
end
wordhash.each { |record, count| puts "#{record}\t#{count}" }

Create these two Ruby executables in a working directory on the master instance node and give them execute permission.

hadoop@master:~$ cd ~
hadoop@master:~$ mkdir ruby_streaming_wordcount
hadoop@master:~$ cd ruby_streaming_wordcount/
hadoop@master:~/ruby_streaming_wordcount$ vi map.rb      (write the map.rb program shown above)
hadoop@master:~/ruby_streaming_wordcount$ vi reduce.rb   (write the reduce.rb program shown above)
hadoop@master:~/ruby_streaming_wordcount$ chmod +x map.rb
hadoop@master:~/ruby_streaming_wordcount$ chmod +x reduce.rb
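Because streaming executables simply read lines from standard input and write lines to standard output, they can be sanity-checked on the master node with an ordinary shell pipeline before a job is submitted; the sort step stands in for the shuffle phase. The sample text here is only an illustration, and the order of the two result lines may vary with the Ruby version's hash ordering:

hadoop@master:~/ruby_streaming_wordcount$ echo "red blue red" | ./map.rb | sort | ./reduce.rb
blue	1
red	2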
The steps below read the input data directory hdfs_wordcount_input_streaming, run word count as a MapReduce job, and store the result in the hdfs_wordcount_output_streaming directory.

Command for running a Streaming-based MapReduce program
hadoop@master:~$ hadoop jar </mnt/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar> -file <path and file name of the Map executable> -mapper <file name of the Map executable> -file <path and file name of the Reduce executable> -reducer <file name of the Reduce executable> -input <input data directory (on HDFS)> -output <output data directory (on HDFS)>

hadoop@master:~$ /mnt/hadoop/bin/hadoop jar /mnt/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -file /home/hadoop/ruby_streaming_wordcount/map.rb -mapper map.rb -file /home/hadoop/ruby_streaming_wordcount/reduce.rb -reducer reduce.rb -input hdfs_wordcount_input_streaming -output hdfs_wordcount_output_streaming
packageJobJar: [/home/hadoop/ruby_streaming_wordcount/map.rb, /home/hadoop/ruby_streaming_wordcount/reduce.rb, /tmp/hadoop-hadoop/hadoop-unjar5234354400540147174/] [] /tmp/streamjob6015261705791082675.jar tmpDir=null
12/09/05 22:59:58 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/09/05 22:59:58 WARN snappy.LoadSnappy: Snappy native library not loaded
12/09/05 22:59:58 INFO mapred.FileInputFormat: Total input paths to process : 16
12/09/05 22:59:58 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-hadoop/mapred/local]
12/09/05 22:59:58 INFO streaming.StreamJob: Running job: job_201208290147_0014
12/09/05 22:59:58 INFO streaming.StreamJob: To kill this job, run:
12/09/05 22:59:58 INFO streaming.StreamJob: /mnt/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=master:9001 -kill job_201208290147_0014
12/09/05 22:59:58 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201208290147_0014
12/09/05 22:59:59 INFO streaming.StreamJob: map 0% reduce 0%
12/09/05 23:00:17 INFO streaming.StreamJob: map 25% reduce 0%
12/09/05 23:00:26 INFO streaming.StreamJob: map 50% reduce 0%
12/09/05 23:00:29 INFO streaming.StreamJob: map 50% reduce 17%
12/09/05 23:00:35 INFO streaming.StreamJob: map 75% reduce 17%
12/09/05 23:00:44 INFO streaming.StreamJob: map 100% reduce 17%
12/09/05 23:00:47 INFO streaming.StreamJob: map 100% reduce 25%
12/09/05 23:00:56 INFO streaming.StreamJob: map 100% reduce 100%
12/09/05 23:01:02 INFO streaming.StreamJob: Job complete: job_201208290147_0014
12/09/05 23:01:02 INFO streaming.StreamJob: Output: hdfs_wordcount_output_streaming

The MapReduce job has run successfully.
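The run above used the framework's default of a single reduce task, which is why the result checked below consists of one part file. If the reduce phase needs to be spread over more tasks, the number of reducers can be raised with a generic -D option, which must come before the streaming-specific options; the value 4 and the output directory name here are only illustrations:

hadoop@master:~$ /mnt/hadoop/bin/hadoop jar /mnt/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar \
  -D mapred.reduce.tasks=4 \
  -file /home/hadoop/ruby_streaming_wordcount/map.rb -mapper map.rb \
  -file /home/hadoop/ruby_streaming_wordcount/reduce.rb -reducer reduce.rb \
  -input hdfs_wordcount_input_streaming -output hdfs_wordcount_output_streaming_4r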
Now let's check the Streaming word count result.

Checking the Streaming word count result

The contents of hdfs_wordcount_output_streaming, the HDFS directory in which the MapReduce result was stored, are as follows.

Command for printing the contents of a file in HDFS
$ hadoop fs -cat <file to read (on HDFS)>

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -cat hdfs_wordcount_output_streaming/*
configure 1
(maximum-system-jobs 2
refresh 2
When 1
value. 2
includes 1
mapred.capacity-scheduler.queue.<queue-name>.property-name. 1
#Default 1
<description>must 2
#namenode.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 1
log4j.appender.drfa.layout=org.apache.log4j.patternlayout 1
version="1.0"> 1
related 1
list 27
<value>5000</value> 1
%c{2}: 2
So 1
<value>3000</value> 1
submission, 1
"*" 10
log4j.appender.tla.layout.conversionpattern=%d{iso8601} 1
default 9
where 2
<name>ssl.client.truststore.location</name> 1
policy 1
<name>mapred.capacity-scheduler.default-supports-priority</name> 1
</table> 1
log4j.appender.tla.taskid=${hadoop.tasklog.taskid} 1
stored. 2
determine 3
allocations 1
task 1
this 19
<name>security.job.submission.protocol.acl</name> 1
nodes 2
TaskLog 1
Where 1
former 1
Sends 1
time 3
Each 1
<name>security.task.umbilical.protocol.acl</name> 1
<name>mapred.job.tracker</name> 1
configured 3
match="configuration"> 1
initialize 2
.. (output truncated)

Saving the Streaming word count result as output data

Let's look at how to fetch the final Streaming word count result from HDFS.

Command for getting files from HDFS
$ hadoop fs -get <result directory (on HDFS)> <user-defined local result directory (on the master node)>

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -get hdfs_wordcount_output_streaming /home/hadoop/wordcount_output_streaming
hadoop@master:~$ ls -al /home/hadoop/wordcount_output_streaming
total 28
drwxr-xr-x 3 hadoop hadoop 4096 2012-09-05 23:12 .
drwxr-xr-x 13 hadoop hadoop 4096 2012-09-05 23:12 ..
drwxr-xr-x 3 hadoop hadoop 4096 2012-09-05 23:12 _logs
-rw-r--r-- 1 hadoop hadoop 15492 2012-09-05 23:12 part-00000
-rw-r--r-- 1 hadoop hadoop 0 2012-09-05 23:12 _SUCCESS

The part-00000 file above contains the final Streaming word count result.
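One practical note when re-running any of the examples in this manual: a MapReduce job refuses to start if its output directory already exists in HDFS, so remove (or rename) the previous output first. A sketch using the recursive-remove option of the Hadoop 1.x fs shell (later Hadoop versions spell it -rm -r), with the streaming output directory from this section as the example target:

hadoop@master:~$ /mnt/hadoop/bin/hadoop fs -rmr hdfs_wordcount_output_streaming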