Steps to Install Hadoop on Linux
Author: Anonymous
Source: Linux System
Published: 2012-03-30
Set the Hadoop environment variables (the install prefix /home/hadoop/HadoopInstall is used throughout):

export HADOOP_HOME=/home/hadoop/HadoopInstall/hadoop
export HADOOP_CONF_DIR=/home/hadoop/hadoop-conf
export PATH=$HADOOP_HOME/bin:$PATH
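These settings only last for the current shell; they are usually made persistent by appending them to the hadoop user's shell profile. A minimal sketch follows (it writes to a /tmp file purely for demonstration; on a real node you would append to ~/.bashrc instead):

```shell
# Sketch: persist the Hadoop environment variables across logins.
# A /tmp file stands in for ~/.bashrc so this snippet can run anywhere
# without touching your real profile.
BASHRC=/tmp/demo-bashrc
cat > $BASHRC <<'EOF'
export HADOOP_HOME=/home/hadoop/HadoopInstall/hadoop
export HADOOP_CONF_DIR=/home/hadoop/hadoop-conf
export PATH=$HADOOP_HOME/bin:$PATH
EOF

# Load the settings and confirm the hadoop bin directory is on PATH.
. $BASHRC
echo $PATH | grep -q "$HADOOP_HOME/bin" && echo "env OK"
```

After sourcing, the `hadoop` launcher resolves from PATH in every new login shell.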
Configure the namenode:

#vi $HADOOP_CONF_DIR/slaves
192.168.13.108
192.168.13.110

#vi $HADOOP_CONF_DIR/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.13.100:9000</value>
  </property>
</configuration>

#vi $HADOOP_CONF_DIR/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>Default block replication. The actual number of replications
    can be specified when the file is created. The default is used if
    replication is not specified at create time.</description>
  </property>
</configuration>

#vi $HADOOP_CONF_DIR/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.13.100:11000</value>
  </property>
</configuration>

On each slave the configuration is as follows (hdfs-site.xml does not need to be configured):

[root@test12 conf]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:9000</value>
  </property>
</configuration>

[root@test12 conf]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file.
-->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>namenode:11000</value>
  </property>
</configuration>

Start the cluster:

export PATH=$HADOOP_HOME/bin:$PATH
hadoop namenode -format
start-all.sh

Stop the cluster:

stop-all.sh

Create a danchentest directory on HDFS and upload a file to it:

$HADOOP_HOME/bin/hadoop fs -mkdir danchentest
$HADOOP_HOME/bin/hadoop fs -put $HADOOP_HOME/README.txt danchentest

Run the wordcount example against the uploaded file:

cd $HADOOP_HOME
hadoop jar hadoop-0.20.1-examples.jar wordcount /user/hadoop/danchentest/README.txt output1
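The wordcount job counts how many times each word appears in the input. What it computes can be sketched in plain shell on a tiny made-up sample; the MapReduce job does the same thing, distributed across the cluster (its HDFS output is then typically read back with something like `hadoop fs -cat output1/part-r-00000`):

```shell
# Plain-shell sketch of what the wordcount job computes:
# split the input into one word per line, then count occurrences.
# The two-line sample text here is made up for illustration.
printf 'hello world\nhello hadoop\n' | tr ' ' '\n' | sort | uniq -c
# output: hadoop appears 1 time, hello 2 times, world 1 time
```

This is only a single-machine illustration; the point of running it through Hadoop is that the same count scales to inputs far larger than one machine can hold.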