Installing standalone HBase reports an error that looks like a ZooKeeper HA problem; does a standalone setup even need ZooKeeper HA?

Hadoop 2 HA installation (high availability): JournalNode + ZooKeeper

NFS plus ZooKeeper can be used to remove the NameNode single point of failure, but NFS itself can become a single point of failure. Hadoop therefore provides a mechanism called JournalNode, which shares the edit log across a group of JournalNode processes. This article covers that approach: JournalNode + ZooKeeper.
Hadoop version: 2.2.0
OS version: CentOS 6.4
JDK version: jdk1.6.0_32
Node roles:
192.168.124.135: NameNode, DataNode, ResourceManager, NodeManager, JournalNode
192.168.124.136: DataNode, NodeManager, JournalNode
192.168.124.137: DataNode, NodeManager, JournalNode
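The configuration below refers to the nodes only by the hostnames hadoop1, hadoop2 and hadoop3, so each machine needs a hostname-to-IP mapping. A minimal /etc/hosts sketch, assuming the three IPs above correspond to those hostnames in that order (the original does not spell the mapping out):

127.0.0.1        localhost
192.168.124.135  hadoop1
192.168.124.136  hadoop2
192.168.124.137  hadoop3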
ZooKeeper installation is not covered here; we reuse the ZooKeeper instances installed in the earlier NFS + ZooKeeper setup.
JournalNode needs no separate installation either; it is enabled purely through configuration.
So we go straight to configuring Hadoop.
vi etc/hadoop/hadoop-env.sh and set the JDK location:
export JAVA_HOME=/home/hadoop/jdk1.6.0_32

vi etc/hadoop/mapred-env.sh and set the JDK location:
JAVA_HOME=/home/hadoop/jdk1.6.0_32

vi etc/hadoop/yarn-env.sh and set the JDK location:
JAVA_HOME=/home/hadoop/jdk1.6.0_32
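A quick sanity check that all three env files point at the same JDK (a convenience command, not part of the original steps):

grep JAVA_HOME etc/hadoop/hadoop-env.sh etc/hadoop/mapred-env.sh etc/hadoop/yarn-env.sh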
vi etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/repo3/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/repo3/journal</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
</configuration>
vi etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- value not shown in the source; set the replication factor for your cluster, e.g. 2 -->
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/repo3/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/repo3/data</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>hadoop1,hadoop2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.hadoop1</name>
    <value>hadoop1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.hadoop1</name>
    <value>hadoop1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.hadoop2</name>
    <value>hadoop2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.hadoop2</name>
    <value>hadoop2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
vi etc/hadoop/yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>the valid service name</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop1</value>
    <description>The hostname of the RM.</description>
  </property>
</configuration>
vi etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
vi etc/hadoop/slaves and list the DataNode/NodeManager hosts:

hadoop1
hadoop2
hadoop3
Running Hadoop

Start ZooKeeper on hadoop1, hadoop2 and hadoop3:
cd /home/hadoop/zookeeper-3.4.5/bin
./zkServer.sh start
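Before going further it is worth confirming that each ZooKeeper instance is actually up. A simple check, run on each of the three nodes (a convenience step, not part of the original instructions):

/home/hadoop/zookeeper-3.4.5/bin/zkServer.sh status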
Format the NameNode and the failover controller

The failover controller state in ZooKeeper also has to be formatted:
bin/hdfs zkfc -formatZK

On hadoop1 run:
bin/hdfs namenode -format -clusterid mycluster

The NameNode metadata on hadoop2 must match what hadoop1 just created, and that cannot be achieved by simply formatting again: the NameNode on hadoop2 has to request the data from the NameNode on hadoop1. So the NameNode on hadoop1 must be started first.

On hadoop1 run:
bin/hdfs namenode

On hadoop2 run:
bin/hdfs namenode -bootstrapStandby

Finally stop the NameNode on hadoop1, then start the whole Hadoop cluster.
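Put together, the bootstrap sequence looks roughly like this. It is a sketch of the steps above, run from /home/hadoop/hadoop-2.2.0; the JournalNode step is an assumption added here, since with a qjournal shared edits directory the JournalNodes generally have to be reachable before the NameNode can be formatted:

# on hadoop1, hadoop2 and hadoop3: make sure the JournalNodes are running (assumed step)
sbin/hadoop-daemon.sh start journalnode

# on hadoop1
bin/hdfs zkfc -formatZK
bin/hdfs namenode -format -clusterid mycluster
bin/hdfs namenode                  # leave this running in the foreground

# on hadoop2, in a second terminal
bin/hdfs namenode -bootstrapStandby

# back on hadoop1: stop the foreground NameNode (Ctrl-C), then start the whole cluster
sbin/start-all.sh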
Start the Hadoop cluster:
cd /home/hadoop/hadoop-2.2.0
sbin/start-all.sh

As the startup output shows, the NameNodes are started first, then the DataNodes, then the JournalNodes, then the ZK failover controllers, then the ResourceManager, and finally the NodeManagers.
(screenshot: start-all.sh startup output)
Check the started processes with jps.
Run jps on hadoop1:
(screenshot: jps output on hadoop1)
Run jps on hadoop2:
(screenshot: jps output on hadoop2)
Run jps on hadoop3:
(screenshot: jps output on hadoop3)
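Since the screenshots are not reproduced here, this is roughly what jps should report on each node. It is an expectation derived from the role table and the HA configuration above (hadoop2 runs the standby NameNode and a failover controller), not captured output:

hadoop1: NameNode, DataNode, JournalNode, DFSZKFailoverController, ResourceManager, NodeManager, QuorumPeerMain
hadoop2: NameNode, DataNode, JournalNode, DFSZKFailoverController, NodeManager, QuorumPeerMain
hadoop3: DataNode, JournalNode, NodeManager, QuorumPeerMain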
Check the NameNode states:
bin/hdfs haadmin -getServiceState hadoop1
(screenshot: haadmin output for hadoop1)
bin/hdfs haadmin -getServiceState hadoop2
(screenshot: haadmin output for hadoop2)
The output shows that the NameNode on hadoop2 is active while the NameNode on hadoop1 is in standby.
The same information is available from Hadoop's web UI.
Open http://hadoop1:50070 in a browser:
(screenshot: NameNode web UI on hadoop1)
Open http://hadoop2:50070 in a browser:
(screenshot: NameNode web UI on hadoop2)
Reposted from: /easycloud/p/3724908.html
HBase startup error: java.net.BindException: Address already in use

master: java.net.BindException: Address already in use
master:         at sun.nio.ch.Net.bind(Native Method)
master:         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
master:         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
master:         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
master:         at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:111)
master:         at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:130)
master:         at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.runZKServer(HQuorumPeer.java:73)
master:         at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:63)
The error means ZooKeeper is already running.

export HBASE_MANAGES_ZK=true

This setting (in conf/hbase-env.sh) tells HBase to start its own ZooKeeper before HBase itself starts. There are two ways to resolve the conflict:
1. Kill all running ZooKeeper processes before starting HBase and let HBase start ZooKeeper itself.
2. Set HBASE_MANAGES_ZK to false and start ZooKeeper manually before starting HBase (see the sketch below).
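A sketch of option 2, assuming ZooKeeper lives at the path used earlier in this post and that the commands are run from the HBase install directory (adjust paths to your layout):

# in conf/hbase-env.sh: tell HBase not to manage ZooKeeper itself
export HBASE_MANAGES_ZK=false

# start ZooKeeper by hand, then start HBase
/home/hadoop/zookeeper-3.4.5/bin/zkServer.sh start
bin/start-hbase.sh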
HBase standalone installation error (from the ITeye Hadoop group)
Running "bin/hbase shell" works fine, but executing "create 'test', 'cf'" then produces the following error:

21:39:41,261 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
21:39:41,279 INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog: Snapshotting: 0x0 to /tmp/hbase-hadoopor/zookeeper/zookeeper_0/version-2/snapshot.0
21:39:41,407 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:34223
21:39:41,412 INFO org.apache.zookeeper.server.NIOServerCnxn: Processing stat command from /127.0.0.1:34223
21:39:41,427 INFO org.apache.zookeeper.server.NIOServerCnxn: Stat command output
21:39:41,428 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:34223 (no session established for client)
21:39:41,429 INFO org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster: Started MiniZK Cluster and connect 1 ZK server on client port: 2181
21:39:41,548 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
21:39:42,422 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMasterCannot assign requested address
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:134)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:198)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:148)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:140)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1754)
Caused by: java.net.BindException: Problem binding to /221.238.203.46:0 : Cannot assign requested address
at org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:242)
at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:456)
at org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:1505)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:296)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:245)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:55)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:401)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:390)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:251)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:215)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:131)
... 7 more
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:240)
... 21 more
HBase's standalone mode needs to run on 127.0.0.1. Check /etc/hosts and see whether the IP your hostname resolves to is actually 127.0.0.1; Linux distributions such as Ubuntu usually map the hostname to 127.0.1.1.
Try this: in hbase-site.xml, set the "hbase.zookeeper.quorum" property to "127.0.0.1".
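A sketch of what that suggestion looks like in conf/hbase-site.xml (only this one property is shown; the rest of the file stays as it is):

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>127.0.0.1</value>
  </property>
</configuration>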
HMasterCommandLine is the class that loads the configuration at startup, so check your configuration files.
Hi, did you ever solve this? What was the fix?
I won't bother analysing the possible causes and will just give the answer: delete everything in Ubuntu's /etc/hosts file, then add a single line by hand:
192.168.10.48 ubuntu
Restart Ubuntu and HBase, and the problem is solved.
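A quick way to confirm the fix is to restart HBase and retry the command that originally failed (run from the HBase install directory):

bin/stop-hbase.sh
bin/start-hbase.sh
bin/hbase shell
# inside the shell, retry:
#   create 'test', 'cf'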
