Hibernate one-to-many data fetching (newbie question) - ITeye Q&A
private void getRoleModelList(){
    user = this.userDAO.getUser();                      // fetch the current user
    modelList = new ArrayList();
    Set<UserRole> userroles = user.getUserRoles();      // the user's role list
    Object[] userrole = userroles.toArray();
    Model model = new Model();
    for (int i = 0; i < userrole.length; i++) {         // loop over the user's roles
        Set<RoleModel> rolemodels = ((UserRole) userrole[i]).getRole().getRoleModels(); // modules of this role
        Object[] rolemodel = rolemodels.toArray();
        for (int j = 0; j < rolemodel.length; j++) {
            // look the module up again by the module id taken from the role-module link
            model = modelDAO.findByid(((RoleModel) rolemodel[j]).getModel().getModelId().toString());
            modelList.add(model);                        // add it to the module list
        }
    }
    this.setSession("modelList", modelList);
}
What I'd like to ask: what's wrong with the method above? When it gets to the step that looks the module up by the module id from the role-module link, it runs a few iterations and then just hangs. No error is thrown either. Has anyone run into this? Follow-up: anyone around? Follow-up: the problem is solved.
I rewrote the method:
private void getRoleModelList(){
    user = this.userDAO.getUser();
    modelList = new ArrayList();
    for (Iterator iter = user.getUserRoles().iterator(); iter.hasNext();) {
        UserRole userrole = (UserRole) iter.next();
        for (Iterator iterA = userrole.getRole().getRoleModels().iterator(); iterA.hasNext();) {
            // take the Model straight off the association instead of re-querying by id
            modelList.add(((RoleModel) iterA.next()).getModel());
        }
    }
    this.setSession("modelList", modelList);
}
Thanks to everyone for the helpful answers.
modelDAO.findByid(((RoleModel)rolemodel[j]).getModel().getModelId().toString());
What is the parameter type of modelDAO.findByid()?
And does it handle the parameter being null?
It does. Could the .getModelId().toString() part be what's wrong? Why call toString() there at all? And what type does getModelId() return? If it is an int, then appending toString() is wrong.
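For what it's worth, a small sketch of the guard being discussed here. RoleModel and Model are the classes from the posted code; the ModelDAO type name, getModelId() returning an Integer, and findByid accepting the id in its declared type are all assumptions, since the DAO itself isn't shown.

private Model lookupModel(ModelDAO modelDAO, RoleModel roleModel) {
    if (roleModel == null || roleModel.getModel() == null) {
        return null;                                     // nothing to look up
    }
    Integer modelId = roleModel.getModel().getModelId(); // assumed Integer primary key
    if (modelId == null) {
        return null;                                     // don't hand a null id to the DAO
    }
    // pass the id in its declared type rather than converting it to a String
    return modelDAO.findByid(modelId);
}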
After how many iterations does it stop? Is it the same number every time you try? Could the database connections be running out, or could something be blocking or deadlocked somewhere?
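On the original question, a related point: the first version issued one extra modelDAO lookup per role-module inside the loops, which is exactly where an exhausted connection pool or an uninitialized lazy collection would make it stall. A minimal sketch of fetching everything in one query instead, assuming a plain Hibernate Session, the entity and property names from the code above (User.userRoles, UserRole.role, Role.roleModels, RoleModel.model), and an identifier property called id (all of these are assumptions):

import java.util.List;
import org.hibernate.Session;

public class RoleModelQuery {
    // Loads every Model reachable from the user's roles in a single HQL query,
    // so there are no per-row DAO lookups and no lazy collections touched outside a session.
    @SuppressWarnings("unchecked")
    public List<Model> findModelsForUser(Session session, Integer userId) {
        return session.createQuery(
                "select distinct rm.model "
              + "from User u "
              + "join u.userRoles ur "
              + "join ur.role r "
              + "join r.roleModels rm "
              + "where u.id = :userId")
              .setParameter("userId", userId)
              .list();
    }
}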
[Driving tips]
As the title says: is it only when a traffic officer catches you on the spot that points get deducted?
I just saw a report that illegal parking on 48 key roads in Nanjing now gets a 100-yuan fine plus 3 points.
How is that supposed to be enforced? Whose 3 points get deducted?
Whoever's three points, it doesn't matter; the key thing is that the fine gets paid.
Old news. From the 10th, illegal parking on 48 key roads in Nanjing gets a yellow ticket: a 100-yuan fine and 3 points (reported by Sina Jiangsu news).
When you go in with the yellow ticket, don't you have to bring your driving licence anyway?
For camera-captured violations it depends on the code; some deduct points, some don't.
ltklove00 wrote:
When you go in with the yellow ticket, don't you have to bring your driving licence anyway?
For camera-captured violations it depends on the code; some deduct points, some don't ...
Wouldn't bringing someone else's licence do just as well? I'd just say I lent the car to a friend.
justcode wrote:
Wouldn't bringing someone else's licence do just as well? I'd just say I lent the car to a friend.
Of course you can; any licence will do to take the points. Just make sure it's a class C licence, otherwise the holder has to sit a study session at the annual review.
Is the 6-point penalty for running a red light only applied when the police catch you in the act, or does a camera capture count too? A taxi driver told me it's only 6 points when two officers catch you on the spot; no idea whether that's true.
kevin21 wrote:
Is the 6-point penalty for running a red light only applied when the police catch you in the act, or does a camera capture count too? A taxi driver told me it's only 6 points ...
At the moment it's a 200-yuan fine and 6 points if you're caught on the spot; an electronic capture is 200 yuan with no points.
Points for illegal parking don't require the driver to be present. If the driver were on the scene, would the police even get the chance to stick a yellow ticket on?
The yellow ticket does carry a 3-point deduction.
You need to take it to the N brigade to process it; when you do, you present a driving licence and so on, and the points go against that person.
Then you take the fine slip to the bank and pay.
justcode wrote:
Wouldn't bringing someone else's licence do just as well? I'd just say I lent the car to a friend.
Taking someone else's licence doesn't work; the licence holder has to be there in person.
Which is why there's a kind of licence people call the F licence (the "points licence").
Newbie question: the namenode will not start when setting up Hadoop
I previously set up a Hadoop 1.2.1 cluster with no problems at all. I then switched to Hadoop 2.6.0; after unpacking it and editing the configuration files I ran -format.
Afterwards I noticed a shutdown message. jps showed the namenode was not running; when I started HDFS, the SecondaryNameNode would come up but the namenode would not.
Opening port 50070 in the browser did not work either.
The -format output is below. I spent close to five hours on this yesterday and still could not find the problem; I would really appreciate it if someone could take a look.
[root@hadoop0 hadoop]# ./bin/hdfs namenode -format
16/05/06 22:11:07 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop0/192.168.81.129
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:& &classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/com
mons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/
xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.6.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/u
sr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on T21:10Z
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
16/05/06 22:11:07 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/05/06 22:11:07 INFO namenode.NameNode: createNameNode [-format]
16/05/06 22:11:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-f5-4882-89dc-ecf
16/05/06 22:11:10 INFO namenode.FSNamesystem: No KeyProvider found.
16/05/06 22:11:10 INFO namenode.FSNamesystem: fsLock is fair:true
16/05/06 22:11:10 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/05/06 22:11:10 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/05/06 22:11:10 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/05/06 22:11:10 INFO blockmanagement.BlockManager: The block deletion will start around 2016 May 06 22:11:10
16/05/06 22:11:10 INFO util.GSet: Computing capacity for map BlocksMap
16/05/06 22:11:10 INFO util.GSet: VM type       = 32-bit
16/05/06 22:11:10 INFO util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
16/05/06 22:11:10 INFO util.GSet: capacity      = 2^22 = 4194304 entries
16/05/06 22:11:10 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/05/06 22:11:10 INFO blockmanagement.BlockManager: defaultReplication         = 1
16/05/06 22:11:10 INFO blockmanagement.BlockManager: maxReplication             = 512
16/05/06 22:11:10 INFO blockmanagement.BlockManager: minReplication             = 1
16/05/06 22:11:10 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/05/06 22:11:10 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/05/06 22:11:10 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/05/06 22:11:10 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/05/06 22:11:10 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/05/06 22:11:10 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/05/06 22:11:10 INFO namenode.FSNamesystem: supergroup          = supergroup
16/05/06 22:11:10 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/05/06 22:11:10 INFO namenode.FSNamesystem: HA Enabled: false
16/05/06 22:11:10 INFO namenode.FSNamesystem: Append Enabled: true
16/05/06 22:11:11 INFO util.GSet: Computing capacity for map INodeMap
16/05/06 22:11:11 INFO util.GSet: VM type       = 32-bit
16/05/06 22:11:11 INFO util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
16/05/06 22:11:11 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/05/06 22:11:11 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/05/06 22:11:11 INFO util.GSet: Computing capacity for map cachedBlocks
16/05/06 22:11:11 INFO util.GSet: VM type       = 32-bit
16/05/06 22:11:11 INFO util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
16/05/06 22:11:11 INFO util.GSet: capacity      = 2^19 = 524288 entries
16/05/06 22:11:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.6033
16/05/06 22:11:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/05/06 22:11:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/05/06 22:11:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/05/06 22:11:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/05/06 22:11:11 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/05/06 22:11:11 INFO util.GSet: VM type       = 32-bit
16/05/06 22:11:11 INFO util.GSet: 0.447746% max memory 888.9 MB = 273.1 KB
16/05/06 22:11:11 INFO util.GSet: capacity      = 2^16 = 65536 entries
16/05/06 22:11:11 INFO namenode.NNConf: ACLs enabled? false
16/05/06 22:11:11 INFO namenode.NNConf: XAttrs enabled? true
16/05/06 22:11:11 INFO namenode.NNConf: Maximum size of an xattr: 16384
Re-format filesystem in Storage Directory /usr/local/hadoop/tmp/dfs/name ? (Y or N) Y
16/05/06 22:11:30 INFO namenode.FSImage: Allocated new BlockPoolId: BP--192.168.81.129-9
16/05/06 22:11:30 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
16/05/06 22:11:30 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/05/06 22:11:30 INFO util.ExitUtil: Exiting with status 0
16/05/06 22:11:30 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/192.168.81.129
************************************************************/
16/05/06 22:11:30 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
The format itself already succeeded.
It's the other services that didn't come up; take a look at their logs.
For the configuration you can refer to this guide:
Fully distributed, high-reliability Hadoop 2.x installation guide (using Hadoop 2.2 as the example)
What about disk space?
Thanks everyone, I have the Hadoop cluster up now. The namenode problem turned out to be that, after changing the configuration several times, I had re-run -format repeatedly, so the namenode's ID changed while the datanode's ID stayed the same. Once I overwrote the namenode's ID with the datanode's ID, the cluster started up successfully.
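For anyone hitting the same symptom: in Hadoop 2.x those IDs live in the VERSION files under the namenode and datanode storage directories, so the mismatch is easy to confirm before deciding which side to fix. A minimal sketch of that check, assuming the /usr/local/hadoop/tmp layout from the log above; the datanode path in particular is an assumption and depends on dfs.datanode.data.dir.

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

public class ClusterIdCheck {
    // VERSION is a plain key=value file, so java.util.Properties can read it.
    private static String readClusterId(String versionFile) throws Exception {
        Properties props = new Properties();
        InputStream in = new FileInputStream(versionFile);
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props.getProperty("clusterID");
    }

    public static void main(String[] args) throws Exception {
        // Adjust these paths to your dfs.namenode.name.dir / dfs.datanode.data.dir values.
        String nn = readClusterId("/usr/local/hadoop/tmp/dfs/name/current/VERSION");
        String dn = readClusterId("/usr/local/hadoop/tmp/dfs/data/current/VERSION");
        System.out.println("namenode clusterID = " + nn);
        System.out.println("datanode clusterID = " + dn);
        System.out.println("IDs match: " + (nn != null && nn.equals(dn)));
    }
}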