Question: Formatting the HDFS NameNode fails — can anyone help me find where my configuration is wrong?
Tags: hadoop ubuntu HDFS
15/07/17 23:34:59 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/07/17 23:34:59 INFO namenode.NameNode: createNameNode [-format]
15/07/17 23:34:59 WARN common.Util: Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
15/07/17 23:34:59 WARN common.Util: Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-cda62f09-97a1-4f13-a58f-e64c01e4c1ba
15/07/17 23:34:59 INFO namenode.FSNamesystem: No KeyProvider found.
15/07/17 23:34:59 INFO namenode.FSNamesystem: fsLock is fair:true
15/07/17 23:35:00 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/07/17 23:35:00 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/07/17 23:35:00 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/07/17 23:35:00 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Jul 17 23:35:00
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map BlocksMap
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/07/17 23:35:00 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: defaultReplication = 1
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxReplication = 512
15/07/17 23:35:00 INFO blockmanagement.BlockManager: minReplication = 1
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/07/17 23:35:00 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/07/17 23:35:00 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/07/17 23:35:00 INFO namenode.FSNamesystem: fsOwner = tangxinyu (auth:SIMPLE)
15/07/17 23:35:00 INFO namenode.FSNamesystem: supergroup = supergroup
15/07/17 23:35:00 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/07/17 23:35:00 INFO namenode.FSNamesystem: HA Enabled: false
15/07/17 23:35:00 INFO namenode.FSNamesystem: Append Enabled: true
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map INodeMap
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/07/17 23:35:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map cachedBlocks
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^18 = 262144 entries
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/07/17 23:35:00 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/07/17 23:35:00 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^15 = 32768 entries
15/07/17 23:35:00 INFO namenode.NNConf: ACLs enabled? false
15/07/17 23:35:00 INFO namenode.NNConf: XAttrs enabled? true
15/07/17 23:35:00 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/07/17 23:35:00 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1283852822-127.0.1.1-1437147300357
15/07/17 23:35:00 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/hd/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:941)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/07/17 23:35:00 FATAL namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /usr/local/hd/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:941)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/07/17 23:35:00 INFO util.ExitUtil: Exiting with status 1
15/07/17 23:35:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/127.0.1.1
************************************************************/
My configuration is as follows:
core-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/tangxinyu/hadoop-2.6.0/tmp</value>
  <description>Abase for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/tangxinyu/hadoop-2.6.0/dfs/name</value>
</property>

hdfs-site.xml:
<property>
  <name>dfs.name.dir</name>
  <value>/home/tangxinyu/hadoop-2.6.0/dfs/name</value>
  <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/tangxinyu/hadoop-2.6.0/dfs/data</value>
  <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hd/dfs/name</value>
</property>

mapred-site.xml.template:
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
  <description>Host or IP and port of JobTracker.</description>
</property>
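Note two things in this configuration relative to the log above. First, `dfs.namenode.name.dir` is set twice — once in core-site.xml (where it does not belong) and once in hdfs-site.xml — and the log shows the NameNode actually used the hdfs-site.xml value, `/usr/local/hd/dfs/name`. Second, the two WARN lines say this path should be written as a URI. A possible corrected hdfs-site.xml property (a sketch only — keep whichever path you actually intend, and remove the duplicate entry from core-site.xml):

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- Expressed as a file:// URI, as the WARN in the log requests. -->
  <value>file:///usr/local/hd/dfs/name</value>
</property>
```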
Solution 1: The directory being formatted must not already exist — HDFS enforces this precisely to avoid deleting other data.
Solution 2: Looking at the error message, it seems `/usr/local/hd/dfs/name/current` cannot be created. Could this be a permissions problem?
Solution 3: Doesn't the new version no longer have this file?
Solution 4: Does the directory `/usr/local/hd/dfs/name` exist, and do you have permission to write to it?
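Combining solutions 2 and 4: the log shows `fsOwner = tangxinyu`, but on Ubuntu `/usr/local` is normally root-owned, so the format step cannot create the `current` subdirectory. A possible fix, sketched under the assumption that Hadoop runs as user `tangxinyu` and that you keep `/usr/local/hd/dfs/name` as the name directory (adjust path and user to your setup):

```shell
# Assumption: dfs.namenode.name.dir = /usr/local/hd/dfs/name and the
# NameNode runs as user "tangxinyu" (both taken from the log above).
NAME_DIR=/usr/local/hd/dfs/name

# Create the directory tree as root, then hand ownership to the hadoop user.
sudo mkdir -p "$NAME_DIR"
sudo chown -R tangxinyu:tangxinyu "$NAME_DIR"

# Per solution 1, the directory should be empty before formatting;
# clear any leftovers from a previous failed attempt.
rm -rf "$NAME_DIR"/*

# Re-run the format as tangxinyu.
hdfs namenode -format
```

Alternatively, point `dfs.namenode.name.dir` at a directory the hadoop user already owns (e.g. under `/home/tangxinyu`), which avoids the need for sudo entirely.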