hdfs getconf command examples

Submitted by admin on Mon, 06/03/2019 - 03:45

hdfs getconf is a utility for retrieving configuration values from the cluster's configuration files. It is not limited to HDFS: it also returns the effective values for YARN, core-site and other settings.

Get list of namenodes in the cluster

$ hdfs getconf -namenodes
<namenode>
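
The list of namenodes is handy in scripts, for example to check each one in turn (a minimal sketch; <namenode> above is a placeholder for the actual hostname):

$ for nn in $(hdfs getconf -namenodes); do ping -c 1 "$nn"; done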

Get list of secondary namenodes in the cluster

$ hdfs getconf -secondaryNameNodes
0.0.0.0

Get list of backup nodes in the cluster

$ hdfs getconf -backupNodes
0.0.0.0

Get include file path that defines the datanodes that can join the cluster

$ hdfs getconf -includeFile
Configuration dfs.hosts is missing

Get exclude file path that defines the datanodes that need to be excluded
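
The corresponding flag is -excludeFile. The sample output below assumes dfs.hosts.exclude is set to the path shown later in this article:

$ hdfs getconf -excludeFile
/opt/hadoop/hadoop-2.7.3/etc/hadoop/dfs.exclude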

Get the namenode RPC addresses

$ hdfs getconf -nnRpcAddresses
<namenode>:9000

Get a specific key from the configuration

This command is particularly useful for debugging: you can use it to see the actual value a configuration property resolves to on your running cluster.
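
Because getconf prints only the resolved value to standard output, it is easy to use in shell scripts. A minimal sketch (the variable name is illustrative):

$ block_size=$(hdfs getconf -confKey dfs.blocksize)
$ echo $block_size
134217728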

dfs.namenode.name.dir

$ hdfs getconf -confKey dfs.namenode.name.dir
file:///disk/c0t1,/disk/c1t1

fs.defaultFS

$ hdfs getconf -confKey fs.defaultFS
hdfs://zpool02:9000

yarn.resourcemanager.address

$ hdfs getconf -confKey yarn.resourcemanager.address
0.0.0.0:8032

mapreduce.framework.name

$ hdfs getconf -confKey mapreduce.framework.name
yarn

dfs.default.chunk.view.size

$ hdfs getconf -confKey dfs.default.chunk.view.size
32768

dfs.namenode.fs-limits.max-blocks-per-file

$ hdfs getconf -confKey dfs.namenode.fs-limits.max-blocks-per-file
1048576

dfs.permissions.enabled

$ hdfs getconf -confKey dfs.permissions.enabled
true

dfs.namenode.acls.enabled

$ hdfs getconf -confKey dfs.namenode.acls.enabled
false

dfs.replication

$ hdfs getconf -confKey dfs.replication
2

dfs.replication.max

$ hdfs getconf -confKey dfs.replication.max
512

dfs.namenode.replication.min

$ hdfs getconf -confKey dfs.namenode.replication.min
1

dfs.blocksize

$ hdfs getconf -confKey dfs.blocksize
134217728

dfs.client.block.write.retries

$ hdfs getconf -confKey dfs.client.block.write.retries
3

dfs.hosts.exclude

$ hdfs getconf -confKey dfs.hosts.exclude
/opt/hadoop/hadoop-2.7.3/etc/hadoop/dfs.exclude

dfs.namenode.checkpoint.edits.dir

$ hdfs getconf -confKey dfs.namenode.checkpoint.edits.dir
file:///tmp/hadoop-hadoop/dfs/namesecondary

dfs.image.compress

$ hdfs getconf -confKey dfs.image.compress
false

dfs.image.compression.codec

$ hdfs getconf -confKey dfs.image.compression.codec
org.apache.hadoop.io.compress.DefaultCodec

dfs.user.home.dir.prefix

$ hdfs getconf -confKey dfs.user.home.dir.prefix
/user

As mentioned at the beginning of this article, hdfs getconf -confKey can also return values from core-site.xml, yarn-site.xml, and so on.

io.file.buffer.size

$ hdfs getconf -confKey io.file.buffer.size
4096

io.bytes-per-checksum

$ hdfs getconf -confKey io.bytes-per-checksum
512

io.seqfile.local.dir

$ hdfs getconf -confKey io.seqfile.local.dir
/tmp/hadoop-hadoop/io/local

You can dump Hadoop config by running:

$ hadoop org.apache.hadoop.conf.Configuration
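
This prints the loaded configuration as XML, so a simple way to inspect it is to redirect the output to a file and search it, for example with grep (a minimal sketch; the file path is arbitrary):

$ hadoop org.apache.hadoop.conf.Configuration > /tmp/effective-config.xml
$ grep dfs.blocksize /tmp/effective-config.xml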
