You must add the following information to the hdfs-site.xml file on every host in your cluster:
Table 13.4. hdfs-site.xml
| Property Name | Property Value | Description | 
|---|---|---|
| dfs.block.access.token.enable | true | If true, access tokens are used as capabilities for accessing DataNodes. If false, no access tokens are checked when accessing DataNodes. | 
| dfs.namenode.kerberos.principal | nn/_HOST@EXAMPLE.COM | Kerberos principal name for the NameNode. | 
| dfs.secondary.namenode.kerberos.principal | nn/_HOST@EXAMPLE.COM | Kerberos principal name for the secondary NameNode. | 
| dfs.secondary.http.address Note: cluster variant | Example: ip-10-72-235-178.ec2.internal:50090 | Address of the secondary NameNode web server. | 
| dfs.secondary.https.port | 50490 | The https port to which the secondary NameNode binds. | 
| dfs.web.authentication.kerberos.principal | HTTP/_HOST@EXAMPLE.COM | The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. | 
| dfs.web.authentication.kerberos.keytab | /etc/security/keytabs/spnego.service.keytab  | The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. | 
| dfs.datanode.kerberos.principal | dn/_HOST@EXAMPLE.COM | The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name. | 
| dfs.namenode.keytab.file | /etc/security/keytabs/nn.service.keytab | Combined keytab file containing the NameNode service and host principals. | 
| dfs.secondary.namenode.keytab.file | /etc/security/keytabs/nn.service.keytab | Combined keytab file containing the NameNode service and host principals. | 
| dfs.datanode.keytab.file | /etc/security/keytabs/dn.service.keytab | The filename of the keytab file for the DataNode. | 
| dfs.https.port | 50470 | The https port to which the NameNode binds. | 
| dfs.https.address | Example: ip-10-111-59-170.ec2.internal:50470 | The https address to which the NameNode binds. | 
| dfs.namenode.kerberos.internal.spnego.principal | ${dfs.web.authentication.kerberos.principal} | |
| dfs.secondary.namenode.kerberos.internal.spnego.principal | ${dfs.web.authentication.kerberos.principal} | |
| dfs.datanode.address | Example: 0.0.0.0:1019 | The DataNode address, with a privileged port (any port number below 1024). | 
| dfs.datanode.http.address | Example: 0.0.0.0:1022 | The DataNode HTTP address, with a privileged port (any port number below 1024). | 
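The `_HOST` token in the principal names above is expanded by Hadoop at service start-up into each node's own fully qualified host name, which is why the same configuration file can be distributed unchanged to every host. A minimal sketch of that substitution (illustrative only, not Hadoop's actual implementation; the principal and hostname are the examples from the table):

```python
import socket

def expand_principal(principal, hostname=None):
    """Replace the _HOST token with the node's fully qualified host name,
    mirroring how Hadoop expands principals such as nn/_HOST@EXAMPLE.COM."""
    host = hostname or socket.getfqdn().lower()
    return principal.replace("_HOST", host)

# Using the example DataNode principal and hostname from the table above:
print(expand_principal("dn/_HOST@EXAMPLE.COM", "ip-10-72-235-178.ec2.internal"))
# dn/ip-10-72-235-178.ec2.internal@EXAMPLE.COM
```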
The XML for these entries:
<property> 
        <name>dfs.block.access.token.enable</name> 
        <value>true</value> 
        <description> If "true", access tokens are used as capabilities
        for accessing datanodes. If "false", no access tokens are checked on
        accessing datanodes. </description> 
</property>   
<property> 
        <name>dfs.namenode.kerberos.principal</name> 
        <value>nn/_HOST@EXAMPLE.COM</value> 
        <description> Kerberos principal name for the
        NameNode </description> 
</property>   
<property> 
        <name>dfs.secondary.namenode.kerberos.principal</name> 
        <value>nn/_HOST@EXAMPLE.COM</value>    
        <description>Kerberos principal name for the secondary NameNode.    
        </description>          
</property>      
<property>     
        <!--cluster variant -->    
        <name>dfs.secondary.http.address</name>    
        <value>ip-10-72-235-178.ec2.internal:50090</value>    
        <description>Address of secondary namenode web server</description>  
</property>    
<property>    
        <name>dfs.secondary.https.port</name>    
        <value>50490</value>    
        <description>The https port where secondary-namenode
        binds</description>  
</property>    
<property>    
        <name>dfs.web.authentication.kerberos.principal</name>    
        <value>HTTP/_HOST@EXAMPLE.COM</value>    
        <description> The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. 
        The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP
        SPNEGO specification.    
        </description>  
</property>    
<property>    
        <name>dfs.web.authentication.kerberos.keytab</name>    
        <value>/etc/security/keytabs/spnego.service.keytab</value>    
        <description>The Kerberos keytab file with the credentials for the HTTP
        Kerberos principal used by Hadoop-Auth in the HTTP endpoint.    
        </description>  
</property>    
<property>    
        <name>dfs.datanode.kerberos.principal</name>    
        <value>dn/_HOST@EXAMPLE.COM</value>  
        <description>        
        The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real
        host name.    
        </description>  
</property>    
<property>    
        <name>dfs.namenode.keytab.file</name>    
        <value>/etc/security/keytabs/nn.service.keytab</value>  
        <description>        
        Combined keytab file containing the namenode service and host
        principals.    
        </description>  
</property>    
<property>     
        <name>dfs.secondary.namenode.keytab.file</name>    
        <value>/etc/security/keytabs/nn.service.keytab</value>  
        <description>        
        Combined keytab file containing the namenode service and host
        principals.    
        </description>  
</property>    
<property>     
        <name>dfs.datanode.keytab.file</name>    
        <value>/etc/security/keytabs/dn.service.keytab</value>  
        <description>        
        The filename of the keytab file for the DataNode.    
        </description>  
</property>    
<property>    
        <name>dfs.https.port</name>    
        <value>50470</value>  
        <description>The https port where namenode
        binds</description>    
</property>    
<property>    
        <name>dfs.https.address</name>    
        <value>ip-10-111-59-170.ec2.internal:50470</value>  
        <description>The https address where namenode binds</description>    
</property>    
<property>  
        <name>dfs.namenode.kerberos.internal.spnego.principal</name>  
        <value>${dfs.web.authentication.kerberos.principal}</value> 
</property>   
<property>  
        <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>  
        <value>${dfs.web.authentication.kerberos.principal}</value> 
</property>
<property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:1019</value>
        <description>The DataNode address, with a privileged port (any port
        number below 1024).</description>
</property>
<property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:1022</value>
        <description>The DataNode HTTP address, with a privileged port (any
        port number below 1024).</description>
</property>

On all secure DataNodes, you must set the user that the DataNode daemon runs as after it drops privileges. For example:
export HADOOP_SECURE_DN_USER=$HDFS_USER
| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| The DataNode daemon must be started as root. | 
Optionally, you can allow that user to access the directories where pid and log files are kept. For example:
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop/$HADOOP_SECURE_DN_USER
export HADOOP_SECURE_DN_LOG_DIR=/var/run/hadoop/$HADOOP_SECURE_DN_USER
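After assembling hdfs-site.xml, a quick sanity check that all of the Kerberos-related properties from Table 13.4 are present can save a failed service restart. A minimal sketch using only Python's standard library (the required-name list is taken from the table above; adapt it to your deployment):

```python
import xml.etree.ElementTree as ET

# Kerberos-related property names from Table 13.4.
REQUIRED = {
    "dfs.block.access.token.enable",
    "dfs.namenode.kerberos.principal",
    "dfs.secondary.namenode.kerberos.principal",
    "dfs.web.authentication.kerberos.principal",
    "dfs.web.authentication.kerberos.keytab",
    "dfs.datanode.kerberos.principal",
    "dfs.namenode.keytab.file",
    "dfs.secondary.namenode.keytab.file",
    "dfs.datanode.keytab.file",
}

def missing_properties(conf_xml):
    """Return required property names absent from an hdfs-site.xml string."""
    root = ET.fromstring(conf_xml)
    present = {p.findtext("name") for p in root.iter("property")}
    return sorted(REQUIRED - present)

# An incomplete configuration: only one of the nine required entries is set.
sample = """<configuration>
  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
  </property>
</configuration>"""
print(len(missing_properties(sample)))  # 8
```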


