Back up the following critical data before attempting an upgrade.
On the node that hosts the NameNode, open the Hadoop Command Line shortcut (or open a command window in the Hadoop directory). As the hadoop user, go to the HDFS home directory:

runas /user:hadoop "cmd /K cd %HDFS_HOME%"

Run the fsck command to check for file system errors:

hdfs fsck / -files -blocks -locations > dfs-old-fsck-1.log

The console output is redirected to the dfs-old-fsck-1.log file.

Capture the complete namespace directory tree of the file system:
If you are upgrading from HDP 1.3:

hdfs dfs -lsr / > dfs-old-lsr-1.log

If you are upgrading from a more recent release:

hdfs dfs -ls -R / > dfs-old-lsr-1.log
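The saved listing is mainly useful as a baseline to compare against a fresh listing taken after the upgrade. As a minimal sketch (the post-upgrade log name dfs-new-lsr-1.log and the helper names are assumptions, and the parser assumes the standard eight-column `ls -R` output), the two recursive listings can be compared path by path to spot files that disappeared:

```python
# Compare two recursive HDFS listings ('hdfs dfs -ls -R' output) and report
# paths that were present before the upgrade but are missing afterwards.
# Assumption: dfs-new-lsr-1.log is a listing captured after the upgrade.

def listing_paths(log_text):
    """Extract the path column from 'hdfs dfs -ls -R' output lines."""
    paths = set()
    for line in log_text.splitlines():
        # Listing lines have 8 whitespace-separated columns; the path is the
        # 8th. Splitting at most 7 times keeps spaces inside file names intact
        # and skips non-listing lines such as "Found N items".
        fields = line.split(None, 7)
        if len(fields) == 8 and fields[7].startswith("/"):
            paths.add(fields[7])
    return paths

def missing_paths(old_log, new_log):
    """Return paths that appear in the old listing but not the new one."""
    return sorted(listing_paths(old_log) - listing_paths(new_log))

# Minimal usage demo with inline sample listings; in practice, read the
# contents of dfs-old-lsr-1.log and dfs-new-lsr-1.log instead.
old_sample = (
    "-rw-r--r--   3 hadoop supergroup  1366 2015-01-01 12:00 /apps/data/part-0\n"
    "drwxr-xr-x   - hadoop supergroup     0 2015-01-01 12:00 /apps/data\n"
)
new_sample = "drwxr-xr-x   - hadoop supergroup     0 2015-01-01 12:00 /apps/data\n"
print(missing_paths(old_sample, new_sample))  # -> ['/apps/data/part-0']
```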
Create a list of DataNodes in the cluster:

hdfs dfsadmin -report > dfs-old-report-1.log

Capture the output from the fsck command:

hdfs fsck / -blocks -locations -files > fsck-old-report-1.log

Verify that there are no missing or corrupted files or replicas in the fsck command output.

Save the HDFS namespace:
Place the NameNode in safe mode to keep HDFS from accepting any new writes:

hdfs dfsadmin -safemode enter

Save the namespace:

hdfs dfsadmin -saveNamespace

Warning: From this point on, HDFS should not accept any new writes. Stay in safe mode!
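Before continuing, it is worth confirming that the NameNode is still in safe mode. `hdfs dfsadmin -safemode get` prints a status line such as `Safe mode is ON`; a minimal sketch that checks captured output (the helper name is illustrative, and the exact status wording is an assumption based on standard HDFS output):

```python
# Check the status line printed by 'hdfs dfsadmin -safemode get' to confirm
# the NameNode is still in safe mode before proceeding.

def in_safe_mode(status_output):
    """Return True if the dfsadmin status output reports 'Safe mode is ON'."""
    return "Safe mode is ON" in status_output

# Example: feed in the captured command output before proceeding.
print(in_safe_mode("Safe mode is ON"))   # -> True
print(in_safe_mode("Safe mode is OFF"))  # -> False
```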
Finalize the namespace:

hdfs namenode -finalize

On the machine that hosts the NameNode, copy the following checkpoint directories into a backup directory:

%HADOOP_HDFS_HOME%\hdfs\nn\edits\current
%HADOOP_HDFS_HOME%\hdfs\nn\edits\image
%HADOOP_HDFS_HOME%\hdfs\nn\edits\previous.checkpoint
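The copy step above can be scripted. A minimal sketch using Python's shutil (the backup destination C:\backup\nn and the helper name are assumptions; HADOOP_HDFS_HOME must be set in the environment, and the destination must have enough free space):

```python
# Copy the NameNode checkpoint directories into a backup location.
# Assumption: the destination (C:\backup\nn below) is any directory with
# enough free space to hold the checkpoint data.
import os
import shutil

# Checkpoint directories relative to %HADOOP_HDFS_HOME%, as listed above.
CHECKPOINT_DIRS = [
    r"hdfs\nn\edits\current",
    r"hdfs\nn\edits\image",
    r"hdfs\nn\edits\previous.checkpoint",
]

def back_up_checkpoints(hdfs_home, backup_root):
    """Copy each checkpoint directory under hdfs_home into backup_root."""
    for rel in CHECKPOINT_DIRS:
        src = os.path.join(hdfs_home, rel)
        dst = os.path.join(backup_root, rel)
        if os.path.isdir(src):
            # dirs_exist_ok lets the copy be re-run into the same backup root.
            shutil.copytree(src, dst, dirs_exist_ok=True)

if __name__ == "__main__":
    hdfs_home = os.environ.get("HADOOP_HDFS_HOME")
    if hdfs_home:
        back_up_checkpoints(hdfs_home, r"C:\backup\nn")
```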

