Upgrading to CDH 5.x Using a Rolling Upgrade
Minimum Required Role: Cluster Administrator (also provided by Full Administrator)
This topic describes how to perform a rolling upgrade from any version of CDH 5.x to a higher version of CDH 5.x, using Cloudera Manager and parcels. The minor version of Cloudera Manager you use to perform the upgrade must be equal to or greater than the CDH minor version. To upgrade Cloudera Manager, see Overview of Upgrading Cloudera Manager.
A rolling upgrade has the following requirements:
- CDH was installed using Cloudera Manager and parcels. You can migrate your cluster from using packages to using parcels.
- The cluster uses a Cloudera Enterprise license.
- High availability is enabled for HDFS.

Note the following before you begin the rolling upgrade:
- After the upgrade has completed, do not remove the old parcels if there are MapReduce or Spark jobs currently running. These jobs still use the old parcels and must be restarted to use the newly upgraded parcel.
- Ensure that Oozie jobs are idempotent.
- Do not use Oozie Shell Actions to run Hadoop-related commands.
- Rolling upgrade of Spark Streaming jobs is not supported. Restart the streaming job once the upgrade is complete, so that the newly deployed version starts being used.
- Runtime libraries must be packaged as part of the Spark application.
- You must use the distributed cache to propagate the job configuration files from the client gateway machines.
- Do not build "uber" or "fat" JAR files that contain third-party dependencies or CDH classes, as these can conflict with the classes that YARN, Oozie, and other services automatically add to the CLASSPATH.
- Build your Spark applications without bundling CDH JARs. A minimal packaging and submission sketch follows this list.
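As an illustration of these guidelines, the following spark-submit invocation is a minimal sketch; the application class, JAR, and configuration file names are hypothetical. The job configuration files are propagated through the distributed cache with --files, and the application JAR is built without bundling CDH or third-party classes that the cluster already provides:

# Hypothetical class, JAR, and configuration file names.
# --files ships the job configuration through the distributed cache;
# myapp.jar is built without bundling CDH JARs.
spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --deploy-mode cluster \
  --files job.properties,hive-site.xml \
  myapp.jar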

[Not required for CDH maintenance release upgrades.]
The version numbers of maintenance releases differ only in the third digit, for example, when upgrading from CDH 5.8.0 to CDH 5.8.2. See Maintenance Version Upgrades.
To upgrade CDH using a rolling upgrade:
- Step 1: Collect Upgrade Information
- Step 2: Complete Pre-Upgrade Steps
- Step 3: Ensure High Availability Is Enabled
- Step 4: Back Up HDFS Metadata
- Step 5: Back Up Databases
- Step 6: Run the Upgrade Wizard
- Step 7: Recover from Failed Steps or Perform a Manual Upgrade
- Step 8: Remove the Previous CDH Version Packages and Refresh Symlinks
- Step 9: Finalize HDFS Rolling Upgrade
- Step 10: Exit Maintenance Mode
- Step 11: Clear Browser Cache (Hue only)
Step 1: Collect Upgrade Information
- Host credentials. You must have SSH access and be able to log in using a root account or an account that has password-less sudo permission.
- The version of Cloudera Manager used in your cluster. Go to Support > About.
- The version of the JDK deployed in the cluster. Go to Support > About.
- The version of CDH. The CDH version number displays next to the cluster name on the Home page.
- Whether the cluster was installed using parcels or packages. This information displays next to the CDH version on the Home page of Cloudera Manager.
- The services enabled in your cluster. These are listed under the cluster name on the Home page.
- Operating system type and version. Go to Hosts and click a hostname in the list. The operating system type and version display in the Distribution row of the Details section. (A command-line alternative for collecting the JDK and operating system versions is sketched after this list.)
- Database information for the databases used by Sqoop, Oozie, Hue, Hive Metastore, and Sentry Server (required only if these services are enabled in the cluster).
Gather the following information:
- Type of database (PostgreSQL, Embedded PostgreSQL, MySQL, MariaDB, or Oracle)
- Hostnames of the databases
- Credentials for the databases
To locate database information:
- Sqoop, Oozie, and Hue – Go to the service, select Configuration, and select the Database category.
- Hive Metastore – Go to the Hive service, select Configuration, and select the Hive Metastore Database category.
- Sentry – Go to the Sentry service, select Configuration, and select the Sentry Server Database category.
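If you prefer to collect the JDK and operating system versions from the command line rather than through Cloudera Manager, a quick check over SSH looks like the following; the hostname is illustrative and the check must be repeated for each cluster host:

# Illustrative hostname; repeat for every host in the cluster
ssh host01.example.com 'java -version 2>&1 | head -1; cat /etc/*release | head -2'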
Step 2: Complete Pre-Upgrade Steps
Step 3: Ensure High Availability Is Enabled
See HDFS High Availability for instructions. Enabling automatic failover is optional; automatic failover does not affect the rolling restart operation. If you have JobTracker high availability configured, Cloudera Manager fails over the JobTracker during the rolling restart, but configuring JobTracker high availability is not a requirement for performing a rolling upgrade.
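As a quick check that HDFS high availability is functioning before you begin, you can query the state of each NameNode. The NameNode IDs nn1 and nn2 below are illustrative; use the values defined by dfs.ha.namenodes.<nameservice> in your HDFS configuration. One NameNode should report active and the other standby:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2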
Step 4: Back Up HDFS Metadata
[Not required for CDH maintenance release upgrades.]
Perform this step when upgrading between minor releases, for example:
- CDH 5.0 or 5.1 to 5.2 or higher
- CDH 5.2 or 5.3 to 5.4 or higher
Back up HDFS metadata using the following command, where <local_directory> is a path on the local filesystem of the host where you run the command:
hdfs dfsadmin -fetchImage <local_directory>
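For example, to save the current fsimage to a local backup directory (the path is illustrative):

hdfs dfsadmin -fetchImage /data/backups/hdfs-metadata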
Step 5: Back Up Databases

For each of the following services that is enabled in your cluster, back up its database before upgrading. The table shows where to find each database's settings:

Service | Where to find database information
---|---
Sqoop | Go to the Sqoop service, select Configuration, and select the Database category.
Hue | Go to the Hue service, select Configuration, and select the Database category.
Oozie | Go to the Oozie service, select Configuration, and select the Database category.
Cloudera Navigator Audit Server | Go to the Cloudera Management Service, select Configuration, and select the Database category for the role.
Cloudera Navigator Metadata Server | Go to the Cloudera Management Service, select Configuration, and select the Database category for the role.
Activity Monitor | Go to the Cloudera Management Service, select Configuration, and select the Database category for the role.
Reports Manager | Go to the Cloudera Management Service, select Configuration, and select the Database category for the role.
Sentry Server | Go to the Sentry service, select Configuration, and select the Sentry Server Database category.
Hive Metastore | Go to the Hive service, select Configuration, and select the Hive Metastore Database category.
- If not already stopped, stop the service:
  - On the Home > Status tab, click the dropdown menu to the right of the service name and select Stop.
  - Click Stop in the next screen to confirm. When you see a Finished status, the service has stopped.
- Back up the database. See Backing Up Databases for detailed instructions for each supported type of database. (A minimal MySQL example is sketched after this procedure.)
- Restart the service:
  - On the Home > Status tab, click the dropdown menu to the right of the service name and select Start.
  - Click Start in the next screen to confirm. When you see a Finished status, the service has started.
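For example, if the Hue service uses a MySQL database (the host, user, database name, and output file below are illustrative), a dump-based backup could look like the following; see Backing Up Databases for the supported procedure for your database type:

mysqldump -h db01.example.com -u hue -p hue > hue-backup.sql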
Step 6: Run the Upgrade Wizard

- If your cluster has Kudu 1.4.0 (or lower) installed, deactivate the existing Kudu parcel. Starting with Kudu 1.5.0 / CDH 5.13, Kudu is part of the CDH parcel and does not need to be installed separately.
- If your cluster has Spark 2.0 or Spark 2.1 installed and you want to upgrade to CDH 5.13 or higher, you must first upgrade to Spark 2.1 release 2 or later before upgrading CDH. To install these versions of Spark, do the following before running the CDH Upgrade Wizard:
  - Install the Custom Service Descriptor (CSD) file. See:
    - Installing Spark 2.1
    - Installing Spark 2.2
    Note: Spark 2.2 requires that JDK 1.8 be deployed throughout the cluster. JDK 1.7 is not supported for Spark 2.2.
  - Download, distribute, and activate the parcel for the version of Spark that you are installing:
    - Spark 2.1 release 2: The parcel includes "cloudera2" in its name.
    - Spark 2.2 release 1: The parcel includes "cloudera1" in its name.
- From the Home > Status tab, click the dropdown menu next to the cluster name and select Upgrade Cluster.
  The Getting Started page of the upgrade wizard displays.
- If the option to pick between packages and parcels displays, select Use Parcels.
- In the Choose CDH Version (Parcels) field, select the CDH version. If no qualifying parcels are listed, or you want to upgrade to a different version, click the Modify the Remote Parcel Repository URLs link to go to the configuration page for Remote Parcel Repository URLs and add the appropriate URL to the configuration. See Parcel Configuration Settings for information about entering the correct URL for parcel repositories. Click Continue.
- If you previously installed the GPLEXTRAS parcel, download and distribute the version of the GPLEXTRAS parcel that matches the version of CDH that you are upgrading to.
- Read the notices for steps you must complete before upgrading, click the Yes, I ... checkboxes after completing the steps, and click Continue.
  If you downloaded a new version of the GPLEXTRAS parcel, the Upgrade Wizard displays a message stating that the GPLEXTRAS parcel conflicts with the version of the CDH parcel. Select the option to resolve the conflicts automatically and click Continue.
  Cloudera Manager deactivates the old version of the GPLEXTRAS parcel, activates the new version, and verifies that all hosts have the correct software installed.
- Click Continue.
The selected parcels are downloaded and distributed.
- Click Continue.
The Host Inspector runs and displays the CDH version on the hosts.
- Click Continue.
The Choose Upgrade Procedure screen displays.
- Select Rolling Restart. Cloudera Manager upgrades services and performs a rolling restart. This option is only available if you have enabled high availability for HDFS. Services that do not support rolling restart undergo a normal restart and are not available during the restart process.
- (Optional) Configure the following parameters for the rolling restart:
  - Batch Size – The number of roles to include in a batch. Cloudera Manager restarts the worker roles rack by rack, in alphabetical order, and within each rack, hosts are restarted in alphabetical order. If you use the default replication factor of 3, Hadoop tries to keep the replicas on at least two different racks, so if you have multiple racks you can use a higher batch size than the default of 1. However, a batch size that is too high means that fewer worker roles are active at any time during the upgrade, which can cause temporary performance degradation. If you are using a single rack, restart one worker node at a time to ensure data availability during the upgrade.
  - The amount of time Cloudera Manager waits before starting the next batch.
  - The number of batch failures that cause the entire rolling restart to fail. For example, if you have a very large cluster, you can use this option to allow some failures when you know that the cluster remains functional while some worker roles are down.
- Click Continue.
The Upgrade Cluster Command screen displays the result of the commands run by the wizard as it shuts down services, activates the new parcel, upgrades services, deploys client configuration files, restarts services, and performs a rolling restart of the services that support it.
If your cluster was previously installed or upgraded using packages, the wizard may indicate that some services cannot start because their parcels are not available. To download the required parcels:
- In another browser tab, open the Cloudera Manager Admin Console.
- Open the Parcels page.
- Locate the row containing the missing parcel and click the button to Download, Distribute, and then Activate the parcel.
- Return to the upgrade wizard and click the Retry button.
The Upgrade Wizard continues upgrading the cluster.
- Click Finish to return to the Home page.
Step 7: Recover from Failed Steps or Perform a Manual Upgrade
If one or more hosts fail to restart, you can resume the rolling restart after fixing the problems that caused the upgrade to fail. Cloudera Manager will skip restarting roles that have already successfully restarted.
The actions performed by the upgrade wizard are listed in Performing Upgrade Wizard Actions Manually. If any of the steps in the Upgrade Cluster Command screen fail, complete the steps as described in that section before proceeding.
Step 8: Remove the Previous CDH Version Packages and Refresh Symlinks
[Not required for CDH maintenance release upgrades.]
Skip this step if your previous installation or upgrade used parcels.
If your previous installation of CDH was done using packages, remove those packages on all hosts where you installed the parcels and refresh the symlinks so that clients will run the new software versions.
- If your Hue service uses the embedded SQLite database, back up /var/lib/hue/desktop.db to a location outside of /var/lib/hue, because this directory is removed when the packages are removed. (A minimal sketch of the back-up and restore commands follows this procedure.)
- Uninstall the CDH packages on each host:
  - Not including Impala and Search:
    - RHEL:
      sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat hue-common sqoop2-client
    - SLES:
      sudo zypper remove bigtop-utils bigtop-jsvc bigtop-tomcat hue-common sqoop2-client
    - Ubuntu or Debian:
      sudo apt-get purge bigtop-utils bigtop-jsvc bigtop-tomcat hue-common sqoop2-client
  - Including Impala and Search:
    - RHEL:
      sudo yum remove 'bigtop-*' hue-common impala-shell solr-server sqoop2-client hbase-solr-doc avro-libs crunch-doc avro-doc solr-doc
    - SLES:
      sudo zypper remove 'bigtop-*' hue-common impala-shell solr-server sqoop2-client hbase-solr-doc avro-libs crunch-doc avro-doc solr-doc
    - Ubuntu or Debian:
      sudo apt-get purge 'bigtop-*' hue-common impala-shell solr-server sqoop2-client hbase-solr-doc avro-libs crunch-doc avro-doc solr-doc
- Restart all the Cloudera Manager Agents to force an update of the symlinks to point to the newly installed components on each host:
sudo service cloudera-scm-agent restart
- If your Hue service uses the embedded SQLite database, restore the database you backed up:
- Stop the Hue service.
- Copy the backup from the temporary location to the newly created Hue database directory, /var/lib/hue.
- Start the Hue service.
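The following is a minimal sketch of the embedded SQLite back-up and restore commands referenced above; the temporary location /tmp/hue-backup is illustrative:

# Before removing the packages: copy the database out of /var/lib/hue
sudo mkdir -p /tmp/hue-backup
sudo cp /var/lib/hue/desktop.db /tmp/hue-backup/desktop.db

# After the packages are removed and the Hue service is stopped: restore it
sudo cp /tmp/hue-backup/desktop.db /var/lib/hue/desktop.db
# Adjust ownership if required (the hue user and group are typical defaults)
sudo chown hue:hue /var/lib/hue/desktop.db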
Step 9: Finalize HDFS Rolling Upgrade
[Not required for CDH maintenance release upgrades.]
Perform this step when upgrading between minor releases, for example:
- CDH 5.0 or 5.1 to 5.2 or higher
- CDH 5.2 or 5.3 to 5.4 or higher
To determine if you can finalize, run important workloads and ensure that they are successful. Once you have finalized the upgrade, you cannot roll back to a previous version of HDFS without using backups. Verifying that you are ready to finalize the upgrade can take a long time.
- Go to the HDFS service.
- Select Actions > Finalize Rolling Upgrade and click Finalize Rolling Upgrade to confirm.
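If you want to confirm the rolling upgrade status from the command line before finalizing, the HDFS dfsadmin command reports whether a rolling upgrade is still in progress; run it as a user with HDFS superuser privileges:

hdfs dfsadmin -rollingUpgrade query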
Step 10: Exit Maintenance Mode
If you entered maintenance mode during this upgrade, exit maintenance mode.
Step 11: Clear Browser Cache (Hue only)
If you have enabled the Hue service in your upgraded cluster, users may need to clear the cache in their Web browsers before accessing Hue.