Reverting gpupgrade Actions

You can optionally run gpupgrade revert after the initialize phase or after the execute phase of your upgrade. The gpupgrade revert command stops the gpupgrade processes and returns the source Greenplum Database cluster to its original state before the most recent gpupgrade initialize command.

Perform the revert phase during a scheduled downtime. Users should receive sufficient notice that the Greenplum Database cluster will be off-line for an extended period. Send a maintenance notice a week or more before you plan to run gpupgrade revert, and then a reminder notice before you begin.

The gpupgrade revert actions depend on:

  • the current phase of the gpupgrade process

  • whether copy or link mode was specified

  • whether the source cluster has a standby master and mirrors enabled

When gpupgrade revert completes, the only remaining gpupgrade artifacts are the configuration file you provided for the gpupgrade initialize command and the archived log files.

Notes

  1. gpupgrade revert does not undo any manual or script-based changes in the source cluster that were done in preparation for the upgrade process. To fully restore and utilize the source cluster, recreate any dropped database objects, and reinstall any database extensions or libraries your cluster requires.

    IMPORTANT: Using the source cluster after gpupgrade revert without restoring these changes can lead to degraded performance and possible application failures.

  2. gpupgrade revert cannot restore the source cluster after you start gpupgrade finalize. To recover the source cluster after finalize, restore from a backup.

Reverting after Initialize Phase

              Before Revert                      After Revert
            Source   Target                    Source   Target
Master      UP       Initialized but DOWN      UP       Removed
Standby     UP       Non Existent              UP       Non Existent
Primaries   UP       Initialized but DOWN      UP       Removed
Mirrors     UP       Non Existent              UP       Non Existent

You can run gpupgrade revert any time after you have run gpupgrade initialize, including before gpupgrade initialize completes. Reverting at this stage completes quickly, because the source cluster has not been altered. gpupgrade revert recovers the source Greenplum cluster to its state before gpupgrade initialize was run.

$ gpupgrade revert

The utility displays a summary message and waits for user confirmation before proceeding:

You are about to revert this upgrade.
This should be done only during a downtime window.

...

gpupgrade log files can be found on all hosts in <archive directory>

WARNING: Do not perform operations on the source and target clusters until gpupgrade revert
has completed.

Continue with gpupgrade revert?  Yy|Nn:
Revert in progress.

Stopping target cluster...                                         [SKIPPED]
Deleting target cluster data directories...                        [COMPLETE]
Deleting target tablespace directories...                          [COMPLETE]
Re-enabling source cluster...                                      [COMPLETE]
Starting source cluster...                                         [SKIPPED]
Archiving log directories...                                       [COMPLETE]
Deleting state directories on the segments...                      [COMPLETE]
Stopping hub and agents...                                         [COMPLETE]
Deleting master state directory...                                 [COMPLETE]

Revert completed successfully.

The source cluster is now running version <old-version>.
PGPORT: <source-port>
MASTER_DATA_DIRECTORY: <source-master-dir>

The gpupgrade logs can be found on the master and segment hosts in
/home/gpadmin/gpAdminLogs/gpupgrade-<upgradeID>-<timestamp>

NEXT ACTIONS
------------
To use the reverted cluster, run the "post-revert" data migration scripts, and
recreate any additional tables, indexes, and roles that were dropped or
altered to resolve migration issues.

To restart the upgrade, run "gpupgrade initialize" again.

The status of each step can be COMPLETE, FAILED, SKIPPED, or IN PROGRESS. SKIPPED indicates that the command was run before and the step has already been executed.

When the revert completes, the source Greenplum cluster is running, and the target Greenplum cluster and all files and directories that gpupgrade initialize created are removed. The configuration file you used with gpupgrade initialize is untouched, and the log files for all gpupgrade commands you ran are archived. For information about the location and content of the log files, see Archived Log Files.

If gpupgrade initialize reports problems with the source database that must be fixed before the upgrade can proceed, fix the reported problems and run gpupgrade initialize again. It is not necessary to revert before re-running gpupgrade initialize.

Reverting after Execute Phase

              Before Revert                  After Revert
            Source    Target                Source   Target
Master      Stopped   UP and populated      UP       Removed
Standby     Stopped   Non Existent          UP       Non Existent
Primaries   Stopped   UP and populated      UP       Removed
Mirrors     Stopped   Non Existent          UP       Non Existent

If your source cluster does not have a standby master and mirrors enabled, gpupgrade revert exits with an error message.

The revert speed depends on the gpupgrade mode. In link mode, gpupgrade revert uses rsync to restore the source primary segments from the mirrors, so the revert takes longer to complete. In copy mode, gpupgrade revert simply deletes the data directories that were created for the target cluster. Deleting the data can take some time, but it does not require transferring data over the network as link mode does.

$ gpupgrade revert
Revert in progress.

Stopping target cluster...                                         [COMPLETE]
Restoring source cluster...                                        [COMPLETE]
Deleting primary segment data directories...                       [COMPLETE]
Deleting master data directory...                                  [COMPLETE]
Archiving log directories...                                       [COMPLETE]
Deleting state directories on the segments...                      [COMPLETE]
Starting source cluster...                                         [COMPLETE]
Stopping hub and agents...                                         [COMPLETE]
Deleting master state directory...                                 [COMPLETE]

Revert completed successfully.

Reverted to source cluster version <old-version>.

The source cluster is now running. The PGPORT is <sourcePort> and the
MASTER_DATA_DIRECTORY is <sourceMasterDataDir>

The gpupgrade logs can be found on the master and segment hosts in
/home/gpadmin/gpAdminLogs/gpupgrade-<upgradeID>-<timestamp>

NEXT ACTIONS
------------
To restart the upgrade, run "gpupgrade initialize" again.

To use the reverted cluster, you must recreate any tables, indexes, and/or
roles that were dropped or altered to pass the pg_upgrade checks.

Archived Log Files

The gpupgrade revert command archives the log files in the $GPADMIN_HOME/gpAdminLogs/gpupgrade directory by renaming that directory with the upgrade ID and a timestamp, in the format $GPADMIN_HOME/gpAdminLogs/gpupgrade-<upgradeID>-<timestamp>. For example: $GPADMIN_HOME/gpAdminLogs/gpupgrade-5FIRuZh1v5Q-2020-07-20T23:45.
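As a quick sanity check of that naming scheme, a small shell function (hypothetical, not part of gpupgrade) can compose the archive path from an upgrade ID and timestamp:

```shell
# Hypothetical helper (not part of gpupgrade): compose the archived log
# directory name in the gpupgrade-<upgradeID>-<timestamp> format.
archive_log_dir() {
  printf '%s/gpAdminLogs/gpupgrade-%s-%s\n' "$GPADMIN_HOME" "$1" "$2"
}

GPADMIN_HOME=/home/gpadmin
archive_log_dir 5FIRuZh1v5Q 2020-07-20T23:45
# prints /home/gpadmin/gpAdminLogs/gpupgrade-5FIRuZh1v5Q-2020-07-20T23:45
```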

The directory contains these log files:

  • gpupgrade_cli_YYYYMMDD.log - logs a message when any gpupgrade command is executed on the given date.
  • initialize_YYYYMMDD.log - output from each execution of the gpupgrade initialize command on the given date.
  • execute_YYYYMMDD.log - output from each execution of the gpupgrade execute command on the given date.
  • finalize_YYYYMMDD.log - output from each execution of the gpupgrade finalize command on the given date.
  • revert_YYYYMMDD.log - output from each execution of the gpupgrade revert command on the given date.

Next Steps

When the gpupgrade revert command has completed successfully, run gpupgrade-migration-sql-executor.bash post-revert. Then undo any manual or script-based changes that were made in the source cluster in preparation for the upgrade process.

To fully restore and utilize the source cluster, recreate any dropped database objects, and reinstall any database extensions or libraries your cluster requires.