gpupgrade Execute Phase

The gpupgrade execute command transforms the source Greenplum Database system to be compatible with the target Greenplum Database software. It updates the master segment instance, copies data and configuration files to the target cluster, and upgrades the primary segment instances. When gpupgrade execute completes, the target cluster is running and available for you to test.

The source standby master and mirror segment instances are unchanged until you run the gpupgrade finalize command.

Perform the execute phase during scheduled downtime. Give users sufficient notice that the Greenplum Database cluster will be offline for an extended period: send a maintenance notice a week or more before you plan to start the execute phase, and then a reminder notice before you begin.

The following table summarizes the cluster state before and after gpupgrade execute:

             Before Execute                     After Execute
             Source   Target                    Source   Target
Master       UP       Initialized but DOWN      DOWN     UP and populated
Standby      UP       Nonexistent               DOWN     Nonexistent
Primaries    UP       Initialized but DOWN      DOWN     UP and populated
Mirrors      UP       Nonexistent               DOWN     Nonexistent

Preparing for Execute

You can run gpupgrade execute after the gpupgrade Initialize Phase is finished.

  • Ensure you are within an agreed downtime window. While gpupgrade execute runs, the source Greenplum Database cluster is unavailable. Because the execute phase can take a long time to complete, start gpupgrade execute only during scheduled downtime.

  • Check for sufficient disk space on the master host and on all segment hosts. gpupgrade execute will not run with less than 60% available disk space on each host in copy mode, or less than 20% in link mode.
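As a pre-flight check, you can confirm free space yourself before the downtime window opens. The script below is a minimal sketch, not gpupgrade's own check; the mount point and the helper name are illustrative, and in practice you would run it (for example over SSH) against the data directories on every host in the cluster.

```shell
# Report whether a mount point meets a required available-space percentage.
# Illustrative sketch only; gpupgrade performs its own disk space check.
check_free_space() {
    local mount="$1" need="$2"
    local used avail
    used=$(df -P "$mount" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    avail=$((100 - used))
    if [ "$avail" -lt "$need" ]; then
        echo "WARNING: only ${avail}% available on ${mount} (need ${need}%)"
    else
        echo "OK: ${avail}% available on ${mount} (need ${need}%)"
    fi
}

check_free_space / 60    # copy mode threshold
check_free_space / 20    # link mode threshold
```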

Running the gpupgrade execute Command

Log in to the master host as the gpadmin user and run the gpupgrade execute command.

$ gpupgrade execute

The utility displays a summary message and waits for user confirmation before proceeding:

You are about to run the "execute" command for a major-version upgrade of Greenplum.
This should be done only during a downtime window.


You will still have the opportunity to revert the cluster to its original state
after this step.

WARNING: Do not perform operations on the source cluster until gpupgrade is
finalized or reverted.

Continue with gpupgrade execute?  Yy|Nn:

gpupgrade displays progress as it executes the upgrade tasks:

Execute in progress.

Stopping source cluster...                                         [COMPLETE]
Upgrading master...                                                [COMPLETE]
Copying master catalog to primary segments...                      [COMPLETE]
Upgrading primary segments...                                      [COMPLETE]
Starting target cluster...                                         [COMPLETE]

Execute completed successfully.

The target cluster is now running. You may now run queries against the target
database and perform any other validation desired prior to finalizing your upgrade.
PGPORT: <target-port>
MASTER_DATA_DIRECTORY: <target-master-dir>

WARNING: If any queries modify the target database prior to gpupgrade finalize,
it will be inconsistent with the source database.

If you are satisfied with the state of the cluster, run "gpupgrade finalize"
to proceed with the upgrade.

To return the cluster to its original state, run "gpupgrade revert".

The status of each step can be COMPLETE, FAILED, SKIPPED, or IN PROGRESS. SKIPPED indicates that the command has been run before and the step has already been executed.

When gpupgrade execute has completed successfully, gpupgrade reports on the state of the source and target clusters and their master listen ports and data directories. In summary, gpupgrade performs the following tasks:

  • Stops the source cluster.
  • Runs pg_upgrade to upgrade the master instance on the target.
  • Re-runs the pg_upgrade consistency checks. You can see the pg_upgrade output when running execute in verbose mode.

    $ gpupgrade execute --verbose
    Upgrading master...                                                [IN PROGRESS]
    Performing Consistency Checks
    Checking cluster versions                                   ok
    Checking database user is a superuser                       ok
    Checking database connection settings                       ok
    Checking for prepared transactions                          ok
    Checking for reg* system OID user data types                ok
    Checking for contrib/isn with bigint-passing mismatch       ok

    See pg_upgrade Consistency Checks for more information.

  • Copies the master catalog to all primary segments on the target cluster.

  • Runs pg_upgrade to upgrade the primary segment instances in parallel.

  • Starts the target cluster.

Connecting to the Target Cluster

The target Greenplum Database cluster is running with new, temporary connection parameters, which you must specify when you connect to the cluster. The output of the gpupgrade execute command shows the values for the MASTER_DATA_DIRECTORY and PGPORT environment variables.

The MASTER_DATA_DIRECTORY directory name is the target cluster master directory name modified by inserting a hash code. See Target Cluster Directories for more information about the target cluster directory names.
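As an illustration of the naming scheme, the target directory name can be derived from the source name by inserting the hash code before the content ID suffix. The hash value below is a made-up placeholder, not one gpupgrade would generate.

```shell
# Build the target master directory name by inserting a hash code into
# the source directory name. The hash here is a placeholder.
src="/data/master/gpseg-1"
hash="AAAAAAAAAA"
tgt="${src%-1}.${hash}.-1"
echo "$tgt"    # /data/master/gpseg.AAAAAAAAAA.-1
```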

The default master listen port for the target cluster is 50432. If you changed the temp_port_range from the default 50432-65535 in the gpupgrade initialize configuration file, the master port will be the first port in the list you specified.
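For reference, a hedged example of the relevant line in a gpupgrade initialize configuration file, assuming the name = value format; with this setting the target master would listen on port 6000 during the execute phase. The range shown is illustrative.

```
# Illustrative excerpt from a gpupgrade initialize configuration file.
temp_port_range = 6000-6100
```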

Source the environment file in the target Greenplum Database installation directory to set the path and other environment variables for the target version.

This example sets the environment to connect to the target cluster and runs the gpstate utility:

$ export MASTER_DATA_DIRECTORY="/data/master/gpseg.AAAAAAAAAA.-1"
$ export PGPORT=50432
$ source <target-gpdb-install-dir>/
$ gpstate

Running gpstate against the target cluster at this point should show active master and primary segments, but the master standby and mirror segments are not yet configured. The standby master and mirror segments are upgraded when you run gpupgrade finalize.

To access the source cluster again after you have changed the environment variables, you must either reset the variables to their original values or log out and log in again to allow the start-up scripts to set the variables back to the values for the source cluster.
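A minimal sketch of that approach: capture the source cluster's values before overriding them, then restore them when you want to point your session back at the source cluster. The port and directory are the illustrative values shown above.

```shell
# Save the source cluster settings before switching to the target.
SOURCE_PGPORT="$PGPORT"
SOURCE_MDD="$MASTER_DATA_DIRECTORY"

# Point the session at the target cluster.
export PGPORT=50432
export MASTER_DATA_DIRECTORY="/data/master/gpseg.AAAAAAAAAA.-1"

# Later, restore the saved values to reach the source cluster again.
export PGPORT="$SOURCE_PGPORT"
export MASTER_DATA_DIRECTORY="$SOURCE_MDD"
```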

Troubleshooting the Execute Phase

The gpupgrade execute hub process runs on the Greenplum Database master host and logs messages to the gpAdminLogs/gpupgrade/execute.log file in the gpadmin user’s home directory.

Failed to connect to the upgrade hub
You must run gpupgrade initialize before you can run gpupgrade execute. If you already ran gpupgrade initialize, try running gpupgrade restart-services to restart the hub and agent processes.
Make sure that gpupgrade is installed at the same path on all hosts in the cluster.

Could not create the ~gpadmin/gpAdminLogs/gpupgrade/execute.log file
Make sure you are logged in as gpadmin and that all files in the .gpupgrade and gpAdminLogs directories are owned by gpadmin and are writable by gpadmin.

Failed to start an agent on a segment host
Check the ~/.gpupgrade/start-agents/failed file for a list of segments with failed agent processes.
Check that the segment hosts are running and the gpadmin user can log in with SSH.
Make sure that gpupgrade is installed in the same location on all hosts in the Greenplum Database cluster.
The gpupgrade agent processes listen on port 6416. Stop any application using port 6416 on any host in the cluster.
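One way to check whether something is already listening on the agent port is a simple bash TCP probe; this is a generic sketch (the port number comes from the text above, the helper name is made up), run on each segment host.

```shell
# Return success if something accepts TCP connections on the given
# local port. Uses bash's /dev/tcp pseudo-device.
port_in_use() {
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 6416; then
    echo "port 6416 is in use; stop that application first"
else
    echo "port 6416 is free"
fi
```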

The gpupgrade_hub process fails to start or crashes
Check the ~gpadmin/gpAdminLogs/gpupgrade/execute.log file for messages that identify the problem causing the failure.
The gpupgrade hub process listens on port 7527. Stop any other application that is using port 7527 on the master host.

Disk full on master or segment host
Delete unneeded files on affected hosts and run gpupgrade execute again.

Target cluster fails to start
Check the gpinitsystem and gpstart log files in the ~/gpAdminLogs directory.
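A sketch of scanning the newest gpstart log for obvious failures; the ERROR/FATAL patterns are generic log conventions, not an exhaustive list.

```shell
# Find the most recent gpstart log and print any ERROR/FATAL lines.
log_dir="$HOME/gpAdminLogs"
latest=$(ls -t "$log_dir"/gpstart_*.log 2>/dev/null | head -1)
if [ -n "$latest" ]; then
    grep -iE 'error|fatal' "$latest" || echo "no errors found in $latest"
else
    echo "no gpstart logs found in $log_dir"
fi
```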

Next Steps

When the gpupgrade execute command has completed successfully, you must test the upgraded cluster and decide whether to finalize the upgrade (see gpupgrade Finalize Phase) or return to the source Greenplum Database version (see gpupgrade Revert).

While you test the upgraded cluster, do not change the database. Any changes made to the target cluster during testing persist, and they make the target inconsistent with the source if you decide to finalize the upgrade.