Pivotal Greenplum 6.7 Release Notes
This document contains pertinent release information about Pivotal Greenplum Database 6.7 releases. For previous versions of the release notes for Greenplum Database, go to Pivotal Greenplum Database Documentation. For information about Greenplum Database end of life, see Pivotal Greenplum Database end of life policy.
Pivotal Greenplum 6 software is available for download from the Pivotal Greenplum page on Pivotal Network.
Pivotal Greenplum 6 is based on the open source Greenplum Database project code.
Release Date: 2020-04-30
- Version 6.7.1 updates PostGIS to version 2.5.4, which removes several previous limitations. See Geospatial Analytics for more information.
- The Greenplum R client is no longer considered a Beta feature.
Pivotal Greenplum 6.7.1 resolves these issues:
- n/a - MADlib
- In Greenplum 6.7.0 the MADlib download files that were originally provided, madlib-1.17.0+2-gp6-rhel7-x86_64.tar.gz and madlib-1.17.0+2-gp6-rhel6-x86_64.tar.gz, contained MADlib version 1.16 instead of version 1.17. This is resolved in Greenplum 6.7.1, and in Greenplum 6.7.0 with the newly-provided files madlib-1.17.0+3-gp6-rhel7-x86_64.tar.gz and madlib-1.17.0+3-gp6-rhel6-x86_64.tar.gz.
- 9790 - Server
- A crash could occur when performing a SELECT query against a column-oriented table, when the table was created using the WITH NO DATA clause. The problem occurred because the WITH clause options were not correctly added to the pg_attribute_encoding table. This problem has been resolved.
- 30499 - Server: Execution
- Fixed a memory leak that occurred when executing CHECKPOINT commands.
- 30559 - Query Optimizer
- Queries that contain an IN clause with a large number of constants took a long time to generate a query plan. Most of the time was spent estimating the cardinality of the IN clause predicate. The cardinality estimation algorithm has been enhanced and significantly reduces the cardinality estimation time for the specified type of query.
- 30579 - Interconnect
- In some cases during query execution, the query hung with the query dispatcher (QD) waiting for the query executor (QE) on a few segment instances to complete. This issue is resolved.
- 30844 - gpreload
- gpreload returned the error "more than one row returned" when attempting to reload a table while a view with the same name existed in a different schema of the database. This issue is resolved.
- 172163076 - Server
- A subtransaction would incorrectly use 1-phase commit instead of 2-phase commit if \set ON_ERROR_ROLLBACK interactive was enabled in a client's .psqlrc file. This problem has been resolved.
- 172324858, 9891 - MPP: Locking, Signals, Processes
- In some cases, Greenplum Database did not manage snapshots correctly when processing concurrent distributed transactions. This caused a concurrent transaction to access a distributed log file that was no longer available and generated the error message Could not open file "pg_distributedlog/<file-name>": No such file or directory. This issue is resolved.
- 172348849 - Postgres Planner
- Some queries that contain a UNION ALL combining the results of a SELECT command that uses a replicated table with another SELECT command returned the error ERROR: could not build Motion path. This issue is resolved.
- 172284550, 9823 - ALTER DATABASE
- The ALTER DATABASE...FROM CURRENT command did not set a server configuration parameter for a database. This issue is resolved.
Release Date: 2020-04-17
Pivotal Greenplum 6.7.0 is a minor release that includes changed features and resolves several issues.
Greenplum Database 6.7.0 includes these new and changed features:
- Greenplum Database 6.7 introduces the new gp_resource_group_queuing_timeout server configuration parameter. When the resource group-based resource management scheme is active, gp_resource_group_queuing_timeout specifies the maximum amount of time a transaction waits for execution in a queue on a resource group before Greenplum Database cancels the transaction. By default, queued transactions in a resource group can wait indefinitely.
- Greenplum Database 6.7 includes MADlib version 1.17, which introduces new Deep Learning features, k-Means clustering, and other improvements and bug fixes. See the MADlib page for additional information and Release Notes. Note: In Greenplum 6.7.0 the MADlib download files that were originally provided, madlib-1.17.0+2-gp6-rhel7-x86_64.tar.gz and madlib-1.17.0+2-gp6-rhel6-x86_64.tar.gz, contained MADlib version 1.16 instead of version 1.17. This is resolved in Greenplum 6.7.1, and in Greenplum 6.7.0 with the newly-provided files madlib-1.17.0+3-gp6-rhel7-x86_64.tar.gz and madlib-1.17.0+3-gp6-rhel6-x86_64.tar.gz.
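The new queuing timeout described above behaves like other server configuration parameters. The sketch below is illustrative only: it assumes the value is specified in milliseconds and that the parameter can be set at the session level (cluster-wide, it would typically be set with gpconfig instead).

```sql
-- Hypothetical example: cancel transactions that wait in a resource
-- group queue for more than 10 minutes. The value is assumed to be in
-- milliseconds; the default of 0 means queued transactions wait
-- indefinitely.
SET gp_resource_group_queuing_timeout = 600000;
SHOW gp_resource_group_queuing_timeout;
```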
Pivotal Greenplum 6.7.0 resolves these issues:
- 8539 - Server
- Using NOWAIT in a SELECT FOR UPDATE statement could result in the error, ERROR: relation "<name>" does not exist, because locking was not correctly handled for the NOWAIT clause. This problem has been resolved. Note, however, that NOWAIT only affects how the SELECT statement obtains row-level locks. A SELECT FOR UPDATE NOWAIT statement will always wait for the required table-level lock; it behaves as if NOWAIT was omitted.
- 9089 - Server
- Fixed a problem where Greenplum Database failed to truncate an append-only, column-oriented table if the CREATE TABLE and TRUNCATE statements were executed in the same transaction.
- 30305 - Resource Groups
- A transaction may be queued for execution on a resource group for an extended period of time, particularly when the resource group reached its concurrent transaction limit. This could prevent queries initiated by Greenplum Database superusers from executing. Greenplum Database 6.7 resolves this issue by introducing the gp_resource_group_queuing_timeout server configuration parameter, which specifies the maximum amount of time a queued transaction waits for execution in a resource group before Greenplum cancels the transaction.
- 30531 - Query Optimizer
- An out of memory error occurred when running some queries that contain joins that perform a comparison operation on citext data. The error occurred because the query fell back to the Postgres Planner. This issue is resolved; the query no longer falls back to the Postgres Planner and is executed using GPORCA.
- 30536 - PL/pgSQL
- In a PL/pgSQL procedure, output text from a RAISE NOTICE statement was not displayed correctly if the text contained a newline (line feed) character. Only the text before the newline character was displayed. This issue is resolved.
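The NOWAIT semantics noted in the 8539 fix above can be illustrated as follows; the table and data are hypothetical and exist only for this example.

```sql
-- Hypothetical table to illustrate the 8539 note.
CREATE TABLE accounts (id int, balance numeric) DISTRIBUTED BY (id);
INSERT INTO accounts VALUES (1, 100.00);

BEGIN;
-- NOWAIT affects only row-level locks: if another session already holds
-- a conflicting row lock, this statement errors immediately instead of
-- waiting...
SELECT * FROM accounts WHERE id = 1 FOR UPDATE NOWAIT;
-- ...but the statement still waits for the required table-level lock
-- (for example, behind a concurrent ALTER TABLE), as if NOWAIT were
-- omitted.
COMMIT;
```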
Upgrading from Greenplum 6.x to Greenplum 6.7
See Upgrading from an Earlier Greenplum 6 Release to upgrade your existing Greenplum 6.x software to Greenplum 6.7.0.
Deprecated features will be removed in a future major release of Greenplum Database. Pivotal Greenplum 6.x deprecates:
- The analyzedb option --skip_root_stats (deprecated).
If the option is specified, a warning is issued stating that the option will be ignored.
- The server configuration parameter gp_statistics_use_fkeys (deprecated since 6.2).
- The following PXF configuration properties (deprecated since 6.2):
- The PXF_USER_IMPERSONATION, PXF_PRINCIPAL, and PXF_KEYTAB settings in the pxf-env.sh file. You can use the pxf-site.xml file to configure Kerberos and impersonation settings for your new Hadoop server configurations.
- The pxf.impersonation.jdbc property setting in the jdbc-site.xml file. You can use the pxf.service.user.impersonation property to configure user impersonation for a new JDBC server configuration.
- The server configuration parameter gp_ignore_error_table (deprecated since 6.0).
To avoid a Greenplum Database syntax error, set the value of this parameter to true when you run applications that execute CREATE EXTERNAL TABLE or COPY commands that include the now removed Greenplum Database 4.3.x INTO ERROR TABLE clause.
- Specifying => as an operator name in the CREATE OPERATOR command (deprecated since 6.0).
- The Greenplum external table C API (deprecated since 6.0).
Any developers using this API are encouraged to use the new Foreign Data Wrapper API in its place.
- Commas placed between a SUBPARTITION TEMPLATE clause and its corresponding SUBPARTITION BY clause, and between consecutive SUBPARTITION BY clauses in a CREATE TABLE command (deprecated since 6.0).
Using this undocumented syntax will generate a deprecation warning message.
- The timestamp format YYYYMMDDHH24MISS (deprecated since 6.0).
This format could not be parsed unambiguously in previous Greenplum Database releases, and is not supported in PostgreSQL 9.4.
- The createlang and droplang utilities (deprecated since 6.0).
- The pg_resqueue_status system view (deprecated since 6.0).
Use the gp_toolkit.gp_resqueue_status view instead.
- The GLOBAL and LOCAL modifiers when creating a temporary table with the CREATE TABLE and CREATE TABLE AS commands (deprecated since 6.0).
These keywords are present for SQL standard compatibility, but have no effect in Greenplum Database.
- The Greenplum Platform Extension Framework (PXF) HDFS profile names for the Text, Avro, JSON, Parquet, and SequenceFile data formats (deprecated since 5.16).
Refer to Connectors, Data Formats, and Profiles in the PXF Hadoop documentation for more information.
- Using WITH OIDS or oids=TRUE to assign an OID system column when creating or altering a table (deprecated since 6.0).
- Allowing superusers to specify the SQL_ASCII encoding regardless of the locale settings (deprecated since 6.0).
This choice may result in misbehavior of character-string functions when data that is not encoding-compatible with the locale is stored in the database.
- The @@@ text search operator (deprecated since 6.0).
This operator is currently a synonym for the @@ operator.
- The unparenthesized syntax for option lists in the VACUUM command
(deprecated since 6.0).
This syntax requires that the options to the command be specified in a specific order.
- The plain pgbouncer authentication type (auth_type = plain) (deprecated since 4.x).
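As an example of the VACUUM deprecation above: the unparenthesized option list requires the keywords in a fixed order, while the parenthesized form accepts them in any order. The table name below is hypothetical.

```sql
-- Deprecated: unparenthesized options must appear in a fixed order
-- (e.g. FULL before VERBOSE).
VACUUM FULL VERBOSE mytable;

-- Preferred: parenthesized option list, order-independent.
VACUUM (VERBOSE, FULL) mytable;
```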
Migrating Data to Greenplum 6
See Migrating Data from Greenplum 4.3 or 5 for guidelines and considerations for migrating existing Greenplum data to Greenplum 6, using standard backup and restore procedures.
Known Issues and Limitations
Pivotal Greenplum 6 has these limitations:
- Upgrading a Greenplum Database 4 or 5 release, or Greenplum 6 Beta release, to Pivotal Greenplum 6 is not supported.
- MADlib, GPText, and PostGIS are not yet provided for installation on Ubuntu systems.
- Greenplum 6 is not supported for installation on DCA systems.
- Greenplum for Kubernetes is not yet provided with this release.
The following table lists key known issues in Pivotal Greenplum 6.x.
|Issue||Category||Description|
|10216||ALTER TABLE, ALTER DOMAIN||In some cases, heap table data is lost when performing concurrent ALTER TABLE or ALTER DOMAIN commands where one command alters a table column and the other rewrites or redistributes the table data. For example, performing concurrent ALTER TABLE commands where one command changes a column data type from int to text might cause data loss. This issue might also occur when altering a table column during the data distribution phase of a Greenplum system expansion. Greenplum Database did not correctly capture the current state of the table during command execution. This issue is resolved in Pivotal Greenplum 6.9.0.|
|N/A||Spark Connector||This version of Greenplum is not compatible with Greenplum-Spark Connector versions earlier than version 1.7.0, due to a change in how Greenplum handles distributed transaction IDs.|
|N/A||PXF||Starting in 6.x, Greenplum does not bundle cURL and instead loads the system-provided library. PXF requires cURL version 7.29.0 or newer. The officially-supported cURL for the CentOS 6.x and Red Hat Enterprise Linux 6.x operating systems is version 7.19.*. Greenplum Database 6 does not support running PXF on CentOS 6.x or RHEL 6.x due to this limitation. Workaround: Upgrade the operating system of your Greenplum Database 6 hosts to CentOS 7+ or RHEL 7+, which provides a cURL version suitable to run PXF.|
|29703||Loading Data from External Tables||Due to limitations in the Greenplum Database external table framework, Greenplum Database cannot log certain types of errors that it encounters while loading data. Workaround: Clean the input data before loading it into Greenplum Database.|
|30594||Resource Management||Resource queue-related statistics may be inaccurate in certain cases. Pivotal recommends that you use the resource group resource management scheme that is available in Greenplum 6.|
|30522||Logging||Greenplum Database may write a FATAL message to the standby master or mirror log stating that the database system is in recovery mode when the instance is synchronizing with the master and Greenplum attempts to contact it before the operation completes. Ignore these messages and use gpstate -f output to determine if the standby successfully synchronized with the Greenplum master; the command returns Sync state: sync if it is synchronized.|
|30537||Postgres Planner||The Postgres Planner generates a very large query plan that causes out of memory issues for the following type of CTE (common table expression) query: the WITH clause of the CTE contains a partitioned table with a large number of partitions, and the WITH reference is used in a subquery that joins another partitioned table. Workaround: If possible, use the GPORCA query optimizer. With the server configuration parameter optimizer=on, Greenplum Database attempts to use GPORCA for query planning and optimization when possible and falls back to the Postgres Planner when GPORCA cannot be used. Also, the specified type of query might require a long time to complete.|
|N/A||PXF||pxf [cluster] init may fail to recognize a new JAVA_HOME setting when the value is provided via the shell environment. Workaround: Edit $PXF_CONF/conf/pxf-env.sh and manually set JAVA_HOME to the new value, run pxf cluster sync to synchronize this configuration change across the Greenplum cluster, and then re-run pxf [cluster] init.|
|170824967||gpfdists||For Greenplum Database 6.x, a command that accesses an external table that uses the gpfdists protocol fails if the external table does not use an IP address when specifying a host system in the LOCATION clause of the external table definition.|
|n/a||Materialized Views||By default, certain gp_toolkit views do not display data for materialized views. If you want to include this information in gp_toolkit view output, you must redefine a gp_toolkit internal view as described in Including Data for Materialized Views.|
|168689202||PXF||PXF fails to run any query on Java 11 that specifies a Hive* profile due to this Hive known issue: ClassCastException when initializing HiveMetaStoreClient on JDK10 or newer. Workaround: Run PXF on Java 8 or use the PXF JDBC Connector to access Hive.|
|168957894||PXF||The PXF Hive Connector does not support using the Hive* profiles to access Hive transactional tables. Workaround: Use the PXF JDBC Connector to access Hive.|
|169200795||Greenplum Stream Server||When loading Kafka data into Greenplum Database in UPDATE and MERGE modes, GPSS requires that a MAPPING exist for each column name identified in the MATCH_COLUMNS and UPDATE_COLUMNS lists.|
|170202002||Greenplum-Kafka Integration||Updating the METADATA:SCHEMA property and restarting a previously-run load job could cause gpkafka to re-read Kafka messages published to the topic, and load duplicate messages into Greenplum Database.|
|168548176||gpbackup||When using gpbackup to back up a Greenplum Database 5.7.1 or earlier 5.x release with resource groups enabled, gpbackup returns a column not found error for t6.value AS memoryauditor.|
|164791118||PL/R||PL/R cannot be installed using the deprecated createlang utility; the installation fails with the error createlang: language installation failed: ERROR: no schema has been selected to create in. Workaround: Use CREATE EXTENSION to install PL/R, as described in the documentation.|
|N/A||Greenplum Client/Load Tools on Windows||The Greenplum Database client and load tools on Windows have not been tested with Active Directory Kerberos authentication.|
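The workaround for issue 30537 above relies on the optimizer server configuration parameter; a minimal sketch at the session level (the query shown is a placeholder):

```sql
-- Enable GPORCA for the session; Greenplum falls back to the Postgres
-- Planner automatically when GPORCA cannot plan a query.
SET optimizer = on;
SHOW optimizer;

-- EXPLAIN output indicates which optimizer produced the plan.
EXPLAIN SELECT 1;
```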
Differences Compared to Open Source Greenplum Database
- Product packaging and installation script
- Support for QuickLZ compression. QuickLZ compression is not provided in the open source version of Greenplum Database due to licensing restrictions.
- Support for data connectors:
- Greenplum-Spark Connector
- Greenplum-Informatica Connector
- Greenplum-Kafka Integration
- Greenplum Stream Server
- Data Direct ODBC/JDBC Drivers
- gpcopy utility for copying or migrating objects between Greenplum systems
- Support for managing Greenplum Database using Pivotal Greenplum Command Center
- Support for full text search and text analysis using Pivotal GPText
- Greenplum backup plugin for DD Boost
- Backup/restore storage plugin API (Beta)