Pivotal Greenplum 6.6 Release Notes
This document contains pertinent release information about Pivotal Greenplum Database 6.6 releases. For previous versions of the release notes for Greenplum Database, go to Pivotal Greenplum Database Documentation. For information about Greenplum Database end of life, see Pivotal Greenplum Database end of life policy.
Pivotal Greenplum 6 software is available for download from the Pivotal Greenplum page on Pivotal Network.
Pivotal Greenplum 6 is based on the open source Greenplum Database project code.
Release 6.6.0
Release Date: 2020-04-06
Pivotal Greenplum 6.6.0 is a minor release that includes new and changed features and resolves several issues.
Features
Greenplum Database 6.6.0 includes these new and changed features:
- For the CREATE EXTERNAL TABLE command, the LOG ERRORS clause now supports the PERSISTENTLY keyword. The LOG ERRORS clause logs information about external table data rows with formatting errors. The error log data is stored internally. When you specify LOG ERRORS PERSISTENTLY, the log data persists after the external table is dropped.
If you use the PERSISTENTLY keyword, you must install the functions that manage the persistent error log information.
For information about the error log information and built-in functions for viewing and managing error log information, see CREATE EXTERNAL TABLE.
- PXF version 5.11.2 is included, which introduces these changes:
- PXF no longer validates the JDBC BATCH_SIZE write option during a read operation.
- PXF bundles a newer jackson-databind library.
- PXF removes references to the unused pxf-public.classpath file. This in turn removes spurious WARNING: Failed to read classpath file ... log messages.
- PXF now bundles Tomcat version 7.0.100.
- Greenplum Database 6.6 includes MADlib version 1.17, which introduces new Deep Learning features, k-Means clustering, and other improvements and bug fixes. See the MADlib 1.17 Release Notes for a complete list of changes.
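The new PERSISTENTLY keyword extends the existing single-row error handling syntax. A minimal sketch of its use follows; the table name, host, and file path are illustrative and not taken from these release notes:

```sql
-- Hypothetical external table definition. LOG ERRORS PERSISTENTLY keeps the
-- internally stored error log rows even after the external table is dropped.
CREATE EXTERNAL TABLE ext_expenses (name text, amount float4)
LOCATION ('gpfdist://etlhost:8081/expenses.csv')
FORMAT 'CSV'
LOG ERRORS PERSISTENTLY SEGMENT REJECT LIMIT 50;
```

As with plain LOG ERRORS, the clause is used together with SEGMENT REJECT LIMIT, which controls how many malformed rows are tolerated before the load fails.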
Resolved Issues
Pivotal Greenplum 6.6.0 resolves these issues:
- 30483 - Query Optimizer
- A query that specified multiple constants in an IN clause generated a large number of spill files and returned the error workfile per query size limit exceeded when GPORCA incorrectly normalized a histogram that was not well-defined. This issue is resolved.
- 30488 - DDL
- For some append-optimized partitioned tables, performance was poor when adding a column to the table with the ALTER TABLE... ADD COLUMN command because the command performed a full table rewrite. Now only data corresponding to the new column is rewritten.
- 30518 - Query Optimizer
- A query that specified an aggregate function such as min() or count() that was invoked on a citext-type column failed with the error cache lookup failed for function 0 because GPORCA incorrectly generated a multi-stage aggregate for the query. This issue is resolved.
- 30525 - Logging
- In some cases, Greenplum Database encountered a segmentation fault and rotated the log file early when the logging level was set to WARNING or less severe and Greenplum attempted to write to the alert log file after it failed to open the file. This issue is resolved.
- 171506474 - COPY
- When a COPY FROM ON SEGMENT command copied data into an append-only table, the command did not update the append-only table metadata tupcount (the number of tuples on a segment, including invisible tuples) and modcount (the number of data modification operations performed). This issue is resolved.
- n/a - gpperfmon
- The Ubuntu build of Greenplum Database 6.5.0 did not include the gpperfmon database, which is required for using Greenplum Command Center. This issue is resolved in version 6.6.0.
Upgrading to Greenplum 6.6.0
See Upgrading from an Earlier Greenplum 6 Release to upgrade your existing Greenplum 6.x software to Greenplum 6.6.0.
Deprecated Features
Deprecated features will be removed in a future major release of Greenplum Database. Pivotal Greenplum 6.x deprecates:
- The analyzedb option --skip_root_stats (deprecated since 6.2).
If the option is specified, a warning is issued stating that the option will be ignored.
- The server configuration parameter gp_statistics_use_fkeys (deprecated since 6.2).
- The following PXF configuration properties (deprecated since 6.2):
- The PXF_USER_IMPERSONATION, PXF_PRINCIPAL, and PXF_KEYTAB settings in the pxf-env.sh file. You can use the pxf-site.xml file to configure Kerberos and impersonation settings for your new Hadoop server configurations.
- The pxf.impersonation.jdbc property setting in the jdbc-site.xml file. You can use the pxf.service.user.impersonation property to configure user impersonation for a new JDBC server configuration.
- The server configuration parameter gp_ignore_error_table (deprecated since 6.0).
To avoid a Greenplum Database syntax error, set the value of this parameter to true when you run applications that execute CREATE EXTERNAL TABLE or COPY commands that include the now removed Greenplum Database 4.3.x INTO ERROR TABLE clause.
- Specifying => as an operator name in the CREATE OPERATOR command (deprecated since 6.0).
- The Greenplum external table C API (deprecated since 6.0).
Any developers using this API are encouraged to use the new Foreign Data Wrapper API in its place.
- Commas placed between a SUBPARTITION TEMPLATE clause and its corresponding SUBPARTITION BY clause, and between consecutive SUBPARTITION BY clauses in a CREATE TABLE command (deprecated since 6.0).
Using this undocumented syntax will generate a deprecation warning message.
- The timestamp format YYYYMMDDHH24MISS (deprecated since 6.0).
This format could not be parsed unambiguously in previous Greenplum Database releases, and is not supported in PostgreSQL 9.4.
- The createlang and droplang utilities (deprecated since 6.0).
- The pg_resqueue_status system view (deprecated since 6.0).
Use the gp_toolkit.gp_resqueue_status view instead.
- The GLOBAL and LOCAL modifiers when creating a temporary table with the CREATE TABLE and CREATE TABLE AS commands (deprecated since 6.0).
These keywords are present for SQL standard compatibility, but have no effect in Greenplum Database.
- The Greenplum Platform Extension Framework (PXF) HDFS profile names for the Text, Avro, JSON, Parquet, and SequenceFile data formats (deprecated since 5.16).
Refer to Connectors, Data Formats, and Profiles in the PXF Hadoop documentation for more information.
- Using WITH OIDS or oids=TRUE to assign an OID system column when creating or altering a table (deprecated since 6.0).
- Allowing superusers to specify the SQL_ASCII encoding regardless of the locale settings (deprecated since 6.0).
This choice may result in misbehavior of character-string functions when data that is not encoding-compatible with the locale is stored in the database.
- The @@@ text search operator (deprecated since 6.0).
This operator is currently a synonym for the @@ operator.
- The unparenthesized syntax for option lists in the VACUUM command (deprecated since 6.0).
This syntax requires that the options to the command be specified in a specific order.
- The PgBouncer plain authentication type (auth_type = plain) (deprecated since 4.x).
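To illustrate the VACUUM deprecation above: the unparenthesized form requires its options in a fixed order, while the parenthesized form accepts them in any order. The table name mytable below is hypothetical:

```sql
-- Deprecated unparenthesized form: options must appear in this fixed order.
VACUUM FULL FREEZE VERBOSE mytable;

-- Preferred parenthesized form: options may appear in any order.
VACUUM (VERBOSE, FULL, FREEZE) mytable;
```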
Migrating Data to Greenplum 6
See Migrating Data from Greenplum 4.3 or 5 for guidelines and considerations for migrating existing Greenplum data to Greenplum 6, using standard backup and restore procedures.
Known Issues and Limitations
Pivotal Greenplum 6 has these limitations:
- Upgrading a Greenplum Database 4 or 5 release, or Greenplum 6 Beta release, to Pivotal Greenplum 6 is not supported.
- MADlib, GPText, and PostGIS are not yet provided for installation on Ubuntu systems.
- Greenplum 6 is not supported for installation on DCA systems.
- Greenplum for Kubernetes is not yet provided with this release.
The following table lists key known issues in Pivotal Greenplum 6.x.
Issue | Category | Description |
---|---|---|
10216 | ALTER TABLE, ALTER DOMAIN | In some cases, heap table data is lost when performing concurrent ALTER TABLE or ALTER DOMAIN commands where one command alters a table column and the other rewrites or redistributes the table data. For example, performing concurrent ALTER TABLE commands where one command changes a column data type from int to text might cause data loss. This issue might also occur when altering a table column during the data distribution phase of a Greenplum system expansion. Greenplum Database did not correctly capture the current state of the table during command execution. This issue is resolved in Pivotal Greenplum 6.9.0. |
N/A | PXF | Starting in 6.x, Greenplum does not bundle cURL and instead loads the system-provided library. PXF requires cURL version 7.29.0 or newer. The officially-supported cURL for the CentOS 6.x and Red Hat Enterprise Linux 6.x operating systems is version 7.19.*. Greenplum Database 6 does not support running PXF on CentOS 6.x or RHEL 6.x due to this limitation. Workaround: Upgrade the operating system of your Greenplum Database 6 hosts to CentOS 7+ or RHEL 7+, which provides a cURL version suitable for running PXF. |
29703 | Loading Data from External Tables | Due to limitations in the Greenplum Database external table framework, Greenplum Database cannot log certain types of errors that it encounters while loading data. Workaround: Clean the input data before loading it into Greenplum Database. |
30522 | Logging | Greenplum Database may write a FATAL message to the standby master or mirror log stating that the database system is in recovery mode when the instance is synchronizing with the master and Greenplum attempts to contact it before the operation completes. Ignore these messages and use gpstate -f output to determine whether the standby successfully synchronized with the Greenplum master; the command returns Sync state: sync if it is synchronized. |
30537 | Postgres Planner | The Postgres Planner generates a very large query plan that causes out-of-memory issues for the following type of CTE (common table expression) query: the WITH clause of the CTE contains a partitioned table with a large number of partitions, and the WITH reference is used in a subquery that joins another partitioned table. Workaround: If possible, use the GPORCA query optimizer. With the server configuration parameter optimizer=on, Greenplum Database attempts to use GPORCA for query planning and optimization when possible and falls back to the Postgres Planner when GPORCA cannot be used. Also, the specified type of query might require a long time to complete. |
171883625 | PXF | pxf [cluster] init may fail to recognize a new JAVA_HOME setting when the value is provided via the shell environment. Workaround: Edit $PXF_CONF/conf/pxf-env.sh and manually set JAVA_HOME to the new value, run pxf cluster sync to synchronize this configuration change across the Greenplum cluster, and then re-run pxf [cluster] init. |
170824967 | gpfdists | For Greenplum Database 6.x, a command that accesses an external table that uses the gpfdists protocol fails if the external table does not use an IP address when specifying a host system in the LOCATION clause of the external table definition. |
n/a | Materialized Views | By default, certain gp_toolkit views do not display data for materialized views. If you want to include this information in gp_toolkit view output, you must redefine a gp_toolkit internal view as described in Including Data for Materialized Views. |
168689202 | PXF | PXF fails to run any query on Java 11 that specifies a Hive* profile due to this Hive known issue: ClassCastException when initializing HiveMetaStoreClient on JDK10 or newer. Workaround: Run PXF on Java 8 or use the PXF JDBC Connector to access Hive. |
168957894 | PXF | The PXF Hive Connector does not support using the Hive* profiles to access Hive transactional tables. Workaround: Use the PXF JDBC Connector to access Hive. |
169200795 | Greenplum Stream Server | When loading Kafka data into Greenplum Database in UPDATE and MERGE modes, GPSS requires that a MAPPING exist for each column name identified in the MATCH_COLUMNS and UPDATE_COLUMNS lists. |
170202002 | Greenplum-Kafka Integration | Updating the METADATA:SCHEMA property and restarting a previously-run load job could cause gpkafka to re-read Kafka messages published to the topic and load duplicate messages into Greenplum Database. |
168548176 | gpbackup | When using gpbackup to back up a Greenplum Database 5.7.1 or earlier 5.x release with resource groups enabled, gpbackup returns a column not found error for t6.value AS memoryauditor. |
164791118 | PL/R | PL/R cannot be installed using the deprecated createlang utility, and displays the error createlang: language installation failed: ERROR: no schema has been selected to create in. Workaround: Use CREATE EXTENSION to install PL/R, as described in the documentation. |
N/A | Greenplum Client/Load Tools on Windows | The Greenplum Database client and load tools on Windows have not been tested with Active Directory Kerberos authentication. |
Differences Compared to Open Source Greenplum Database
- Product packaging and installation script
- Support for QuickLZ compression. QuickLZ compression is not provided in the open source version of Greenplum Database due to licensing restrictions.
- Support for data connectors:
- Greenplum-Spark Connector
- Greenplum-Informatica Connector
- Greenplum-Kafka Integration
- Greenplum Stream Server
- DataDirect ODBC/JDBC Drivers
- gpcopy utility for copying or migrating objects between Greenplum systems
- Support for managing Greenplum Database using Pivotal Greenplum Command Center
- Support for full text search and text analysis using Pivotal GPText
- Greenplum backup plugin for DD Boost
- Backup/restore storage plugin API (Beta)