Reading ORC Data
Use the PXF HDFS connector hdfs:orc profile to read ORC-format data when the data resides in a Hadoop file system. This section describes how to read HDFS files that are stored in ORC format, including how to create and query an external table that references these files in the HDFS data store.

The hdfs:orc profile:
- Reads 1024 rows of data at a time.
- Supports column projection.
- Supports filter pushdown based on file-level, stripe-level, and row-level ORC statistics.
- Does not support complex types.
The hdfs:orc profile currently supports reading scalar data types from ORC files. If the data resides in a Hive table and you want to read complex types, or if the Hive table is partitioned, use the hive:orc profile instead.
Ensure that you have met the PXF Hadoop Prerequisites before you attempt to read data from HDFS.
The Optimized Row Columnar (ORC) file format is a columnar file format that provides a highly efficient way to both store and access HDFS data. ORC format offers improvements over text and RCFile formats in terms of both compression and performance. PXF supports ORC file versions v0 and v1.
ORC is type-aware and specifically designed for Hadoop workloads. ORC files store both the type of, and encoding information for, the data in the file. All columns within a single group of row data (also known as a stripe) are stored together on disk in ORC format files. The columnar nature of the ORC format enables read projection, helping avoid accessing unnecessary columns during a query.
ORC also supports predicate pushdown with built-in indexes at the file, stripe, and row levels, moving the filter operation to the data loading phase.
Refer to the Apache ORC documentation for detailed information about the ORC file format.
To read ORC scalar data types in Greenplum Database, map ORC data values to Greenplum Database columns of the same type. PXF uses the following data type mapping when it reads ORC data:
| ORC Physical Type | ORC Logical Type | PXF/Greenplum Data Type |
|-------------------|------------------|-------------------------|
| Integer | boolean (1 bit) | Boolean |
| Integer | tinyint (8 bit) | Smallint |
| Integer | smallint (16 bit) | Smallint |
| Integer | int (32 bit) | Integer |
| Integer | bigint (64 bit) | Bigint |
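As a quick illustration of this mapping, the following sketch reads a hypothetical ORC file whose schema declares boolean, tinyint, int, and bigint columns. The file path, table name, and column names are examples only and are not part of the sample data set used later in this topic; note that the ORC tinyint column is declared as a Greenplum smallint:

testdb=# CREATE EXTERNAL TABLE int_types_orc(is_active boolean, qty smallint, id integer, visits bigint)
           LOCATION ('pxf://data/pxf_examples/int_types.orc?PROFILE=hdfs:orc')
           FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');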
The PXF HDFS connector hdfs:orc profile supports reading ORC-format HDFS files. Use the following syntax to create a Greenplum Database external table that references a file or directory:

CREATE EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION ('pxf://<path-to-hdfs-file>?PROFILE=hdfs:orc[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import')
The specific keywords and values used in the Greenplum Database CREATE EXTERNAL TABLE command are described below.
| Keyword | Value |
|---------|-------|
| \<path-to-hdfs-file\> | The path to the file or directory in the HDFS data store. |
| SERVER=\<server_name\> | The named server configuration that PXF uses to access the data. PXF uses the default server if not specified. |
| \<custom-option\> | \<custom-option\>s are described below. |
| Read Option | Value Description |
|-------------|-------------------|
| IGNORE_MISSING_PATH | A Boolean value that specifies the action to take when \<path-to-hdfs-file\> is missing or invalid. The default value is false; PXF returns an error in this situation. When the value is true, PXF ignores missing path errors and returns an empty fragment. |
| MAP_BY_POSITION | A Boolean value that, when set to true, specifies that PXF should map an ORC column to a Greenplum Database column by position. The default value is false; PXF maps an ORC column to a Greenplum Database column by name. |
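For example, a table definition that names a non-default PXF server and sets both read options might look like the following sketch. The server name hdp3 and the table name are placeholders, and the file referenced is the sampledata.orc file created in the example later in this topic:

testdb=# CREATE EXTERNAL TABLE sample_orc_opts(location text, month text, num_orders int, total_sales numeric(10,2))
           LOCATION ('pxf://data/pxf_examples/sampledata.orc?PROFILE=hdfs:orc&SERVER=hdp3&IGNORE_MISSING_PATH=true&MAP_BY_POSITION=true')
           FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');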
This example operates on a simple data set that models a retail sales operation. The data includes fields with the following names and types:
| Column Name | Data Type |
|-------------|-----------|
| location | text |
| month | text |
| num_orders | int |
| total_sales | numeric(10,2) |
In this example, you:
- Create a sample data set in CSV format, use the orc-tools JAR utilities to convert the CSV file into an ORC-format file, and then copy the ORC file to HDFS.
- Create a Greenplum Database readable external table that references the ORC file and that specifies the hdfs:orc profile.
- Query the external table.
You must have administrative privileges to both a Hadoop cluster and a Greenplum Database cluster to run the example. You must also have configured a PXF server to access Hadoop.
Create a CSV file named /tmp/sampledata.csv:
hdfsclient$ echo 'Prague,Jan,101,4875.33
Rome,Mar,87,1557.39
Bangalore,May,317,8936.99
Beijing,Jul,411,11600.67' > /tmp/sampledata.csv
Run the orc-tools JAR convert command to convert sampledata.csv to the ORC file /tmp/sampledata.orc; provide the schema to the command:
hdfsclient$ java -jar orc-tools-1.6.2-uber.jar convert /tmp/sampledata.csv \
  --schema 'struct<location:string,month:string,num_orders:int,total_sales:decimal(10,2)>' \
  -o /tmp/sampledata.orc
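Optionally, verify the conversion before copying the file to HDFS; the orc-tools JAR also provides a meta command that prints the file's schema and stripe information (shown here as a sketch, assuming the same JAR version):

hdfsclient$ java -jar orc-tools-1.6.2-uber.jar meta /tmp/sampledata.orc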
Copy the ORC file to HDFS. The following command copies the file to the /data/pxf_examples directory:
hdfsclient$ hdfs dfs -put /tmp/sampledata.orc /data/pxf_examples/
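Optionally, list the directory to confirm that the file landed in HDFS:

hdfsclient$ hdfs dfs -ls /data/pxf_examples/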
Log in to the Greenplum Database master host and connect to a database. This command connects to the database named testdb:
gpadmin@gpmaster$ psql -d testdb
Create an external table named sample_orc that references the /data/pxf_examples/sampledata.orc file on HDFS. This command creates the table with the column names specified in the ORC schema, and uses the default PXF server:
testdb=# CREATE EXTERNAL TABLE sample_orc(location text, month text, num_orders int, total_sales numeric(10,2))
           LOCATION ('pxf://data/pxf_examples/sampledata.orc?PROFILE=hdfs:orc')
           FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
Read the data in the file by querying the sample_orc table:
testdb=# SELECT * FROM sample_orc;
   location    | month | num_orders | total_sales
---------------+-------+------------+-------------
 Prague        | Jan   |        101 |     4875.33
 Rome          | Mar   |         87 |     1557.39
 Bangalore     | May   |        317 |     8936.99
 Beijing       | Jul   |        411 |    11600.67
(4 rows)
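Because the hdfs:orc profile supports column projection and filter pushdown, a query that selects only some columns and filters on a column covered by ORC statistics lets PXF avoid reading the unneeded columns and, where the statistics allow, skip data as well. For example, based on the sample data above, the following query should return only the Prague, Bangalore, and Beijing rows:

testdb=# SELECT location, total_sales FROM sample_orc WHERE num_orders > 100;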