Summarizes tabular data values across two specified fields (dimensions) to create a pivot table containing the summarized (aggregated) data, together with sub-totals for each dimension and the grand total.
The input data can be summarized using one of the following aggregation functions: count, sum, min, max and mean.
To configure the Pivot Table node:
In the GroupForRow property, specify an input field to be used for the first dimension of the summarized data. The unique values contained in this field become the group-by categories that are listed (vertically) in the first column in the output pivot table.
In the GroupForColumn property, specify an input field to be used as the second dimension in the summarized data. The unique values contained in this field become the names of the columns in the pivot table that contain the summarized data.
From the AggregationType property, select the calculation that you want to perform over the field to be aggregated.
In the AggregateOverField property, enter the name of the field that contains the data to be aggregated.
Sub-totals are generated for each row in the table and for each column in the table. A grand total is also generated for the values in the table.
The node summarizes the contents of a single field, using the specified aggregation method and grouping dimensions.
Your input data represents a year's sales. The data contains fields with the following attributes:
- The Quarter in which the order was recognized (for example, Q1 - Q4).
- The Region in which the sale was made (for example, North, South, East, West).
- The Value of the individual order.
For example, you have the following input data set:
You want to use the Pivot Table node to generate a pivot table containing the total sales Value by Region and by Quarter. In this case, configure the node properties as follows:
In the GroupForRow property, enter:
In the GroupForColumn property, enter:
From the AggregationType property, select Sum.
In the AggregateOverField property, enter:
In addition to generating the quarterly summaries for each region, the pivot table output by the node would also include the following:
- A row that contains values for the total sales Value for all Regions in a particular Quarter.
- A column that contains values for the total sales Value for all Quarters, for each Region.
- The grand total of sales Value for all Regions and all Quarters.
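The behavior described above can be sketched in pandas, which provides an analogous `pivot_table` function. The field names (Region, Quarter, Value) are taken from the example; the sample data values are invented for illustration, and placing Region on the rows and Quarter on the columns follows from the sub-total descriptions above. This is a sketch of equivalent behavior, not the node's implementation:

```python
import pandas as pd

# Invented sample sales data: one record per order.
sales = pd.DataFrame({
    "Quarter": ["Q1", "Q1", "Q2", "Q2", "Q3", "Q4"],
    "Region":  ["North", "South", "North", "East", "West", "South"],
    "Value":   [100.0, 250.0, 175.0, 300.0, 125.0, 225.0],
})

# Analogous configuration: GroupForRow=Region, GroupForColumn=Quarter,
# AggregationType=Sum, AggregateOverField=Value.
# margins=True adds the sub-total row/column and the grand total.
pivot = pd.pivot_table(
    sales,
    index="Region",      # first dimension: row labels
    columns="Quarter",   # second dimension: column headings
    values="Value",      # field containing the data to aggregate
    aggfunc="sum",
    margins=True,        # sub-totals per row and column, plus grand total
    margins_name="Total",
)
print(pivot)
```

The "Total" row holds the sales total across all Regions for each Quarter, the "Total" column holds the total across all Quarters for each Region, and the bottom-right cell is the grand total.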
Specify the name of the first field to group the input data by. Unique values from this field are displayed as row labels in the pivot table.
A value is required for this property.
Specify the name of the second field to group the input data by. Unique values from this field are displayed as column headings in the pivot table. Because the field values are displayed as headings, this field cannot have a data type of Unicode.
A value is required for this property.
Optionally specify the type of aggregation to be performed over the specified field. Choose from:
The default value is Count.
Specify the name of the input field containing the data to be aggregated.
The AggregateOverField property must be specified for all aggregation types except Count; it is not necessary when the aggregation type is Count, as whole records are counted. For Max, Mean, Min and Sum, the data in the field specified in the AggregateOverField property must be numeric.
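The distinction can be illustrated with a pandas sketch (field names are invented for this example): Count operates on whole records, so it needs no aggregation field, whereas Sum (like Max, Mean and Min) aggregates the values of a specific numeric field.

```python
import pandas as pd

# Invented sample data.
orders = pd.DataFrame({
    "Region": ["North", "North", "South"],
    "Value":  [100.0, 175.0, 250.0],
})

# Count: whole records are counted, so no aggregation field is required.
counts = orders.groupby("Region").size()

# Sum: a numeric field must be supplied to aggregate over.
sums = orders.groupby("Region")["Value"].sum()

print(counts)
print(sums)
```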
Optionally specify how the node should behave when null values are found in the field specified in the AggregateOverField property. Choose from:
- Error - The node fails with the error 'Null value found in <field name>' when a record with a null value is found.
- Log - Records with a null value are removed from the record set and a log message is written to note the removal.
- Ignore - Records with a null value are removed from the record set to be processed; no log message is written.
The default value is Error.
The behavior set by this property does not apply if the AggregationType property is set to Count or CountNull.
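The three null-handling policies can be sketched as a small helper function. The function name `drop_nulls` and its signature are hypothetical, invented purely to illustrate the Error / Log / Ignore semantics described above:

```python
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pivot")

def drop_nulls(df: pd.DataFrame, field: str, behavior: str = "Error") -> pd.DataFrame:
    """Illustrative sketch of the Error / Log / Ignore null-handling policies."""
    nulls = df[field].isna()
    if behavior == "Error":
        # Error: fail as soon as a record with a null value is found.
        if nulls.any():
            raise ValueError(f"Null value found in {field}")
        return df
    if behavior == "Log" and nulls.any():
        # Log: note the removal in a log message.
        log.info("Removed %d record(s) with a null %s value", nulls.sum(), field)
    # Both Log and Ignore remove the null records before processing.
    return df[~nulls]

data = pd.DataFrame({"Value": [100.0, None, 250.0]})
cleaned = drop_nulls(data, "Value", behavior="Ignore")
print(len(cleaned))  # the record with the null Value is removed
```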
Example Data Flow
An "Example Pivot Table Node" LNA is provided with your installation in the following location:
If you do not have access to the above location, please contact your administrator.
You can import this example data flow into your installation by using the Import Data Flows/Library Nodes command in the Directory. For more details, see the topic Sharing documents.
Inputs and outputs