Exporting Interaction History from Pega Cloud

Description: Data export
Version as of: 8.3.3
Application: Pega Customer Decision Hub
Capability/Industry Area: Partners and Integrations



This design pattern explains a straightforward method to export interaction history (IH) data from a Pega Cloud™ instance to a file, which can then be sent to downstream systems.

Rationale

There are a number of reasons to implement this pattern:

  • Clients frequently request this capability
  • Although Pega provides some built-in reporting, it is often desirable to send IH to downstream operational, reporting, and analytics systems, where it can be combined with other data for more comprehensive reporting
  • With on-premises implementations, a straight database (DB) extraction or a DB-to-DB transfer is possible; neither can be done from Pega Cloud implementations
  • The way IH is implemented (individual rows are effectively split across multiple classes) makes it tricky to export as a whole.

Required setup

The primary challenge is the way that IH data is split up inside Pega Platform™. Although in the Pega database it is represented by a straightforward star schema, in the platform it consists of classes that represent the fact table and the associated dimensions.

The underlying requirement is to have a class that can be used to combine the fact and dimension data prior to extracting it to a file, something akin to this:

IH Extract class properties

The IH Extract class contains properties for the fact and dimension record IDs, and for the pages that represent those records.
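
For illustration only, the extract record can be thought of as one flat structure that carries the fact ID, the dimension record IDs, and the joined record pages. Here is a minimal Python sketch with hypothetical names (in Pega this is a class with properties, not code):

    from dataclasses import dataclass, field

    # Hypothetical sketch of the IH Extract class: one flat record carrying
    # the fact and dimension record IDs plus pages for the joined records.
    @dataclass
    class IHExtract:
        fact_id: int                                # ID of the IH fact record
        action_id: int = 0                          # dimension record IDs
        channel_id: int = 0
        fact: dict = field(default_factory=dict)    # embedded fact page
        action: dict = field(default_factory=dict)  # embedded dimension pages
        channel: dict = field(default_factory=dict)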

Steps

  1. Construct the IH extract
    Joining the IH extract data and writing it to an output
    In this step, all the data for a single IH record is extracted and placed in the IH extract class. The input to this data flow is a list of fact and dimension IDs. These IDs are then used with data sets that sit on the fact and dimension classes to retrieve the relevant records, which are stored in the IH extract class and can then be accessed by the file data set for output. (A sketch of this join appears after these steps.)
  2. Get relevant fact records
    Data flow that feeds the IH extract data flow; extracts the desired records from the IH fact class.
    This data flow feeds into the IH extract data flow. It retrieves the fact ID and dimension record IDs for the desired records; in the example above, these are records created yesterday. Although it uses a data set as a source, it could also use a report definition, or even be abstract, depending on the desired outcome. The output of this data flow is of class IH extract, with all the fact and dimension IDs populated; these IDs are then used in the IH extract data flow to pull the relevant IH records.
  3. File data set destination
    The file data set to which the IH records are written. It specifies the format, the destination, and the property mapping.
  4. Wrapping activity (optional)
    The wrapping activity, which can be used to schedule the extract to run on a regular basis.
    If desired, the extract can be scheduled to run on a regular basis. This requires an activity that calls the data flow in step 1 above. (A sketch of this scheduling wrapper appears after these steps.)
  5. Schedule the extraction (optional)
    The job scheduler, containing the schedule details, calls the wrapping activity.
    The job scheduler calls the wrapping activity on the schedule you specify.
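
To make steps 1 through 3 concrete, here is a minimal Python sketch of the same logic: filter the fact records down to yesterday's, join each fact to its dimension records by ID, and write the combined rows to a file. All record and field names are hypothetical; in Pega this is configured with data flows and a file data set rather than written as code.

    import csv
    from datetime import date, datetime, timedelta

    # Hypothetical stand-ins for the IH fact and dimension data sets.
    FACTS = [
        {"pyFactID": 1, "pyActionID": 10, "pyChannelID": 20,
         "pyOutcomeTime": datetime.now() - timedelta(days=1)},
    ]
    DIMENSIONS = {
        "Action":  {10: {"pyName": "OfferA"}},
        "Channel": {20: {"pyDirection": "Web"}},
    }

    def facts_created_yesterday(facts):
        """Step 2: filter the fact records to those created yesterday."""
        yesterday = date.today() - timedelta(days=1)
        return [f for f in facts if f["pyOutcomeTime"].date() == yesterday]

    def join_dimensions(fact):
        """Step 1: combine one fact record with its dimension records."""
        row = dict(fact)
        for dim in ("Action", "Channel"):
            dim_id = fact.get(f"py{dim}ID")
            row.update(DIMENSIONS[dim].get(dim_id, {}))
        return row

    def write_extract(rows, path="ih_extract.csv"):
        """Step 3: write the joined records to a file (the file data set)."""
        fieldnames = sorted({key for row in rows for key in row})
        with open(path, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

    write_extract([join_dimensions(f) for f in facts_created_yesterday(FACTS)])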
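
Steps 4 and 5 are configured as a Pega activity and a job scheduler rule rather than written as code, but conceptually the wrapper simply triggers the extract data flow on a fixed schedule. A hypothetical sketch:

    import time

    def run_extract():
        """Step 4: stands in for the wrapping activity, which in Pega
        starts the IH extract data flow from step 1."""
        print("IH extract data flow started")

    def schedule_daily(job, hour=2):
        """Step 5: stands in for the job scheduler; runs the job once a
        day at the given hour."""
        while True:
            if time.localtime().tm_hour == hour:
                job()
                time.sleep(3600)   # sleep past the scheduled hour
            time.sleep(60)         # otherwise poll every minute

    # schedule_daily(run_extract)  # commented out: this loop blocks forever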

Final thoughts

  • The output file can be transferred to a downstream system using a file listener and a Connect-FTP rule (a generic sketch of such a transfer appears after this list).
  • Adding additional properties to the extract file only requires changing the mapping in the file data set.
    • Existing IH properties (both fact and dimension) can just be added (bearing in mind that downstream systems may require changes to ingest them).
    • New IH properties follow the standard process for adding them to IH; once added, they can be included in the file data set for export.
  • It is easy to add simple data transformations using a data transform, as shown in step 1, just prior to the file data set output (see the transform sketch after this list).
  • It is easy to run this manually, for example to do a full extract, or to extract a specific action or campaign run:
    • Just modify the fact record extraction data flow (e.g. remove the day filter) in step 2 above.
    • Change the file data set destination to avoid interfering with scheduled data transfers.
    • Then go to Actions > Run on the IH Extract data flow.
  • You must be on at least Pega Platform 8.3.3.
    • In earlier versions, a bug prevents data set record retrieval for records with a negative integer key.
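
As a generic, hypothetical illustration of the transfer mentioned in the first bullet (in Pega it is configured with a Connect-FTP rule rather than coded by hand), an FTP upload of the extract file might look like this:

    from ftplib import FTP

    # Hypothetical illustration of pushing the extract file downstream.
    def push_extract(host, user, password, path="ih_extract.csv"):
        with FTP(host) as ftp:
            ftp.login(user, password)
            with open(path, "rb") as fh:
                ftp.storbinary(f"STOR {path}", fh)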
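
And as a hypothetical illustration of the data transform mentioned above, a pre-output step might normalize a timestamp and rename it for the downstream system:

    def transform(row):
        """Hypothetical transform applied just before the file data set
        output: normalize the outcome timestamp for the downstream system."""
        out = dict(row)
        if "pyOutcomeTime" in out:
            out["outcome_date"] = out.pop("pyOutcomeTime").strftime("%Y-%m-%d")
        return out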