Workforce Intelligence APIs


Description: The business purpose of the different Workforce Intelligence APIs
Version: as of 8.7
Application: Pega Workforce Intelligence
Capability/Industry Area: Workforce Intelligence



Workforce Intelligence - REST APIs[edit]

Workforce Intelligence provides a distinct set of REST APIs that clients can leverage. These REST APIs are organized into two categories:

  1. Data-driven APIs
  2. External data source APIs

The APIs can be accessed via Client ID (username) and Client Secret (password). Clients can request the credentials to access these APIs from the Pega Workforce Intelligence Service Delivery Team.

Data-driven APIs[edit]

Application Summary API[edit]

The Application Summary API enables clients to programmatically integrate aggregated Workforce Intelligence data into their existing data warehouse, without the need to download aggregated weekly or monthly export files from the Exports page of the Workforce Intelligence portal. The Application Summary API returns the data in a JSON format.

The Application Summary API can be used to retrieve information about applications within a specified timeframe. The application information can be aggregated at a department, team, or associate level.

The required parameters to access the Application Summary API are:

  1. Start date
  2. End date
  3. Aggregation level (department, team, or associate level)

There are two optional parameters:

  1. Application Category - The different application categories available within Workforce Intelligence are:
    • Production
    • Other-Work
    • Non-Work
    • Idle
    • Unknown
  2. Business Unit ID - Specify the department or team ID (business unit ID) to retrieve application summary data for a specific department or team.

Note: If a category is not specified, then all the categories are returned. If a business unit is not specified, then the data defaults to the parent (root) department.
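A minimal Python sketch of how these parameters combine into a request URL (the endpoint path is taken from the sample calls in this article; `summary_url` and the tenant host are illustrative, and OAuth authentication is omitted):

```python
from urllib.parse import urlencode

# Hypothetical tenant host; substitute your own Workforce Intelligence URL.
BASE = "https://clientname.wfi.pega.com/api/v1"

def summary_url(start, end, level, business_unit_id=None, category=None):
    """Build an Application Summary request URL from the parameters above."""
    params = {"startDate": start, "endDate": end, "level": level}
    if business_unit_id is not None:  # optional: scope to a department or team
        params["businessUnitId"] = business_unit_id
    if category is not None:          # optional: e.g. "production", "otherwork"
        params["category"] = category
    return f"{BASE}/business-unit/applications/summary?{urlencode(params)}"

url = summary_url("2020-07-01T00:00:00Z", "2020-07-01T00:00:00Z", "associate", 31)
```

Note that `urlencode` percent-encodes the colons in the timestamps, which is equivalent to the literal form shown in the sample calls.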

Sample Data Results[edit]

Sample API call 1 - https://clientname.wfi.pega.com/api/v1/business-unit/applications/summary?startDate=2020-07-01T00:00:00Z&endDate=2020-07-01T00:00:00Z&level=associate&businessUnitId=31

Sample Results 1 - The following image shows a data pull in which the aggregation level was set to associate, and a business unit ID was provided for a team. Application Category was not used in the query; therefore, the results contain data from all application categories.

Application Summary API - Screenshot.jpg

Sample API call 2 - https://clientname.wfi.pega.com/api/v1/business-unit/applications/summary?startDate=2020-07-01T00:00:00Z&endDate=2020-07-01T00:00:00Z&level=associate&businessUnitId=31&category=otherwork

Sample Results 2 - The following image shows a data pull in which the aggregation level was set to associate, a business unit ID was provided for a team, and the application category was set to Other-Work.

Application Summary API - Screenshot 2.jpg

Export API[edit]

From the Workforce Intelligence Portal, users assigned to the analyst role can download different types of data exports which are available in comma-separated value (.CSV) format.

There are two ways to access the data exports: an application user with the analyst role can manually download them from the Analysis > Exports menu option on the portal, or they can be retrieved via the Export API.

The different types of data exports available to a client are:

  1. Weekly or monthly aggregate data exports
  2. Default data export, which can contain one or more types of data exports in a zipped file:
    • Detailed daily raw data exports (High-Level data export)
    • User’s hierarchy data export
    • Processed export
    • Workflow export
    • Step export
  3. Custom data exports
Note 1: You can enable either a weekly export or a monthly data export; both cannot be enabled at the same time. Once the weekly, monthly, or default data export files are created, the data pipeline does not update or modify the historical export files.
Note 2: Default data exports are disabled by default. Clients can select which of the default data exports listed above they need and request them through My Support Portal.

Due to the size of these exports, clients might find the Export API to be the preferable download method for accessing the data to create their own reports and visualizations. After the Service Delivery team enables the requested exports, clients can use the Export API to create scripts that download and access the different types of data exports available to them. Clients can schedule and automate the downloads and designate a centralized location in which to save them.

Sample API call for a weekly data export (if weekly data export is enabled) - https://clientname.wfi.pega.com/api/v1/s3-exports/download?fileName=exports/pega-wfi-clientname_Data_Export_Weekly_20211024.csv

Sample API call for a monthly data export (if monthly data export is enabled) - https://clientname.wfi.pega.com/api/v1/s3-exports/download?fileName=exports/pega-wfi-clientname_Data_Export_Monthly_202009.csv

Sample API call for a default data export - https://clientname.wfi.pega.com/api/v1/s3-exports/download?fileName=daily-event-exports/20211014-default-clientname.zip

Note: In the Workforce Intelligence 8.7 release, the default data export filename and extension changed to “.zip”. The zipped file contains individual “.CSV” files for event, processed, step, user, or workflow exports.
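A download script might build the request URLs from these filename conventions as in the following sketch (`export_download_url` is an illustrative helper; the OAuth bearer token required to actually download the file is omitted):

```python
from urllib.parse import quote

# Hypothetical tenant host; replace "clientname" with your own tenant.
BASE = "https://clientname.wfi.pega.com/api/v1"

def export_download_url(file_name):
    """Build an s3-exports download URL for a known export file name,
    following the patterns shown in the sample calls above."""
    return f"{BASE}/s3-exports/download?fileName={quote(file_name, safe='/')}"

weekly  = export_download_url("exports/pega-wfi-clientname_Data_Export_Weekly_20211024.csv")
default = export_download_url("daily-event-exports/20211014-default-clientname.zip")
```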

Custom data exports[edit]

Custom data exports were introduced in Workforce Intelligence release 8.7: if an analyst can see a metric in the Workforce Intelligence application, it can be included in a custom data export. Clients need to create a Service Request through My Support Portal (https://msp.pega.com) to have a custom export built.

Like default data exports, custom data exports are generated daily, and they can be accessed via Analysis > Exports or through the Export API.

Note: At this time, custom data exports can cover only a single day at a time.

Examples of custom data exports that the Service Delivery team can build include:

  • Administrator audit log
  • Screen Rules export

Sample API call for a custom data export - https://clientname.wfi.pega.com/api/v1/s3-exports/download?fileName=daily-event-exports/20211014-custom-clientname.zip

Due to the detailed nature of the daily raw data export, it is recommended that you review the data dictionary of high-level events, which is available on Pega Community.

Weekly data export report example (this format is an exact duplicate of the portal report pulled using the manual download method).

User hierarchy data export example

Workflow data export example

Step data export example

Managing the hierarchy with APIs[edit]

Hierarchy Import[edit]

Use this import to create new teams and departments and to update existing ones. This import uses the same API as the Import Departments action on the Departments tab of the Organization page in the web portal.

When a file is imported, some lines might fail while others succeed. As a result, a 200 OK response does not always indicate complete success.

Error messages and meanings[edit]

The meaning of API error messages can be difficult to determine based only on the name. The following list explains potential error messages and their meanings:

IMPORT_ALL_ROWS_ROWS_ERROR

Meaning: Each row has a validation error (a field is required and missing, contains invalid chars, or is not a valid value).

IMPORT_ROWS_ERROR

Meaning: Some of the rows have a validation error (a field is required and missing, contains invalid chars, or is not a valid value), and some are imported successfully.

INVALID_HEADER_ERROR

Meaning: File does not contain a required header.

IMPORT_FILE_PARSE_ERROR

Meaning: File contains some rows that have more/fewer columns than the number of headers.

IMPORT_MAX_LINES_ERROR

Meaning: File has more than 5000 entries.

UPLOAD_SIZE_LIMIT_ERROR

Meaning: File is larger than 1 MB.

UPLOAD_NO_FILE_ERROR

Meaning: A file was not included in the body.

ANCESTOR_DISABLE_ERROR

Meaning: Import row #$[row] with 'Unique ID' "$[uniqueId]" cannot be disabled because it has enabled children: $[otherUniqueIds].

DESCENDANT_DISABLE_ERROR

Meaning: Import row #$[row] with 'Unique ID' "$[uniqueId]" cannot be enabled because it has disabled parents: $[otherUniqueIds].

DESCENDANT_MOVE_DISABLE_ERROR

Meaning: Import row #$[row] with 'Unique ID' "$[uniqueId]" cannot have disabled parents: $[otherUniqueIds].

EXISTING_DISABLE_ERROR

Meaning: There is a pre-existing error in the hierarchy. A business unit with 'Unique ID' "$[uniqueId]" cannot be disabled and have enabled children $[otherUniqueIds].

EXISTING_LEVEL_ERROR

Meaning: There is a pre-existing error in the hierarchy where a department with 'Unique ID' "$[uniqueId]" has children that are not departments or teams: $[otherUniqueIds].

OR

There is a pre-existing error in the hierarchy where a team with 'Unique ID' "$[uniqueId]" has children that are not associates: $[otherUniqueIds].

ANCESTOR_CYCLE_ERROR

Meaning: There is a pre-existing error in the hierarchy where a business unit does not have a parent: "$[uniqueId]".

UPDATED_TO_HAVE_CYCLE_ERROR

Meaning: Import row #$[row] has created or is involved in loops in the hierarchy structure.

CONFLICT_WITH_ASSOCIATE

Meaning: Import row #$[row] has 'Unique ID' "$[uniqueId]" that conflicts with an existing associate.

UPLOAD_FILE_TYPE_ERROR

Meaning: File is not a valid CSV file (for example, a JSON file).

Generic error messages[edit]

Errors related to an invalid primary ID, invalid first name, invalid last name, invalid team unique ID, invalid config name, and invalid alternate IDs return a generic error message in the import file. For example:

{
  "row": 21,
  "errorType": "DESCENDANT_LEVEL_ERROR",
  "metadata": {
    "uniqueId": "user21"
  }
}

The UI displays these types of errors in the following way: "Some rows failed to be imported:"

Then, under this message, each failed row displays a separate error message. Each individual error message looks like "Import of "$[primaryId]" failed for $[invalidFields] at row #$[rowNum]", where $[invalidFields] represents the fields that contain unsupported characters or that are required but missing.

These types of errors do not cause the import to fail, but they are returned in the API response.
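A script consuming the import response can render these error entries in a readable form. The sketch below assumes the JSON shape shown above (row, errorType, optional metadata.uniqueId); the output message format is illustrative, not the portal's exact wording:

```python
def format_row_error(err):
    """Render one import error entry (the JSON shape shown above)
    as a human-readable line; this message format is illustrative."""
    uid = err.get("metadata", {}).get("uniqueId", "?")
    return f'Import of "{uid}" failed with {err["errorType"]} at row #{err["row"]}'

err = {"row": 21, "errorType": "DESCENDANT_LEVEL_ERROR", "metadata": {"uniqueId": "user21"}}
print(format_row_error(err))  # Import of "user21" failed with DESCENDANT_LEVEL_ERROR at row #21
```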

Hierarchy import prerequisites[edit]

Ensure that you have the following prerequisites:

  • OAuth token: You can obtain this token by sending a POST request to /api/v1/oauth/token with an x-www-form-urlencoded body that includes: grant_type = client_credentials, client_id = {client id}, client_secret = {client secret}
  • departments.csv file
  • Email address  
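The token call in the prerequisites can be sketched with the standard library as follows (the tenant host and credential values are placeholders; only the route, grant type, and body encoding come from the prerequisites above):

```python
from urllib.parse import urlencode
from urllib.request import Request

def token_request(base_url, client_id, client_secret):
    """Build the POST /api/v1/oauth/token request with an
    x-www-form-urlencoded body, as described in the prerequisites."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return Request(
        f"{base_url}/api/v1/oauth/token",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = token_request("https://clientname.wfi.pega.com", "my-client-id", "my-client-secret")
# urllib.request.urlopen(req) would return the JSON containing the bearer token
```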

URL[edit]

  • /api/v1/hierarchy/import

Method[edit]

  • POST

Authorization[edit]

  • OAuth 2.0
  • Bearer bfcbc74597c875811317a161b708147a408c6dfe26ac529795b88a870443f2d8

Headers[edit]

  • Content-Type = multipart/form-data; boundary=<calculated when request is sent>
  • Content-Length = <calculated when request is sent>
  • Host = <calculated when request is sent>

Body – form-data[edit]

  • importFile = [file: departments.csv]
  • email = [string: email address]

For example:

Body - form-data example for departments import

Success response[edit]

  • Complete success:
    • Code 200
    • Content:  
      {
        "validRowsExist": true,
        "duplicateRowsExist": false,
        "invalidRowsExist": false,
        "formattedErrors": []
      }
      
  • If some lines succeed but other lines fail, you still receive a 200 success response, but with a "formattedErrors" sub-array that lists the row number, the error that occurred, and other relevant information about why that value is an error. Refer to the following lines in the example code to see specific errors and their causes:  
    • Line 10: The uniqueId “user1” already exists in another row, causing this one to be a duplicate.
    • Line 27: The characters <> and [] are not allowed in the unique id field.
    • Line 40: Company level cannot be disabled.
    • Line 44: Company level should not have parents.
  1{
  2  "message": "Failed to import rows",
  3  "json": {
  4    "duplicateRowsExist": "true",
  5    "formattedErrors": [
  6      {
  7        "row": 1,
  8        "errorType": "DUPLICATE_ROW",
  9        "metadata": {
 10          "uniqueId": "user1"
 11        }
 12      },
 13      {
 14        "row": 2,
 15        "errorType": "MISSING_REQUIRED_VALUE",
 16        "metadata": {
 17          "columnName": "Unique ID"
 18        }
 19      },
 20      {
 21        "row": 3,
 22        "errorType": "VALUE_FORMATTED_INCORRECTLY",
 23        "metadata": {
 24          "columnName": "Unique ID",
 25          "formatErrorType": "<>[]",
 26          "formatErrorInfo": "INVALID_CHARACTERS",
 27          "invalidValue": "user3<>"
 28        }
 29      },
 30      {
 31        "row": 4,
 32        "errorType": "CONFIG_DOES_NOT_EXIST"
 33      },
 34      {
 35        "row": 5,
 36        "errorType": "COMPANY_INSERT_ERROR"
 37      },
 38      {
 39        "row": 6,
 40        "errorType": "COMPANY_DISABLE_ERROR"
 41      },
 42      {
 43        "row": 7,
 44        "errorType": "COMPANY_PARENT_ERROR"
 45      },
 46      {
 47        "row": 8,
 48        "errorType": "COMPANY_LEVEL_ERROR"
 49      },
 50      {
 51        "row": 9,
 52        "errorType": "EDIT_UNKNOWN_ERROR"
 53      },
 54      {
 55        "row": 10,
 56        "errorType": "ANCESTOR_DISABLE_ERROR",
 57        "metadata": {
 58          "otherUniqueIds": [
 59            "user100",
 60            "user101"
 61          ],
 62          "uniqueId": "user10"
 63        }
 64      },
 65      {
 66        "row": 11,
 67        "errorType": "DESCENDANT_DISABLE_ERROR",
 68        "metadata": {
 69          "otherUniqueIds": [
 70            "user110",
 71            "user111"
 72          ],
 73          "uniqueId": "user11"
 74        }
 75      },
 76      {
 77        "row": 12,
 78        "errorType": "DESCENDANT_MOVE_DISABLE_ERROR",
 79        "metadata": {
 80          "otherUniqueIds": [
 81            "user120",
 82            "user121"
 83          ],
 84          "uniqueId": "user12"
 85        }
 86      },
 87      {
 88        "row": 13,
 89        "errorType": "EXISTING_DISABLE_ERROR",
 90        "metadata": {
 91          "otherUniqueIds": [
 92            "user130",
 93            "user131"
 94          ],
 95          "uniqueId": "user13"
 96        }
 97      },
 98      {
 99        "row": 14,
100        "errorType": "ANCESTOR_LEVEL_ERROR",
101        "metadata": {
102          "otherUniqueIds": [
103            "user140",
104            "user141"
105          ],
106          "uniqueId": "user14"
107        }
108      },
109      {
110        "row": 15,
111        "errorType": "DESCENDANT_MOVE_LEVEL_ERROR",
112        "metadata": {
113          "otherUniqueIds": [
114            "user150",
115            "user151"
116          ],
117          "uniqueId": "user15"
118        }
119      },
120      {
121        "row": 16,
122        "errorType": "EXISTING_LEVEL_ERROR",
123        "metadata": {
124          "otherUniqueIds": [
125            "user160",
126            "user161"
127          ],
128          "uniqueId": "user16"
129        }
130      },
131      {
132        "row": 17,
133        "errorType": "ANCESTOR_CYCLE_ERROR",
134        "metadata": {
135          "otherUniqueIds": [
136            "user170",
137            "user171"
138          ],
139          "uniqueId": "user17"
140        }
141      },
142      {
143        "row": 18,
144        "errorType": "UPDATED_TO_HAVE_CYCLE_ERROR",
145        "metadata": {
146          "otherUniqueIds": [
147            "user180",
148            "user181"
149          ],
150          "uniqueId": "user18"
151        }
152      },
153      {
154        "row": 20,
155        "errorType": "PARENT_DOES_NOT_EXIST"
156      },
157      {
158        "row": 21,
159        "errorType": "DESCENDANT_LEVEL_ERROR",
160        "metadata": {
161          "uniqueId": "user21"
162        }
163      },
164      {
165        "row": 22,
166        "errorType": "CONFLICT_WITH_ASSOCIATE",
167        "metadata": {
168          "uniqueId": "user22"
169        }
170      }
171    ],
172    "invalidRowsExist": "true",
173    "validRowsExist": "false"
174  }
175}

 

Error response[edit]

  • Missing email address
    • Code 500
    • Content: {"message": "Unexpected failure during hierarchy import."}
  • Missing file or file has too many lines
    • Code 400
    • Content:  
      {
        "json": {
          "type": "UPLOAD_NO_FILE_ERROR"
        },
        "oneOf": [
          {
            "json": {
              "type": "UPLOAD_NO_FILE_ERROR"
            }
          },
          {
            "json": {
              "type": "UPLOAD_FILE_TYPE_ERROR"
            }
          },
          {
            "message": "Error: The csv contains more lines than the 5000 lines allowed",
            "json": {
              "errors": null,
              "type": "IMPORT_MAX_LINES_ERROR",
              "isCSVError": true,
              "fileHeaders": [
                "Unique ID",
                "Name",
                "Parent ID",
                "Level",
                "Production Goal %",
                "Assigned Configuration",
                "Status"
              ]
            }
          }
        ]
      }
      

Data Collection User Import[edit]

Use this import to create and update data collection users. This import uses the same API as the Import Users action on the Collectors tab of the Organization page in the web portal.  

Data collection user import prerequisites[edit]

Ensure that you have the following prerequisites:

  • OAuth token: You can obtain this token by sending a POST request to /api/v1/oauth/token with an x-www-form-urlencoded body that includes: grant_type = client_credentials, client_id = {client id}, client_secret = {client secret}
  • data_collectors_users.csv file
  • Email address  

URL[edit]

  • /api/v1/user/import

Method[edit]

  • POST

Authorization[edit]

  • OAuth 2.0
  • Bearer bfcbc74597c875811317a161b708147a408c6dfe26ac529795b88a870443f2d8

Headers[edit]

  • Content-Type = multipart/form-data; boundary=<calculated when request is sent>
  • Content-Length = <calculated when request is sent>
  • Host = <calculated when request is sent>

Body – form-data[edit]

  • importFile = [file: data_collectors_users.csv]
  • email = [string: email address]

For example:

Body - form data example for data collectors

Success response[edit]

  • Code 200  
  • Content: empty response

Error response[edit]

  • Missing email address
    • Code 400
    • Content:
      {
        "message": "Cannot read properties of undefined (reading 'conflictingId')",
        "json": {
          "type": "IMPORT_GENERIC_ERROR",
          "isCSVError": true
        }
      }
      
  • Missing file
    • Code 400  
    • Content:
      {
        "name": "UploadError",
        "status": 400,
        "json": {
          "type": "UPLOAD_NO_FILE_ERROR"
        }
      }
      
  • Error in file
    • Code 400
    • Content: As an example, notice from lines 18 and 19 that the user import failed due to an invalid last name and invalid team unique ID.
       1{
       2  "message": "Failed to import rows",
       3  "json": {
       4    "errors": [
       5      {
       6        "row": 1,
       7        "networkId": "user1",
       8        "fieldErrors": [
       9          "First Name",
      10          "Client Configuration"
      11        ],
      12        "isDuplicate": false
      13      },
      14      {
      15        "row": 2,
      16        "networkId": "user2",
      17        "fieldErrors": [
      18          "Last Name",
      19          "Team Unique ID"
      20        ],
      21        "isDuplicate": false
      22      },
      23      {
      24        "row": 4,
      25        "networkId": "user4",
      26        "fieldErrors": [],
      27        "isDuplicate": true
      28      },
      29      {
      30        "row": 6,
      31        "networkId": "user6",
      32        "fieldErrors": [
      33          "Alternate Ids"
      34        ]
      35      }
      36    ]
      37  }
      38}
      

Building CSV exports from APIs[edit]

The Department and Data Collector export CSV files from WFI are built on the front end rather than the back end, meaning there is no single API for retrieving these files. Instead, you will need to call one or more APIs and parse the JSON responses into a CSV file that can then be imported.

Note: Any API response may come back empty for a certain department, team, or data collector; in that case, the export file would contain an empty string (""). For example, a data collector may not have any alternate IDs, so you would add an empty string to that field. The runtime configuration API may not include a certain data collector at all, meaning that user inherits their configuration from their team.

Departments export[edit]

The departments export can be built with two APIs: /api/v1/hierarchy and /api/v1/runtime/configuration/list. The hierarchy API gives you all the fields necessary for the export CSV except for "Assigned Configuration", which comes from the runtime configuration list API. Both APIs return responses for all "business units" (company, departments, and teams) that exist in the departments.csv file, as well as the "associates" that exist in the data_collectors_users.csv file. Because of this, results need to be filtered into the proper file depending on which export/import is necessary.

CSV export fields[edit]

  • Name – comes from the "name" field of the hierarchy API
  • Unique ID – "unique_id" field of the hierarchy API
  • Parent Name – use the last integer in the "ancestry" field of the hierarchy API to find the unit whose "id" field matches, then use the "name" field of that parent. In the example JSON response below, the ancestry field is "1/3/5", so you would use 5 as the "id" to match with the parent's "name" and "unique_id". Department 5 is the direct parent of the team in the example, with department 3 being the direct parent of department 5; for this import only the direct parent is necessary, not parents of parents.
  • Parent ID – use the last integer of the "ancestry" field of the hierarchy API to find the unit whose "id" field matches, then use the "unique_id" field of that parent.
  • Level – "level" field from the hierarchy API
  • Production Goal % – "ownExpProdGoal" field from the hierarchy API. If this is null, the unit inherits the production goal from its parent.
  • Assigned Configuration – use the "name" field from the runtime configuration list API. Find the correct configuration by matching the department's "name" in the "appliedTo" field, which is a comma-separated list of all departments and users that the configuration is assigned to.
  • Status – "deactivated" field from the hierarchy API. This is a Boolean field, so it will be either true or false, but for the CSV export/import you will need to use "enabled" or "disabled": true maps to "disabled" and false maps to "enabled".
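The field mappings above can be sketched as a small transformation over the hierarchy API response. This is a sketch under the assumption that the response matches the hierarchy example in this article; the "Assigned Configuration" lookup against the runtime configuration list API is left out for brevity:

```python
def department_rows(hierarchy):
    """Turn hierarchy API units into departments-export rows.
    (The "Assigned Configuration" lookup from the runtime
    configuration list API is omitted for brevity.)"""
    by_id = {unit["id"]: unit for unit in hierarchy}
    rows = []
    for unit in hierarchy:
        if unit["level"] == "associate":      # associates belong in the users export
            continue
        parent = None
        if unit["ancestry"]:                  # e.g. "1/3/5" -> direct parent id "5"
            parent = by_id.get(unit["ancestry"].split("/")[-1])
        rows.append({
            "Name": unit["name"],
            "Unique ID": unit["unique_id"],
            "Parent Name": parent["name"] if parent else "",
            "Parent ID": parent["unique_id"] if parent else "",
            "Level": unit["level"],
            "Production Goal %": "" if unit["ownExpProdGoal"] is None else unit["ownExpProdGoal"],
            "Status": "disabled" if unit["deactivated"] else "enabled",
        })
    return rows
```

The resulting list of dictionaries can be written out with `csv.DictWriter` to produce the import-compatible file.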

Hierarchy API[edit]

Prerequisites[edit]

Ensure that you have the following prerequisites:

  • OAuth token: You can obtain this token by sending a POST request to /api/v1/oauth/token with an x-www-form-urlencoded body that includes: grant_type = client_credentials, client_id = {client id}, client_secret = {client secret}
  • Email address of an application user in the environment
URL[edit]
  • /api/v1/hierarchy
Method[edit]
  • POST
Authorization[edit]
  • OAuth 2.0
  • Bearer bfcbc74597c875811317a161b708147a408c6dfe26ac529795b88a870443f2d8
Headers[edit]
  • Content-Type = application/json
  • Content-Length = <calculated when request is sent>
  • Host = <calculated when request is sent>
Body[edit]
  • email = [string: email address]
Response[edit]
  • Code 200
  • Content:
    { 
        "hierarchy": [ 
            { 
                "id": "1", 
                "ancestry": null, 
                "level": "company", 
                "unique_id": "company", 
                "firstName": null, 
                "lastName": null, 
                "isAlternateUser": false, 
                "alternateIds": [], 
                "name": "Nebula", 
                "ownExpProdGoal": 0.5, 
                "deactivated": false 
            }, 
            { 
                "id": "2", 
                "ancestry": "1/3/5", 
                "level": "team", 
                "unique_id": "unknown", 
                "firstName": null, 
                "lastName": null, 
                "isAlternateUser": false, 
                "alternateIds": [], 
                "name": "Unknown", 
                "ownExpProdGoal": null, 
                "deactivated": false 
            }, 
            { 
                "id": "34", 
                "ancestry": "1/2", 
                "level": "associate", 
                "unique_id": "tummj", 
                "firstName": null, 
                "lastName": null, 
                "isAlternateUser": false, 
                "alternateIds": [], 
                "name": "tummj", 
                "ownExpProdGoal": null, 
                "deactivated": false 
            }
        ],
        "perspective": 1,
        "rootPerspective": 1
    }
    

Runtime Configuration List API[edit]

Prerequisites[edit]

Ensure that you have the following prerequisites:

  • OAuth token: You can obtain this token by sending a POST request to /api/v1/oauth/token with an x-www-form-urlencoded body that includes: grant_type = client_credentials, client_id = {client id}, client_secret = {client secret}
  • Email address of an application user in the environment
URL[edit]
  • /api/v1/runtime/configuration/list
Method[edit]
  • GET
Authorization[edit]
  • OAuth 2.0
  • Bearer bfcbc74597c875811317a161b708147a408c6dfe26ac529795b88a870443f2d8
Headers[edit]
  • Host = <calculated when request is sent>
Body[edit]
  • No body
Response[edit]
  • Code 200
  • Content:
    [ 
        { 
            "id": "1", 
            "name": "Default", 
            "description": "The default runtime configuration.", 
            "appliedTo": " Dev Runtime Machine 2, Calvin Dev Runtime Machine, Nebula, test test" 
        }, 
        { 
            "id": "70", 
            "name": "John's config", 
            "description": "", 
            "appliedTo": "John team, Robert Dept" 
        }, 
        { 
            "id": "105", 
            "name": "MKConfig", 
            "description": "", 
            "appliedTo": "Manish Kumar" 
        }
    ]
    

Data collector users export[edit]

For the data collectors export, you again need to use multiple APIs. The same /api/v1/hierarchy and /api/v1/runtime/configuration/list routes used for departments provide most of the information needed for the data collectors, so refer to those APIs above. The only difference in usage is that, for the hierarchy route, you filter for only level = associate instead of filtering the associates out. An additional route, /api/v1/shift/most-recent, provides the Most Recent Shift field.

CSV export fields[edit]

  • First Name – “firstName” field of the hierarchy API
  • Last Name – “lastName” field of the hierarchy API
  • Primary ID – “unique_id” field of the hierarchy API
  • Alternate IDs – “alternateIds” field of the hierarchy API in a comma separated array of strings, which needs to be parsed into a forward slash (/) separated single string. Example: “alternateIds” = [“user 1”, “user 2”, “user 3”] from the API response would turn to “Alternate IDs” = “user 1/user 2/user 3” for the export CSV file.
  • Team – use the same process to find a team for a data collector as to find the parent of a department/team, substituting "team" for "parent", since a data collector can only be assigned to a team, not a department or the company level. Use the last integer in the "ancestry" field of the hierarchy API to find the team whose "id" field matches, then use the "name" field of that team. In the example JSON response below, the ancestry field is "1/3/5", so you would use 5 as the "id" to match with the team's "name" and "unique_id". Team 5 is the direct parent of the associate in the example, with department 3 being the direct parent of team 5; only the direct parent is necessary.
  • Team Unique ID – use the last integer of the "ancestry" field of the hierarchy API to find the team whose "id" field matches, then use the "unique_id" field of that team.
  • Most Recent Shift – match the data collector’s “id” field with the “id” field in the most recent shifts API and use the paired “mostRecentShift” field.
  • Client Configuration – same process as the Assigned Configuration for departments, replacing "parent" with "team". Use the "name" field from the runtime configuration list API; find the correct configuration by matching the data collector's "name" in the "appliedTo" field, which is a comma-separated list of all departments and users that the configuration is assigned to (a collector not listed there inherits its configuration from its team).
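The collector-specific mappings can be sketched the same way as the departments export, assuming the hierarchy and most-recent-shift responses match the examples in this article (the "Client Configuration" lookup is again omitted):

```python
def collector_rows(hierarchy, shifts):
    """Assemble data-collector export rows from the hierarchy and
    most-recent-shift API responses. (The "Client Configuration"
    lookup from the runtime configuration list API is omitted.)"""
    by_id = {unit["id"]: unit for unit in hierarchy}
    shift_by_id = {s["id"]: s["mostRecentShift"] for s in shifts}
    rows = []
    for user in hierarchy:
        if user["level"] != "associate":      # keep only data collectors
            continue
        team = by_id.get(user["ancestry"].split("/")[-1]) if user["ancestry"] else None
        rows.append({
            "First Name": user["firstName"] or "",
            "Last Name": user["lastName"] or "",
            "Primary ID": user["unique_id"],
            # ["user 1", "user 2"] becomes "user 1/user 2"
            "Alternate IDs": "/".join(user["alternateIds"]),
            "Team": team["name"] if team else "",
            "Team Unique ID": team["unique_id"] if team else "",
            "Most Recent Shift": shift_by_id.get(user["id"], ""),
        })
    return rows
```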

Most Recent Shift API[edit]

Prerequisites[edit]

Ensure that you have the following prerequisites:

  • OAuth token: You can obtain this token by sending a POST request to /api/v1/oauth/token with an x-www-form-urlencoded body that includes: grant_type = client_credentials, client_id = {client id}, client_secret = {client secret}
  • Email address of an application user in the environment
URL[edit]
  • /api/v1/shift/most-recent
Method[edit]
  • GET
Authorization[edit]
  • OAuth 2.0
  • Bearer bfcbc74597c875811317a161b708147a408c6dfe26ac529795b88a870443f2d8
Headers[edit]
  • Host = <calculated when request is sent>
Parameters[edit]
  • email = [string: email address]
Response[edit]
  • Code 200
  • Content:
    [
      {
        "mostRecentShift": "2021-10-14T00:00:00.000Z",
        "id": "34"
      },
      {
        "mostRecentShift": "2021-05-14T00:00:00.000Z",
        "id": "35"
      },
      {
        "mostRecentShift": "2021-03-03T00:00:00.000Z",
        "id": "37"
      },
      {
        "mostRecentShift": "2021-03-10T00:00:00.000Z",
        "id": "41"
      }
    ]
    

External data source APIs[edit]

Task API[edit]

The Task API provides an alternative to workflow tags and can also be used in conjunction with them. This API provides a more precise method of capturing workflows; however, it does require some setup on the client side.

Task-based workflows track specific activities and help clients to detect patterns between applications/screens within a given start and end time (for example, the start and end time of a process, such as an address change, claim dispute, profile update and so on).

The following list outlines an example process using the Pend Address Change feature, as utilized by a client:

  • A task start trigger, start change of address
  • A task end trigger, pend change of address
  • A task start trigger to Pend Resolve change of address
  • A task end trigger to Resolve change of address
  • The system administrator would send:
    • Task name, XXXX Change of Address
    • Start Time
    • End time
    • Network ID of user performing task

All activities between these two points are captured within the portal, bookending all of the work. After the Change of Address, Pend, and Resolve Change of Address tasks are set up in the Task API, a client can see how many first call resolutions were completed in a case versus a pended case, and how long the average change of address took if the case was pended.
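As a rough illustration only, a client-side integration might assemble the data points the administrator sends like this. The field names below are hypothetical: the source lists only the values to send (task name, start time, end time, and network ID), not the actual Task API schema:

```python
from datetime import datetime, timezone

def task_event(task_name, start, end, network_id):
    """Bundle the data points listed above into one task record.
    Key names are illustrative, not the documented Task API schema."""
    return {
        "taskName": task_name,          # e.g. "Pend Change of Address"
        "startTime": start.isoformat(), # task start trigger time
        "endTime": end.isoformat(),     # task end trigger time
        "networkId": network_id,        # user performing the task
    }

evt = task_event(
    "Pend Change of Address",
    datetime(2021, 10, 14, 9, 0, tzinfo=timezone.utc),
    datetime(2021, 10, 14, 9, 12, tzinfo=timezone.utc),
    "user42",
)
```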

When enabling Workforce Intelligence in Pega Customer Service, task data is automatically sent for analysis.

Metric API[edit]

If Workforce Intelligence is enabled in Pega Customer Service, you will be able to see the Workforce Intelligence dashboard (pictured below). This API allows you to send additional metrics, such as a Net Promoter Score, from other data sources; for example, Pega Customer Service or even another external source metric. The new metric can be plotted with the production score that is generated by the Workforce Intelligence application.

As shown in the following image, the Net Promoter Score is coupled with the production score and provides a graphical representation of where your agents are positioned in each of the four quadrants. You can customize the four quadrants based on your business needs.

Metric API - Screenshot.jpg

Note: The four quadrants cannot be customized for different departments or teams. Currently, the quadrants are defined at the company (root) level.