Best practices for implementing high-performing Pega Sales Automation applications


Description: Implementing high-performing Pega Sales Automation applications
Version: as of 8.1
Application: Pega Sales Automation
Capability/Industry Area: Performance



Introduction

This article captures information about analyzing the performance of any Pega application by using the tools available in Pega Platform, as well as best practices for implementing high-performing Pega Sales Automation applications.

Analyze performance

Pega recommends testing for performance throughout the application development lifecycle. Performance testing during development cycles helps you optimize the application before it is pushed to production. Discovering and remediating performance issues in a production application is disruptive and expensive.

As part of performance testing for Pega applications, focus on the following areas by using the performance tools located in Dev Studio:

  • Memory footprint (Performance tool)
  • CPU utilization
  • Database queries

Memory footprint

The memory footprint of a user is determined by the size of the individual cases multiplied by the number of cases that are open during the working session. The memory footprint on the server is critical for sizing a system to accommodate the expected user load while leaving sufficient headroom for occasional traffic spikes. Out-of-memory errors cause nodes to crash and disrupt users, who lose their unsaved work.
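To make the sizing arithmetic concrete, the following minimal Java sketch estimates the clipboard memory that a node needs. The case size, open-case count, user count, and headroom factor are hypothetical example values, not Pega recommendations.

  // Rough sizing sketch: estimate clipboard memory needed on a node.
  // All inputs are hypothetical example values, not Pega-recommended numbers.
  public class ClipboardSizing {
      public static void main(String[] args) {
          double caseSizeMb = 2.0;        // average clipboard size of one open case
          int openCasesPerUser = 3;       // cases open at once in a working session
          int concurrentUsers = 200;      // expected concurrent users on the node

          double perUserMb = caseSizeMb * openCasesPerUser;
          double totalMb = perUserMb * concurrentUsers;
          // Leave headroom for traffic spikes, e.g. plan for 30% above steady state.
          double withHeadroomMb = totalMb * 1.3;

          System.out.printf("Per user: %.1f MB, steady state: %.1f MB, planned: %.1f MB%n",
                  perUserMb, totalMb, withHeadroomMb);
      }
  }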

Use the Performance tool in Dev Studio to examine the clipboard (memory) usage from screen to screen. Identify key scenarios and evaluate them step by step to assess memory usage. Run the Performance tool twice to validate that memory utilization is consistent and to ensure that there are no leaks caused by pages accumulating in memory.

To collect the memory footprint by using the Performance tool:

  1. In the bottom-right corner of Dev Studio, click Performance.
  2. Click Reset to reset the performance data.
  3. Run the test scenario.
  4. Click Add reading with clipboard size to collect the footprint.
  5. Open the delta snapshot that was last added and examine the requestor clipboard size to determine whether it is within the threshold.
    1. If the clipboard size is greater than 5 MB, examine it more closely by performing the following steps:
      • Examine the report definitions that are used to retrieve data to ensure that they use an optimized WHERE clause that returns only the data needed.
      • Explicitly remove temporary pages (created, for example, to run reports or fetch external data) to reduce the memory footprint.
    2. If the clipboard size is within the threshold, there are no issues with the application performance.
  6. Repeat steps 3 through 5 for each screen in the scenario.

CPU utilization

CPU utilization by a user impacts the sizing of a system. Optimizing CPU usage results in faster response times for users and allows more users to run on the same hardware. You can use the Performance tool in the same way as for the memory examination to review key metrics that reflect the amount of work being done. For each screen in the scenario, click Add reading (without the clipboard size) and open the delta snapshot to examine the metrics.

An interaction that executes more than 15,000 rules can likely be optimized. If an interaction exceeds this threshold, it most probably runs complex or unoptimized logic.

  • Evaluate this logic to identify potential areas for improved design that can reduce the cost of running interactions.
  • A large result set from a data query might cause the system to execute a large number of rules to process each result. To reduce the processing impact, examine the business value of the data returned and reduce the result set to the data required to complete the task.

Database queries

The volume and speed of database access is often a key factor in the performance of an application. The complexity and frequency of SQL queries are two metrics that you must analyze. The default SQL query complexity threshold is 500. Use the Tracer to see every query and its execution time. You can use the Performance tool to check the elapsed time spent in the database when executing a report or requesting data from an external system through a connector.

  • If a single query takes more than 200 ms, there is a potential performance issue. Use the Tracer to examine each executed query and identify the queries that exceed the threshold. Try to narrow the filter criteria and improve the WHERE clause (see the timing sketch after this list).
  • If the RDB I/O count is greater than 20, there might be a performance issue. Excessive queries to the database can be a sign of improper query logic or a poorly designed database schema. Check the queries to ensure that they provide the business value needed. You can reduce repeated queries to the same table by improving the filter criteria of that query.
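The following minimal JDBC sketch, which is not Pega-specific, shows the idea behind both bullets: time each query against the 200 ms threshold, and fetch only the columns and rows that the task needs instead of SELECT *. The connection URL, credentials, table, and column names are hypothetical.

  import java.sql.*;

  // Sketch: time a query against the 200 ms threshold while keeping the
  // WHERE clause narrow and the result set small. Names are hypothetical.
  public class QueryTiming {
      public static void main(String[] args) throws SQLException {
          String url = "jdbc:postgresql://localhost:5432/salesdb"; // hypothetical
          String sql = "SELECT id, name, stage FROM opportunities "
                     + "WHERE owner_id = ? AND stage = 'Open' LIMIT 100";

          try (Connection con = DriverManager.getConnection(url, "user", "secret");
               PreparedStatement ps = con.prepareStatement(sql)) {
              ps.setString(1, "OP-1042");
              long start = System.nanoTime();
              try (ResultSet rs = ps.executeQuery()) {
                  int rows = 0;
                  while (rs.next()) rows++;
                  long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                  if (elapsedMs > 200) {
                      System.out.println("Potential issue: " + elapsedMs + " ms for " + rows + " rows");
                  } else {
                      System.out.println("OK: " + elapsedMs + " ms for " + rows + " rows");
                  }
              }
          }
      }
  }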

Guidelines for building a high-performing Pega Sales Automation application

To build a high-performing application, concentrate on the following crucial areas, because the way they are configured impacts application performance:

  • Data import
  • User experience
  • Clipboard and data pages
  • Monitor requestors for evaluating clipboard size
  • Integration and connectors

Data import

Application performance is often impacted by the volume of imported data. The following recommendations help you avoid degrading application performance during data import:

  • A file listener-based upload should not exceed 1 million records in a single file.
  • The recommended batch size for uploads is 1,000 records. Configure it in App Studio > Settings > Application settings.
  • To improve performance and to skip creating audit history, use the Add Only mode for the initial data import.
  • To ensure maximum parallel processing, provide as many input files for the file listener as there are threads, because each thread processes one file at a time (see the file-splitting sketch after this list). Configure this in the file listener properties in Dev Studio, in the Listener properties section.
  • Database indexes improve query performance; however, when you update a large database table, the system re-indexes it, which can lower performance. Remove non-essential indexes during the import phase, and re-enable them after the import is complete.
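As a rough illustration of the one-file-per-thread recommendation, the following Java sketch splits one large import file into one file per listener thread. The file names and thread count are hypothetical example values.

  import java.io.*;
  import java.nio.file.*;
  import java.util.List;

  // Sketch: split one large import file into one file per listener thread so
  // that every thread has work (each thread processes one file at a time).
  // The file names and the thread count are hypothetical example values.
  public class SplitImportFile {
      public static void main(String[] args) throws IOException {
          int threads = 4;  // match the file listener's configured thread count
          List<String> lines = Files.readAllLines(Path.of("contacts.csv"));
          String header = lines.get(0);
          int records = lines.size() - 1;
          int perFile = (records + threads - 1) / threads;

          for (int t = 0; t < threads; t++) {
              int from = 1 + t * perFile;
              int to = Math.min(from + perFile, lines.size());
              if (from >= to) break;
              Path out = Path.of("contacts_part" + t + ".csv");
              try (BufferedWriter w = Files.newBufferedWriter(out)) {
                  w.write(header); w.newLine();
                  for (String line : lines.subList(from, to)) {
                      w.write(line); w.newLine();
                  }
              }
          }
      }
  }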

For more information about how to improve the application performance while importing data, see Improve the data import performance by using configuration.

User experience

How the user experience and user interface are designed has a large impact on application performance, both actual and perceived. To avoid poor performance, determine what information is important to display to users. Consider the following tips:

  • Don't display information until it is actually needed. For example, when searching for an object, or on landing pages such as organization, account, contact, lead, and opportunity, display only enough information to identify the object. You can display the remaining details in the work object review harness.
  • If possible, ensure that sections use the defer load option.
  • When showing a list of items, enable progressive loading and set a reasonable page size, for example, 15 items per page (see the paging sketch after this list).
  • When building show and hide logic, use expressions instead of When rules, because expressions execute on the client side instead of the server side. As a result, expressions respond faster and reduce the server-side load as you scale.
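Progressive loading maps naturally onto paged queries. The following minimal JDBC sketch, with hypothetical connection details, table, and column names, fetches one 15-item page per request instead of the whole list.

  import java.sql.*;

  // Sketch of progressive loading: fetch one 15-item page per request
  // instead of the full list. Table and column names are hypothetical.
  public class PagedList {
      static final int PAGE_SIZE = 15;

      // Fetch and print a single page of results.
      static void printPage(Connection con, int pageNumber) throws SQLException {
          String sql = "SELECT id, name FROM leads ORDER BY name LIMIT ? OFFSET ?";
          try (PreparedStatement ps = con.prepareStatement(sql)) {
              ps.setInt(1, PAGE_SIZE);
              ps.setInt(2, pageNumber * PAGE_SIZE);
              try (ResultSet rs = ps.executeQuery()) {
                  while (rs.next()) {
                      System.out.println(rs.getString("id") + "  " + rs.getString("name"));
                  }
              }
          }
      }

      public static void main(String[] args) throws SQLException {
          String url = "jdbc:postgresql://localhost:5432/salesdb"; // hypothetical
          try (Connection con = DriverManager.getConnection(url, "user", "secret")) {
              printPage(con, 0); // load only the first 15 items
          }
      }
  }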

Clipboard and data pages

Several factors improve application performance: a clean clipboard on job scheduler and file listener requestors, a reasonable overall requestor clipboard size, and efficient, correct load strategies on data pages. A well-designed application remains consistently responsive for job schedulers and file listeners regardless of their shift duration, and it must scale when there are many users per node.

When possible, make data pages requestor-level to avoid constant reloads. System-of-record data, data used across cases, and data shared across interactions and service intents are all good candidates for the requestor level.

Do not use the Reload once per interaction setting for data pages. This option reloads the data page on every HTTP interaction, instead of once during the interaction. If you need a fine-grained reload strategy, consider using the If older than or the Do not reload when settings, but force the condition to evaluate to 'true'.
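Outside Pega, the If older than strategy behaves like a time-to-live cache: the value is reused until it is older than the configured age, and only then reloaded. The following Java sketch is a conceptual analogy only, not a Pega API; the class and method names are hypothetical.

  import java.time.Duration;
  import java.time.Instant;
  import java.util.function.Supplier;

  // Conceptual sketch of an "If older than" reload strategy: the cached value
  // is reused until it is older than the TTL, then reloaded from the source.
  // This is an analogy for data page behavior, not a Pega API.
  public class IfOlderThanCache<T> {
      private final Supplier<T> loader;
      private final Duration maxAge;
      private T value;
      private Instant loadedAt;

      public IfOlderThanCache(Supplier<T> loader, Duration maxAge) {
          this.loader = loader;
          this.maxAge = maxAge;
      }

      public synchronized T get() {
          Instant now = Instant.now();
          if (value == null || loadedAt.plus(maxAge).isBefore(now)) {
              value = loader.get();   // reload only when the data is stale
              loadedAt = now;
          }
          return value;
      }

      public static void main(String[] args) {
          IfOlderThanCache<String> rates =
              new IfOlderThanCache<>(() -> "rates@" + Instant.now(), Duration.ofMinutes(5));
          System.out.println(rates.get()); // first call loads
          System.out.println(rates.get()); // served from cache until 5 minutes pass
      }
  }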

For read-only requestor-level data pages, always select the Clear pages after non-use setting.

Monitor requestors for evaluating clipboard size

Monitor requestors periodically for average size to help identify potential issues, such as too many items loaded on the clipboard, pages not being cleared, and users who do not close their sessions.

In Admin Studio, click Users > Requestors > Load requestor size to monitor the clipboard size.

The key items to watch are the average page size of the requestors and the largest page sizes. On average, the requestor size should be 12 MB or less across all system users.

Integration and connectors

Some Pega Sales Automation features integrate with external systems of record. To ensure high performance when integrating with these systems, follow these guidelines:

  • Upon receiving connectors, run load against those connectors independently by using industry-standard tools, such as JMeter, to understand their performance profile under load (see the probe sketch after this list).
    • Interfaces should ideally respond in less than 500 milliseconds. Anything that averages over 1 second should be remediated for performance, independently of its integration into your application.
  • Aggregate data sources in data pages to reduce the number of moving pieces required for integration, both for saves and reads.
    • Implementations often use complex and hard-to-follow activities to perform integrations. Aggregate-source data pages remove that complexity.
    • Directly reference connectors and their associated mappings, and if required, add a data transform after your connector sources to perform any reconciliation or error handling after calling multiple interfaces.
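As a lightweight complement to a full JMeter test plan, the following Java sketch probes a connector endpoint and compares the average latency against the 500 ms target and the 1 second remediation threshold. The endpoint URL and sample count are hypothetical placeholders.

  import java.net.URI;
  import java.net.http.*;
  import java.time.Duration;

  // Sketch: probe a connector endpoint repeatedly and report average latency
  // against the 500 ms target and the 1 s remediation threshold. The URL is
  // a hypothetical placeholder; tools like JMeter do this at real load levels.
  public class ConnectorProbe {
      public static void main(String[] args) throws Exception {
          HttpClient client = HttpClient.newHttpClient();
          HttpRequest request = HttpRequest.newBuilder(
                  URI.create("https://example.com/api/accounts/42"))
                  .timeout(Duration.ofSeconds(5))
                  .build();

          int samples = 20;
          long totalMs = 0;
          for (int i = 0; i < samples; i++) {
              long start = System.nanoTime();
              client.send(request, HttpResponse.BodyHandlers.discarding());
              totalMs += (System.nanoTime() - start) / 1_000_000;
          }
          long avgMs = totalMs / samples;
          if (avgMs > 1000)      System.out.println("Remediate: avg " + avgMs + " ms");
          else if (avgMs > 500)  System.out.println("Watch: avg " + avgMs + " ms");
          else                   System.out.println("OK: avg " + avgMs + " ms");
      }
  }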

Tools for performance testing

Performance is an important aspect of any system. The perceived quickness of the system often determines user satisfaction and business value. Pega provides a full suite of tools to monitor and address performance. Use these tools during development to ensure that your configuration performs optimally.

To open system performance, in the header of Dev Studio, click Configure > System > Performance.


Performance landing page

The Performance landing page provides access to the following performance tools:

  • Performance Analyzer (PAL)
  • Database Trace
  • Performance Profiler

These tools are also available by clicking the Performance tool in the toolbar. Use the performance tools to collect performance statistics. For more information, see Understanding system resources.

Analyze application performance with PDC

Pega Predictive Diagnostic Cloud (PDC) is a Software as a Service (SaaS) tool that runs on Pega Cloud® and actively gathers, monitors, and analyzes real-time performance and health indicators from all active Pega Platform™ applications. PDC gathers, aggregates, and analyzes alerts, system health pulses, and guardrail violations generated from Pega applications.

For more information, see Analyzing application performance through PDC.

Analyze application performance with Database Trace

The Database Trace generates a text file containing the SQL statements, rule cache hit statistics, timings, and other data that reflect the interactions of your requestor session with the Pega Platform database or other relational databases. Familiarity with SQL is not required to interpret the output.

To open the Database Trace, in Dev Studio, click System > Performance > Database Trace. You can also open it from the Performance tool in the toolbar.

For more information, see Analyzing application performance through Database Trace.

Analyze application performance with Performance Profiler

The Performance Profiler is useful for determining which part of a process might have performance issues, or for identifying the particular step of a data transform or activity that might have a performance issue.

When you use the Performance Profiler, you first record readings of the application's performance, then you analyze the readings to identify problems.

To open the Performance Profiler, in Dev Studio, click System > Performance > Performance Profiler. You can also open it from the Performance tool in the toolbar.

For more information, see Analyzing application performance through Performance Profiler.

Analyze application performance with Performance Analyzer

The Performance Analyzer (PAL) provides a view to all the performance statistics that Pega Platform™ captures. Use PAL to understand the system resources consumed by processing a single requestor session.

To open PAL, in Dev Studio, click System > Performance or select the Performance tool in the toolbar.

For more information, see Analyzing application performance through Performance Analyzer.

Analyze the logs through PegaRULES Log Analyzer (PLA)

Regular monitoring of log files during development and production helps to ensure that your application is operating properly. The PegaRULES Log Analyzer (PLA) is a standalone web application that developers and system administrators use to view summaries of console logs.

Use the PLA to test new or reconfigured Pega applications during user acceptance testing (UAT), performance and stress testing, and immediately after deployment into a production environment.

For more information, see PegaRULES Log Analyzer.

Summary

Pega Platform provides several tools to help you implement high-performing applications. By using these tools, you can analyze application performance, debug the root causes of performance degradation, and take the necessary steps to improve performance further.