Designing Pega unit tests
|Description|Best practices for designing Pega unit tests|
|Version as of|8.5|
|Capability/Industry Area|Low-Code Application Development|
Use cases and supported rules
A Pega unit test case identifies one or more testable conditions (assertions) that are used to determine whether a rule returns an expected result. Reusable test cases support the continuous delivery model, providing a way to test rules on a recurring basis to identify the effects of new or modified rules.
Pega unit test cases can be executed whenever code changes are made that might affect existing functionality. Additionally, unit test cases can be grouped into test suites so that multiple test cases and suites can be executed in a specified order.
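Pega unit test cases and suites are configured through rule forms rather than written as code, but the grouping idea maps cleanly onto an ordinary xUnit-style suite. The sketch below illustrates it in plain Python unittest as an analogy only; `apply_discount` and the test names are hypothetical stand-ins, not Pega APIs.

```python
import unittest

def apply_discount(price, rate):
    """Hypothetical stand-in for a rule under test."""
    return price * (1 - rate)

class DiscountTest(unittest.TestCase):
    def test_half_rate(self):
        # One assertion checking the expected result of the rule.
        self.assertEqual(apply_discount(200, 0.5), 100)

class NoDiscountTest(unittest.TestCase):
    def test_zero_rate(self):
        self.assertEqual(apply_discount(100, 0), 100)

def build_suite():
    # Cases are added in a specified order and executed as one run,
    # mirroring how a Pega unit test suite groups test cases.
    suite = unittest.TestSuite()
    suite.addTest(DiscountTest("test_half_rate"))
    suite.addTest(NoDiscountTest("test_zero_rate"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())
```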
Pega unit testing supports the following rule types:
- Case type
- Data page
- Data transform
- Decision table
- Decision tree
- Declare expression
- Map value
- Report definition
Best practices for designing Pega unit automated tests
Identify the right test to automate
- Prioritize automating the following tests:
- Tests that have predictable results
- Tests that need to be executed frequently
- Tests that are easy to automate
- Tests that reduce manual effort when testing complex logic
- Tests aimed at achieving high test coverage during new functionality development
- Rules that are isolated and do not depend on complex prerequisite data setup
- Rules that have wide usage across the application
- Tests that provide a high return on investment, weighing the effort and complexity of creating the Pega unit test against the coverage it provides (assess this before creating the test)
- Treat tests in the following categories as low priority for automation:
- Tests that change very frequently and would therefore require frequent test case maintenance
- Tests that are trivial to run manually but complex to automate
- Tests that need to persist data to the database, because they might interfere with existing use cases in some scenarios (perform proper due diligence in these scenarios)
Keep tests independent and deliver consistent results
- Each test case must be as independent as possible:
- Tests should not depend on any other test case.
- Create all the prerequisite data required to execute the test on the Setup tab of the test case, so that the test can run in any environment and produce the same results.
- During setup, also prepare the data needed by any rules that the rule under test sources from or refers to.
- Author tests so that they do not interfere with tests that run after them. Use the Cleanup feature to restore any clipboard system pages, as well as any changes to data instances or work instances made during test execution.
- The choice of data setup plays a crucial role in developing a test case quickly and in achieving proper coverage for the rule under test. For example, if a test case needs complex data with embedded pages and page lists for its evaluation, take a snapshot of the clipboard page instead of setting all the data in a data transform.
- Develop tests in line with rule development.
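In Pega, prerequisite data lives on the test case's Setup tab and the Cleanup feature restores clipboard pages and changed instances afterward. As an analogy only, the same discipline looks like this in plain Python unittest; the in-memory `DATA_STORE` and the rule stand-in `is_eligible` are hypothetical, not Pega APIs.

```python
import unittest

# Hypothetical shared state standing in for clipboard/data pages.
DATA_STORE = {}

def create_customer(store, cid, status):
    store[cid] = {"status": status}

def is_eligible(store, cid):
    """Hypothetical stand-in for the rule under test."""
    return store.get(cid, {}).get("status") == "Active"

class EligibilityTest(unittest.TestCase):
    def setUp(self):
        # Setup-tab analogy: create every prerequisite record the test
        # needs, so it runs identically in any environment.
        create_customer(DATA_STORE, "C-1", "Active")

    def tearDown(self):
        # Cleanup analogy: undo everything this test created, so tests
        # that run afterward are not affected by leftover data.
        DATA_STORE.clear()

    def test_active_customer_is_eligible(self):
        self.assertTrue(is_eligible(DATA_STORE, "C-1"))

    def test_unknown_customer_is_not_eligible(self):
        self.assertFalse(is_eligible(DATA_STORE, "C-999"))
```

Because each test creates and removes its own data, the two tests pass in any order and leave no state behind.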
Keep tests portable
Identify a separate test ruleset to store all test case rules:
- This ruleset need not be packaged for production.
- This ruleset can be packaged for any environment in which the tests need to be present.
- This ruleset must be configured to store test case rules.
- Add this ruleset as the last ruleset in the application's ruleset stack, to avoid storing other application rules in it.
- Include the Pega unit tests in the daily CI execution to identify issues, such as those caused by new code merges, on a daily basis.
If you have a test application built on top of your actual application, add all the test rulesets that contain your Pega unit tests, Pega scenario tests, and related test data to this test application. For more details, see How to maintain a test application for storing your test cases and related artefacts of an actual Pega application.
Ensure proper test coverage
- Design possible cases and scenarios manually before attempting to write automated tests.
- Ensure different possible paths of execution for the rule are covered, not just the happy paths.
- Consider all positive and negative scenarios.
- Consider boundary cases for the tests.
- Cover exceptions and any error messages.
- Add enough validations to ensure that the functionality is properly tested, with tests covering all input and output combinations.
- Create as many small unit tests per rule as necessary to achieve the coverage suggested above. Smaller unit tests make it quicker to pinpoint where rule functionality breaks.
- Keep the test case logic short, crisp, and visible, covering only what is required.
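To illustrate the coverage advice above (happy path, boundaries, negative paths, small focused tests), here is a decision-table-style rule sketched as a plain Python function with one small test per path. `risk_band` and all names are hypothetical stand-ins, not Pega rules.

```python
import unittest

def risk_band(score):
    """Hypothetical stand-in for a decision table: map a score to a risk band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 70:
        return "Low"
    if score >= 40:
        return "Medium"
    return "High"

class RiskBandTest(unittest.TestCase):
    def test_high_score_is_low_risk(self):
        # Happy path.
        self.assertEqual(risk_band(85), "Low")

    def test_boundary_values(self):
        # Boundaries between decision-table rows are the likeliest bugs.
        self.assertEqual(risk_band(70), "Low")
        self.assertEqual(risk_band(69), "Medium")
        self.assertEqual(risk_band(40), "Medium")
        self.assertEqual(risk_band(39), "High")

    def test_out_of_range_raises_error(self):
        # Negative path: exceptions and error messages are part of the contract.
        with self.assertRaises(ValueError):
            risk_band(101)
```

Because each path has its own small test, a single failure points directly at the row of the table that broke.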
Ease of maintenance
- Each test case must be easy for anyone to read and understand:
- The test case name and description should be relevant and explain the purpose of the test case.
- Follow a consistent naming convention to help filter test cases for execution and modification.
- Add comments to every step for better readability; in particular, comment the assertions.
- Maintain the history of changes by using the History tab.
- Test only one piece of functionality per test, instead of putting all assertions into one test case, so that the root cause of a failure can be identified quickly.
- Use a limited number of assertions per test case, and keep all relevant assertions together.
- Modularize the preparation of test data as much as possible for large and complex structures, so that future changes can be applied easily and quickly.
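The modularization point above can be sketched outside Pega: small builder helpers assemble a large nested test data structure (analogous to embedded pages and page lists), so a structural change is made in one place rather than in every test. All names here are hypothetical.

```python
# Hypothetical builder helpers for a nested order structure.
def make_address(city="Boston", country="US"):
    return {"city": city, "country": country}

def make_order_line(sku, qty=1, price=10.0):
    return {"sku": sku, "qty": qty, "price": price}

def make_order(lines=None, address=None):
    # One builder assembles the nested structure; individual tests
    # override only the fields they actually care about.
    return {
        "lines": lines or [make_order_line("SKU-1")],
        "shipping": address or make_address(),
    }

# A test needing a two-line international order states only that intent:
order = make_order(
    lines=[make_order_line("SKU-1"), make_order_line("SKU-2", qty=3)],
    address=make_address(city="Paris", country="FR"),
)
```

If a new mandatory field is later added to an order line, only `make_order_line` changes; every test that uses the builders picks it up automatically.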
Automation should deliver results
- The time spent running and maintaining automated tests must be much less than the time spent on manual testing.
- Run tests regularly: as a daily CI job, and on every merge and check-in at a minimum.
- The majority of defects should be caught by automated tests; if not, it is time to refactor the tests or change the strategy.
Limitations
- The framework does not yet follow the standard data-driven or keyword-driven approaches, but it will eventually.
- Data is tightly coupled with the test case; to change the data for a test case, you must edit the test case.
- The test-driven development (TDD) approach is not incorporated yet.
- Case type and flow rules have limited support. Non-starter flows are not supported.
- When you run test cases or test suites in bulk, they run sequentially (one after the other), not in parallel.
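For contrast with the tight data coupling noted above, a data-driven test keeps its input/expected pairs in a table separate from the test logic, so changing the data does not mean editing the test body. A sketch in plain Python unittest, with `shipping_fee` as a hypothetical rule stand-in:

```python
import unittest

def shipping_fee(weight_kg):
    """Hypothetical stand-in for a rule under test."""
    if weight_kg <= 1:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1)

# The data table: add or change rows without touching the test logic.
CASES = [
    (0.5, 5.0),
    (1, 5.0),
    (2.0, 7.0),
    (3.5, 10.0),
]

class DataDrivenShippingTest(unittest.TestCase):
    def test_all_rows(self):
        for weight, expected in CASES:
            # subTest reports each failing row individually.
            with self.subTest(weight=weight):
                self.assertEqual(shipping_fee(weight), expected)
```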