RPA automated testing

From PegaWiki
Revision as of 11:00, 18 January 2022 by WSBotScript (talk | contribs) (Bot action - changed email parameter value)


Description Discusses the use of automated testing with RPA solutions.
Version as of 19.1
Application Pega Robotic Process Automation
Capability/Industry Area Robotic Automation



Although it is possible to automate the testing of the robotic automations that you create, even to the point of building an RPA testing solution that tests RPA use cases (if you can automate almost anything with RPA, why not automate the testing of an automation with an automation?), most users ultimately deem the idea impractical.

Three main issues make automated RPA testing administratively and economically prohibitive:

Providing system access

Automations work either on behalf of a user who has a specific security access profile for each application, or through a system ID that is configured just as a user would be. Some systems lack adequate sub-environments in which testing access is even available. Creating and maintaining a super-user profile for automated testing that mixes UAT and production access often represents a security risk and is difficult to keep current, because new automations that require access to new systems are constantly being developed.

Creating and maintaining test data

Fully automated testing requires a large number of test scenarios, including exception cases that are usually not available in sub-environments. Some automation steps perform actions in target systems that consume test scenarios irrevocably. As a result, automated tests can fail simply because, without adequate test data, the automation cannot reach certain screens or successfully complete transactions. Constantly refreshing test data to replace the consumed scenarios is often impractical.

Maintaining the automated testing framework

In addition to the previous issues, most RPA solutions require modification before they can be tested automatically. Some changes are significant, such as excluding transactions, screens, or scenarios that cannot be tested effectively, and the outcome is a set of RPA solution copies adapted for testing. Going forward, these extra testing solutions impose an additional maintenance burden: they must be kept in sync with every change made to the production solutions.

So, what are the best practices?

Offsetting factors unique to RPA technology minimize the risk of disruption, even without the effort of setting up and maintaining automated testing:

  • Quick turnaround time for RPA solution maintenance
    • 99% of system changes that can impact existing automations can be coded and tested with as little as one week of lead time before the system release, because application control re-matching does not require a rewrite of the RPA code. This allows proactive mitigation of high-risk system changes within tight timelines.
  • Temporary manual fallback is available for most automations in case of unexpected downtime caused by system changes; by their nature, RPA solutions automate work that human users can perform manually while a solution is down.
  • Centralized deployment of updated RPA solutions to target user/bot machines enables the prompt release of automation changes within 24 hours for the rare cases of system changes that have not been evaluated in advance.
  • Virtualized computing environments are typically cheap and can carry spare capacity to allow RPA solutions to catch up on the volumes after an unexpected brief outage, with minimal impact to business SLAs.
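The catch-up argument in the last bullet can be made concrete with a back-of-the-envelope calculation. The figures below (outage length, transaction rate, spare-capacity fraction) are illustrative assumptions, not Pega guidance:

```python
def catchup_hours(outage_hours: float, rate_per_hour: float, spare_fraction: float) -> float:
    """Hours needed to clear the backlog from an outage, given spare capacity.

    A backlog accrues at rate_per_hour during the outage; once the solution is
    back up, only the spare throughput (rate_per_hour * spare_fraction) is
    available to work the backlog down while normal volume continues to arrive.
    """
    backlog = outage_hours * rate_per_hour             # transactions queued during the outage
    spare_throughput = rate_per_hour * spare_fraction  # extra transactions/hour beyond normal load
    return backlog / spare_throughput

# Example: a 2-hour outage at 100 transactions/hour, with 25% spare capacity,
# is cleared in 8 hours of normal operation.
print(catchup_hours(2, 100, 0.25))  # 8.0
```

Under these assumptions, a brief outage is absorbed within the same business day, which is why modest spare capacity usually keeps business SLAs intact.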

In the small number of known cases where customers persevered with setting up automated RPA testing, all ultimately abandoned it or scaled it down. Those who continue with automated testing have reduced its scope to high-level application pings because of the test data constraints noted previously.
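As a sketch of what such a scaled-down "application ping" check might look like, the following Python fragment probes hypothetical health endpoints and summarizes the results. The application names and endpoint URLs are assumptions for illustration; Pega RPA does not prescribe this mechanism:

```python
import urllib.request

# Hypothetical health-check endpoints for the automated applications (illustrative only).
ENDPOINTS = {
    "CRM": "https://crm.example.com/health",
    "Billing": "https://billing.example.com/health",
}

def ping(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def summarize(results: dict[str, bool]) -> str:
    """Collapse per-application ping results into a single report line."""
    failed = sorted(name for name, ok in results.items() if not ok)
    return "ALL UP" if not failed else "DOWN: " + ", ".join(failed)

# Usage: summarize({name: ping(url) for name, url in ENDPOINTS.items()})
```

A check at this level confirms only that the target applications are reachable; it deliberately avoids the test-data consumption problem described above.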

Based on these considerations, the following best practices are recommended instead of automated testing:

  1. Set up change management dependencies for individual RPA solutions to receive notifications of target application releases.
  2. Review pending system changes, assess risk, and work with the business area to perform regression testing of individual automations based on level of risk and availability of sub-environments and test data.
  3. Monitor the impact on automations during the rollout of system changes and troubleshoot quickly if necessary.