
Exam topics (% of exam)

Enterprise Design (17%)
- How Pega fits into the technical landscape of the organization
- Design for architecting Pega for the enterprise
- Determine application infrastructure requirements
- Identify when to incorporate other Pega products
- Architect an environment with high availability
- Diagnose cause of an issue happening only in one environment
- Diagnose cause of an issue after weeks of application running

Case Design (13%)
- Design case hierarchy and relationships between cases
- Evaluate when to use subcase and subflow processing
- Define case locking strategy
- Design for case level and assignment level SLAs
- Recommend appropriate data propagation approach vs. shared data
- Design Get Next Work algorithm
- Evaluate when to use push routing and pull routing
- Evaluate when to use activities, APIs, and functions

Data Model Design (8%)
- Design and extend the application data model
- Design data reuse layers and identify relevant records
- Define mapping between source systems and Pega

User Experience Design (8%)
- Provide thought leadership in the area of UX
- Determine where UX fits into the design architecture
- Identify application functionality that can impact UX
- Design the user experience to optimize performance
- Design for mobile and offline usage

Security (13%)
- Determine the best authentication strategy
- Determine the appropriate authorization model for a given use
- Perform a security assessment; identify and fix issues
- Use security best practices
- Identify security risks

Reporting (8%)
- Design appropriate reporting strategy based on business need
- Identify need for custom SQL functions
- Design reports for performance
- Identify/solve performance problems in reports

Background Processing (10%)
- Determine appropriate background processing design option
- Identify proper techniques to handle background processing
- Design for separate nodes; asynchronous processing
- Optimize standard agents

Asset Design and Reuse (8%)
- Specialize an application by overriding rulesets
- Assess need for extending an existing application
- Identify opportunities for re-using assets
- Refactor an application built in Pega Express

Deployment and Testing (15%)
- Apply Production Deployment best practices, including DevOps
- Design and automate a Testing strategy
- Assess and monitor application quality
- Establish quality measures and expectations on your team
- Customize the rule check-in approval process
General Tips
 60 multiple-choice questions in 120 minutes (30 additional minutes for candidates from non-native English-speaking countries)
 65% to pass, i.e. 39 of the 60 questions must be correct
 Maximum of 3 attempts in a 12-month period
 The positively worded answer is often the right-sized (correct) choice

LSA SUMMARY

DEPLOYMENT

1. Deployment Options
a. On Premise
i. Customer DB, App Server and Other Services
b. Cloud
i. Pega Cloud
 3 Basic Provisions-> Standard(15)/Large(45)/Production
ii. Customer Managed Cloud
iii. Partner Managed Cloud
2. Deployment Services
a. PCF as topology manager (works only with above 3 cloud solutions)
i. Automate system management tasks
 Dynamic Node Allocation/ Load Balancing
b. Using Docker Container to deploy pega
i. Run and Manage Apps side by side in isolated containers
ii. Needs Docker Container and Docker Host System

HIGH AVAILABILITY

1. Clustering (2 or more pega platform servers)


a. Dynamically allocating app servers on demand
i. HA/Reliability/Scalability
2. Load Balancing
a. Distribute workload across multinode clustered environment.
b. Session-Based Affinity (Sticky Sessions) – the same browser session goes to the same JVM (same
Pega server)
i. Configured with Load Balancer
ii. Ensures all requests from a user handled by same server
iii. Only applicable when using slow drain for quiesce
c. Load Balancing Techniques
i. Hardware Routers that Support Sticky Sessions like F5
ii. Cloud Based Load Balancer Like Amazon EC2 (Software Solution)
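The sticky-session idea (every request from the same browser session reaching the same JVM) can be pictured as a deterministic mapping from session id to server. The sketch below is conceptual only; the function and node names are hypothetical and this is not how F5 or EC2 load balancers are actually configured:

```python
import hashlib

def route(session_id: str, servers: list[str]) -> str:
    """Pick a server for a session deterministically (sticky session).
    Hashing the session id means the same session always lands on the
    same server, while sessions overall spread across the pool."""
    digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

Real load balancers typically pin sessions with a cookie or source-IP affinity rather than a hash, but the invariant is the same: identical session, identical target node.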
3. SSO Authentication for Seamless experience
4. Shared Storage
a. Allow stateful application data to move between nodes
b. Can be Shared Disk Drive, NFS or DB. (default is pega db)
c. Stores Passivated user sessions
d. Storing passivated requestors in a file could impact performance of the JVM
e. Database passivation is used as the default passivation method
5. Split Schema
a. Pega DB is split into two schemas: Rules and Data
b. Rule upgrades/rollbacks are managed independently from data
c. Rolling restarts can be performed for upgrades, reducing downtime.
i. Rolling Restarts***
6. Cluster topologies
a. Horizontal Scaling
i. Multiple app servers deployed on separate physical/virtual machines
b. Vertical Scaling
i. Multiple app servers deployed on same physical/virtual machine
ii. Run on different port numbers
c. A cluster can have heterogeneous servers; the only restriction is that all must run the same Pega version
d. In memory cache management for multi node cluster topologies
i. Hazelcast
 Embedded in Pega Platform
 Data is evenly distributed among nodes of cluster
 Can run embedded in every node or run as client-server topology
 Client-Server mode
a. Allow client and server to separately scale up
b. Recommended when there are more than 5 cluster nodes in the environment
c. Client is Pega Platform Nodes
d. Servers are standalone Hazelcast servers
 Client-Server mode Advantages***
a. Cluster member lifecycle is independent since it runs on separate servers
b. Cluster performance more predictable and reliable
c. Client and servers can be scaled independently.
7. Outages

Planned Outages
a. Quiesce – takes the node out of the load balancer, saves sessions to shared storage, and
reactivates them later once the relevant task is done
i. Passivation and Activation
ii. Methods from where Node can be quiesced
 Slow drain – requires taking the node out of the load balancer before starting the
quiesce; all user sessions are activated on another node after the passivation
timeout, which defaults to 5 seconds
 Immediate drain – the default; no need to take the node out first, since all users
(new and existing) are passivated immediately and the node can be
removed from the load balancer at the same time
iii. Pega Cloud must use the ImmediateDrain quiesce strategy
iv. HA Roles
 PegaRULES:HighAvailabilityQuiesceInvestigator
 PegaRULES:HighAvailabilityAdministrator
b. Upgrades
i. Out of Place(Rolling Upgrade)
 Little or no downtime
 Create a new schema / upgrade it / update DB connections to point to the
new schema / quiesce and restart nodes one at a time in a rolling restart
ii. In Place Upgrade
 Significant downtime
 Stop the app / run pre-upgrade scripts or processes / update the database
schema using the IUA / redeploy the EAR or WAR / restart the app
Unplanned Outages
c. Node Crash
i. Load balancer redirects traffic to another node in pool of active servers
ii. Reconstructing UI from UI metadata in shared storage
iii. Clipboard is not preserved, data not committed is lost
iv. SSO as a best practice
d. Browser Crash
i. Connects back to correct server on session affinity
ii. User session recovered without loss of both ui metadata & data on screens
unless***
8. Node Monitoring
a. ADMIN Studio
b. Autonomic Event Services (AES)
c. Mbeans integrated into your Network Operations Center (NOC)
d. HA Cluster Management page
9. Technical
a. Enable the HA dynamic system setting (session/ha/Enabled)
b. PegaRULES:HighAvailabilityAdministrator role - modify cluster settings & quiesce nodes
c. Update the value of the session/ha/quiesce/strategy prconfig setting to be either
"slowDrain" or "immediateDrain" and then restart the server

LEVERAGING PEGA APPLICATION

1. General
a. Offers Customer engagement apps and Industry apps
b. Product gap analysis vs. Solution implementation gap analysis vs. MLP vs. Agile Workbench
c. SIGA – the difference between the features the Pega app provides and the business requirements for the MLP
d. PGA – determines if the Pega app is the right fit for the organization during the sales cycle

DESIGNING THE CASE STRUCTURE


1. Guidelines to Identify a case
a. When do we need a child case or can we go for subprocess
b. Security requirements may force a subcase
2. Parallel Processing
a. Efficient if steps/processes of same case performed by separate requestors
b. 2 types of parallel processing
i. Same case processing using subprocesses
ii. Child case Processing
3. Same Case Processing
a. Using split-join… like subprocesses
b. Multiple assignments created allow different requestors to work on the same case
c. Default locking prevents multiple people updating the same case at the same time – use optimistic locking
4. Child Case Processing
a. Default locking prevents even the parent case from being worked on, so use Do Not Lock Parent
b. Waiting for one type of child case  wait shape or else use a Ticket
5. Factors for both options ***
a. Security, Reporting, Persistence, Locking, Specialization, Performance. – Child Case
b. Data, Reporting, Attachments – Same case
c. Reporting in the same case  data is readily available; in a subcase  good performance
since data sets are smaller
d. Subprocess  limits extensibility and ability to specialize the case
e. Subcase vs subclass  a subclass indicates an is-a relationship in the class structure only, like
when specializing a ClaimTask child case with PartsReturn
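The decision factors above can be condensed into a toy helper. This is only a sketch of the trade-offs listed in these notes (security, locking, and specialization favor a child case; shared data and attachments favor a same-case subprocess); the function and parameter names are mine, not Pega APIs:

```python
def recommend_processing(needs_separate_security: bool,
                         needs_independent_locking: bool,
                         needs_specialization: bool,
                         shares_data_and_attachments: bool) -> str:
    """Suggest child-case vs same-case (split-join) parallel processing
    based on the factors called out in the notes above."""
    if needs_separate_security or needs_independent_locking or needs_specialization:
        return "child case"
    if shares_data_and_attachments:
        return "subprocess (split-join)"
    return "either; decide on reporting and performance needs"
```

For example, a step with its own security model comes back as a child case, while a step that must see the case's data and attachments directly comes back as a subprocess.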

LEVERAGE APP STUDIO


1. General
a. For less technical, more business focused users also known as citizen developers
b. Ensure that assets are built to be reusable, and publish them for re-use in App Studio
c. Manage the common assets through a center of excellence (COE).
d. It is also possible to add a Relevant Record using pxSetRelevantRecord
e. Building an application with App Studio in mind ensures guardrail compliance,
transparency, and visibility
f. We make these assets reusable in App Studio by marking them as relevant records.

DESIGNING FOR SPECIALIZATION


1. General
a. Encapsulation  to hide the values or state of a structured data object
b. Inheritance
c. Polymorphism  change meaning/usage of an entity according to the context
2. SOLID
a. Single Responsibility  UIKit
b. Open/Closed  Open for extension but closed for modification. Eg  PegaHC-Data-
Party extending Data-Party
c. Liskov Substitution  objects referencing other objects by their base class need not
know how they are extended. Eg routing works on Data-Party-Person vs Data-Party-Org
d. Interface Segregation  multiple interfaces to an object, each fulfilling a specific purpose, rather
than 1 large complex interface that is not of interest to the client. Eg service package
e. Dependency Inversion  objects facilitate the configuration of the objects on which they depend.
Eg DCR
3. Design Considerations – Specializing Layers
a. Need not be specific to one app; can be used in many apps across the enterprise.
b. Can be avoided by circumstancing, pattern inheritance, and data modeling techniques.
4. Specialization Layer – Single implementation vs Multiple implementation
a. Enterprise not spanning multiple regions where business rules vary dramatically
b. Enterprise only interested in completing the implementation of a framework
c. Enterprise targeting distinct customer types while leveraging core app
5. Choosing app structure
a. Framework layer IS A specialization layer/spans every case type and an entire layer
b. Production Applications IS AN Implementation application/what end users use
c. Framework defines common work processing for a set of case types & data objects
d. App specialize within by pattern and specialize across using direct inheritance
6. Component applications (Not Components like feature component)
a. Follows the Open/Closed principle and the Template design pattern
b. In specialization, think of applications as components rather than frameworks
c. Different from a component, which is a collection of rulesets
d. Is recursive, does not contain its own unit test rules, and has a stable interface
e. Multiple built-on component apps eliminate the use of ruleset prerequisites; use application validation (AV) instead
7. Component
a. Collection of rulesets with no case type or work pool; not runnable on its own
b. Same functionality can be reused in multiple apps
c. Holds a standalone process, but depends on other application rules for full execution
8. Ruleset, class and circumstance specialization
a. purpose of the Application Version axis is to permit evolution (upgrading and versioning)
of the applications
b. An example of ruleset specialization is how Pega handles Localization
c. Dev Studio displays rules using Requestor-level scope. Case-related rules are normally
circumstanced using a case-related Property. Hence circumstanced rules would only be
active when a case instance is opened, meaning the scope would be Thread-level
d. Pattern inheritance can leverage DCR
e. All pattern-specialized rules are grouped by class
f. Organizational hierarchy specialization can be achieved by direct inheritance
g. App layering is synonymous with the template design pattern and the open/closed principle
h. Component is synonymous with being recursive and having a stable interface
9. Data Driven Specialization (venue differences applied through data)
a. P – No Vendor Specific Rules/Max Scalability
b. C – Highly complex to implement and maintain
10. Single Application Rule Specialization (All done in current app)
a. For few differences  use circumstancing; otherwise use class specialization
b. P – Single Application for users to access
c. P – Easy to implement and maintain compared to data-driven rules.
d. C – extra work when large number of rules to specialize
11. Specialization by Multiple Application (App per venue/new Case type extend from current app)
a. P - Simple development done through application-specific rulesets and classes
b. C – Switching between apps required
c. C – Reporting across multiple apps required
12. We created an Email Component Application reusing an Email Component

REUSE
1. Reasons to version (and not to version) an application
2. COE Responsibilities
a. Manage and promote reusable assets
b. Identify new opportunities in the organization
3. Available in a built-on app but not in a component
a. PegaUnit test creation
b. Self-testable (a component's rules can be unit tested, but the component cannot test itself)
c. Extendable case types
4. Challenge
a. A shared ruleset is included in a component, which is added to both apps so
both can use it at once. Hotel and HotelProxy apps
5. Federated case management uses Pega Web Mashup to link Pega apps in a federation

DESIGNING THE DATA MODEL


1. Managing data is a different role than being the data.
a. Two types of properties in case  Customer Data, Transaction State and history
b. Two types of data in case  Referenced (read only), Calculated (Updateable)
2. Data Modeling can be two ways
a. Build on a preexisting foundation application data model
b. Build an app that has no prior work (called greenfield development)
3. CustomerData schema do not contain “Pega” columns such as pzInsKey or pzPvStream
4. Determine if data can be persisted on case or outside
a. Data if embedded (inside blob)
i. Page list property– optimize using declare index
ii. Page property – col is created in db table & mentioned in class rule mapping tab
b. Data Outside blob
i. PageList /Page Property  reference external source using data page using keys
5. How the template design pattern helps to override the CreateOrUpdateLocation flow in the Work- class
6. Maintaining data integrity
a. Locking instances – Locking tab in data class (only for concrete), only lock when needed
b. Avoiding redundancy
i. single source of truth principal
ii. Better to access directly from pyWorkCover if the data is bound to change
iii. Ref pages outside case will reduce blob size
iv. Instead of maintaining large data within the blob as either an embedded page or page
list, store it in history-like tables (your own table), e.g. tables with no blob columns
v. History-like tables let you see what the pages contain while avoiding blob reads
c. Temporal Data (Solution is Versioning Data)
i. Valid for certain period of time
ii. Create data classes using a base class containing Version/IsValid properties
iii. Use a custom rule, like the Rule-Survey rule class
iv. A custom rule is best for maintaining a list that contains a list
v. Avoid using the data page snapshot pattern if it has temporal data, since it will change
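The versioning solution for temporal data can be sketched as a record carrying a version number and a validity window, with lookup by effective date. This is a conceptual illustration: the class, property names, and the price example are hypothetical, not Pega rule types:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PriceVersion:
    version: int
    valid_from: date
    valid_to: date        # exclusive upper bound of the validity window
    amount: float

def price_as_of(versions: list[PriceVersion], as_of: date) -> float:
    """Never overwrite temporal data: add a new version and select the
    one whose validity window contains the as-of date."""
    for v in versions:
        if v.valid_from <= as_of < v.valid_to:
            return v.amount
    raise LookupError(f"no valid price on {as_of}")
```

A snapshot copy of the amount would silently go stale when a new version is added; looking it up by date keeps historical cases reproducible.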
7. Data Model Best Practices
a. Concrete class  something that persist outside of case, something a case can refer
b. Reference data is data that has very few reference to other data and is atomic/self-
contained/encapsulated
c. No point of reference data to be embedded within case blob
i. One reason to do so may be that the values are transient
1. If copied, make sure to also record why, e.g. the as-of date for a price
ii. Copying a historical price is beneficial for performance as it avoids recalculation
d. When no calculations are involved do a lookup or Join from case to ref data as opposed
to embed in case blob like snapshot pattern.
e. Should be able to deploy concrete class with no strings attached
f. A true concrete instance should remain closed to modification
i. In contrast, an instance with a page list can grow indefinitely over time, increasing blob size
ii. A concrete data class should not maintain a page list of references to other instances
g. Can create data class which is synonymous with case data
i. Assist case in performing its role of being SOR for a transaction
ii. Minimizes case level properties
iii. Provides a way to reuse rules such as sections/views and data transforms
h. Law of demeter
i. Redundant paths to the same object should be avoided, just as redundant data is avoided.
i. Open/Closed  anything working at enterprise should also work at application level
j. Data Class Naming
i. Case should be noun + action  BookEvent
ii. Data should be a noun  Event (no need for EventData/EventDetails)
8. From QA
a. Data locking is required if multiple requestors access the records at the same time, like a queue
b. 2 hierarchy types in OOP interfaces
i. Subsumptive  hierarchy synonymous with the is-a relationship
ii. Compositional  a compositional hierarchy is where a thing is the sum of its
parts, its parts also being compositions
9. From Challenge
a. Circumstancing is ideal for situations where there is a single rule that is applicable to the
majority of situations and where specializations of that rule rarely needed

EXTENDING INDUSTRY FOUNDATION MODEL


1. General
a. Applies the separation of concerns design principle
i. Keeping business logic separate from the interface that gets the data
ii. The interface insulates the business logic from having to know how to get the data
iii. Data pages connect business and interface logic in Pega - allowing you to change the
integration rules without impacting the data model or application behavior
b. Rather than directly extend the industry foundation data model, your project may
mandate that you use an Enterprise Service Bus (ESB)- dictated data model
c. the goal of an ESB is to implement a canonical data model that allows clients who access the
bus to talk to any service advertised on the bus – the Pega team will only focus on maintaining the
mapping between the canonical and foundation data models
2. Extending an industry foundation data model
a. Use data classes directly from industry foundation data model as much as possible.
b. If you create new properties or data classes, generate integration rules in an org-level integration ruleset.
c. Over time the REST service definition can change; use rulesets to maintain versions
d. Allow the COE to manage and deploy these rules to dev environments
e. Use a data dictionary to record data mapping and surjection
3. Integration versioning
a. Mapping code depends on the state of the integration data model
b. Mapping code acts as an insulator between the business and integration data
models. Also known as loose coupling.
c. To deal with changes to integration data models, generate models with new integration
base classes such as Org-Int-ServiceV2
d. DCR can help a data page choose which integration version to use based on the application version
e. can also use a Data-Admin-System-Setting value to a data page to accommodate the
interface data model change
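The insulation idea above can be sketched as one mapper per integration version, selected by a setting (as a DSS or DCR would do), so the business layer always sees the same shape. The payload fields, mapper names, and version keys below are invented for illustration:

```python
def map_customer_v1(payload: dict) -> dict:
    # V1 of the (hypothetical) service returns a single "name" field.
    return {"FullName": payload["name"], "Email": payload["email"]}

def map_customer_v2(payload: dict) -> dict:
    # V2 split the name into two fields; the business model is unchanged.
    return {"FullName": payload["firstName"] + " " + payload["lastName"],
            "Email": payload["email"]}

MAPPERS = {"V1": map_customer_v1, "V2": map_customer_v2}

def load_customer(payload: dict, version_setting: str) -> dict:
    """Pick the mapper from a setting; callers never see which
    integration version produced the data (loose coupling)."""
    return MAPPERS[version_setting](payload)
```

When the service moves to V2, only the mapper and the setting change; every rule built on the business-facing shape keeps working.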

ASSIGNING WORK
1. General
a. Push vs Pull
b. Push routing occurs when a pyActivityType=ROUTE activity is used to create either a
worklist or workbasket assignment
c. Pull assignment using 2 ways
i. using Get Next Work by clicking Next Assignment at the top of the portal
ii. by checking Look for an assignment to perform after add
d. Routing to a role makes it clear why it was routed to the person
e. Avoid using hardcoded operator ids
f. If a background process such as a Standard Agent or Advanced Agent moves a case
forward to an assignment that uses ToCurrentOperator  routing, a ProblemFlow  could result
g. An assignment using ToCurrentOperator  after a Wait shape will route to the requestor
who routed the case to the Wait shape.
h. It may be required to override the default behavior of some aspect of the work parties
such as validation or display. This can be performed either through ruleset specialization
or by extending the work party class and overriding the required rules
i. ToNewWorkParty/ToWorkParty  both route the assignment to a worklist using the partyRole parameter
j. WorkPartyRetrieve  activity is significant since it is invoked every time a page is added
k. VOE option to add work parties when a case is created
l. When dynamically setting the work party values, leverage addWorkObjectParty  activity.
it also allows you to specify a data transform to initialize the values otherwise use VOE.
2. GetNextWork
a. Ownership of the fetched assignment does not occur until either
i. MoveToWorklist is called
ii. The user submits the fetched assignment's flow action
iii. The MoveToWorklist system setting needs to be set to true for this to be called
b. If multiple workbaskets are assigned to the operator, GNW goes through them top to bottom
c. Lower-urgency assignments are only looked at after all applicable workbasket assignments
with urgency above the limit are taken care of
d. Assignments can have required and desired skills; only required skills are considered in GNW
e. The Assign-Worklist.GetNextWork list view uses the default getContent activity to get assignments.
f. Before the assignment returned by the list view is selected, the
Assign-.GetNextWorkCriteria  decision tree checks if the assignment is ready to be
worked on and if the assignment was previously worked on by the user today. The
assignment is skipped if it was previously worked on by the user today.
g. Doesn't return cases that are locked or assignments that have errors
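The Get Next Work walk described above can be condensed into a toy sketch: workbaskets in the operator's listed order, candidates sorted by urgency descending, skipping locked items, items with errors, items worked on today, and items whose required skills the operator lacks. This is a simplified model of the behavior these notes describe, not Pega's actual implementation, and all data shapes are invented:

```python
def get_next_work(operator, assignments, worked_today, locked):
    """Return the id of the next assignment, or None if nothing qualifies."""
    for basket in operator["workbaskets"]:            # top-to-bottom order
        candidates = sorted(
            (a for a in assignments if a["basket"] == basket),
            key=lambda a: a["urgency"],
            reverse=True,                             # highest urgency first
        )
        for a in candidates:
            if a["id"] in locked or a["id"] in worked_today or a.get("error"):
                continue                              # GetNextWorkCriteria-style skip
            if not a["required_skills"] <= operator["skills"]:
                continue                              # desired skills are ignored
            return a["id"]
    return None
```

Note how a very urgent item in a later workbasket loses to a qualifying item in an earlier one, which is exactly why workbasket ordering on the operator record matters.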
3. Customizing GetNextWork
a. change the prioritization of work by adjusting the assignment urgency
b. better long-term solution is to adjust the filter criteria in the Assign-
WorkBasket.GetNextWork and Assign-Worklist.GetNextWork list views, can sort by the
assignment's create date or join the assignment with the case or another object to
leverage other data for prioritization.
c. Modify using getnextworkrules
i. Activity overriding/Rule Circumstancing
d. Modifying without using getnextworkrules
i. Increase case urgency prior to wb routing
ii. List wb in specific order on operators record
e. adjusting the prioritization of work returned by Get Next Work
f. You can create several circumstanced list views if the requirements cannot be done in a
single list view, use this approach for different access group requirements.
g. Use the Assign-.GetNextWorkCriteria decision tree to filter the results returned by
the GetNextWork  list view
h. Using the GetNextWorkCriteria  decision tree for filtering has performance impacts since
items are iterated and opened one-by-one. Always ensure the GetNextWork  list view
performs the main filtering.
i. Besides circumstancing the GetNextWork  ListView, it is also possible to circumstance
the GetNextWorkCriteria  Decision Tree for a particular WorkGroup
4. Leverage work parties in routing
a. Workparties in case provided dynamic, reusable routing solution
b. The WorkPartyRetrieve activity is called every time a page is added to pyWorkParty (OnChange)
c. Work parties' default behavior can be overridden by ruleset or class specialization
5. Configuring Work Party Rules
a. Avoid generic names such as owner and originator
b. Use the VOE option to add work parties when a case is created, or use addWorkObjectParty
c. Define routing for every assignment
6. Alternatives to find work – a/b/d are done without touching GNW rules
a. pyGetWorkBasket  Report Definition behind the  D_WorkBasket  List Data Page used to
display workbasket assignments could be modified to use an ascending sort
on  pxCreateDateTime  as opposed to a descending sort on  pxUrgencyAssign  as it uses by
default.
b. value for  pxUrgencyAssign  could be derived after the fact, i.e., reverse engineered, by
taking into account @CurrentDateTime() and the time that the workbasket assignment
was created
c. A circumstanced version of the  Assign-WorkBasket GetNextWork  ListView could perform
an ascending sort on  pxDeadlineTime  as opposed to
using  pxUrgencyAssign  descending. A circumstanced version of
the  pyGetWorkBasket  Report Definition could do the same.
d. pxGoalTime and pxDeadlineTime can be adjusted from their initial value.
The  pxAdjustSLATimes  API activity would be used
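The alternative orderings above boil down to choosing a sort key: the default list view sorts by pxUrgencyAssign descending, while a circumstanced version could sort ascending on pxCreateDateTime for oldest-first (FIFO) selection. A minimal sketch, using the Pega property names mentioned above on plain dictionaries:

```python
def order_assignments(assignments, fifo: bool):
    """Order workbasket assignments either FIFO (ascending create time,
    as a circumstanced list view might) or by urgency descending (the
    default)."""
    if fifo:
        return sorted(assignments, key=lambda a: a["pxCreateDateTime"])
    return sorted(assignments, key=lambda a: a["pxUrgencyAssign"], reverse=True)
```

With FIFO ordering, urgency no longer needs to be escalated over time just to surface old work; the create timestamp does that for free.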
7. Router activities
a. ToLeveledGroup is a valid option because it takes the user's current workload into
consideration.
b. We can also create custom router activity
8. P & C of different approaches for customizing getnextwork
a. Circumstancing GNW rules
i. P Mirror the existing implementations /Easier to maintain/ Easier to extend
ii. C Requires additional rules to be used as circumstancing property
b. Custom Button & open assignment action
i. P Not tightly couple to pega rules
ii. C Need to add redundant code and custom error handling
c. Override GNW rules
i. P Fewest rules needed
ii. C affect behavior every user

DEFINING AUTHENTICATION SCHEME


1. General
a. Proving who you are
b. PRBasic replaced with Basic Credentials (This is default authentication)
c. Supports OAuth 2.0, OpenID Connect, and Kerberos
d. Pega can be the IdP, or an external provider like MS Active Directory can be the IdP.
2. Enterprise-tier deployments can use container-based authentication, JAAS, or JEE security.

DEFINING AUTHORIZATION SCHEME


1. General
a. About who can do what and who can see what in the app
b. Determine RBAC/ABAC and security of reports/attachments/background processes
c. security on reports and attachments, and background processes
d. Leverage the Deny Rule security mode when defining Access Groups. Some
organizations enforce a deny first policy. In this model, users must be explicitly granted
privileges to access certain information
2. RBAC
a. Can Configure using Access Manager
b. defines the actions that a role is authorized to perform on a class-by-class basis
i. Class wide actions  Execute Activities and Run Reports
ii. Record level actions  Open, Modify (Create and Update), Delete
iii. Rule-specific actions as governed by Privileges
c. Rules used to configure
i. Class/Access When or Deny/Access Group/Access Role/ARO/Privilege
d. NEW RBAC features
i. Dependent roles
ii. Privilege inheritance
iii. Short-circuited access checking
iv. App Studio managed authorization
e. ARO
i. Most specific ARO is taken by Pega
ii. Privilege is granted through the most specific AROs for the class of object
iii. If an operator has multiple AROs, they are joined by OR; only one most-specific ARO needs
to grant access in order to perform the operation
iv. Role has the option for inheriting privileges selected
v. Access permissions and named privileges can be granted up to a specified
Production Level between 1 through 5 (1 being Experimental, 2 being
Development, 3 being QA, 4 being Staging, and 5 being Production) or
conditionally via Access When rules.
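The "most specific ARO per role, OR-joined across roles" resolution can be sketched as follows. The class-walk (longest matching ancestor) and the data shapes are simplified illustrations, not Pega's actual rule resolution; levels mimic granting access up to a production level:

```python
def can_open(access_roles: dict, klass: str, production_level: int) -> bool:
    """access_roles maps role name -> {class: max granted production level}.
    For each role take its most specific ARO for the class; results are
    OR-joined, so one granting role is enough."""
    class_chain = [klass]
    while "-" in class_chain[-1]:                  # walk up to parent classes
        class_chain.append(class_chain[-1].rsplit("-", 1)[0])
    grants = []
    for aro_map in access_roles.values():          # one ARO map per role
        for cls in class_chain:                    # most specific first
            if cls in aro_map:
                grants.append(aro_map[cls] >= production_level)
                break                              # less specific AROs ignored
    return any(grants)
```

Note that a specific ARO granting nothing is not overridden by a broader ARO in the same role (the break), but another role can still grant access (the OR).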
3. ABAC
a. General
i. Enabling Access Control policies to control access to specific attributes of a
record (so long as RBAC granted access to record)
ii. ABAC allows to restrict access on specific instances of classes using policies that
are not role-based, but instead based on other attributes known about the user
iii. It is optional, and used in conjunction with RBAC
iv. Compares user information to case data on a row-by-row basis
v. Access Control Policy Condition rules defining a set of policy conditions that
compare user properties or other information on the clipboard to properties in
the restricted class
vi. Not Applicable for Rule- classes
vii. To Enable ABAC  set DSS EnableAttributeBasedSecurity value to True
viii. Can apply at both record level (visibility of address records) or attribute level (SSN)
ix. The Discover action enables viewing selected data in an instance the user can't open
x. Can prevent overriding the policy in a descendant class
b. Rules used
i. Access Control Policy/ACP Condition
c. ACP
i. Only works with Assign-/Work-/Data-/Index-
ii. Multiple ACPs are joined by AND – access is only allowed when all ACPs are satisfied
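The AND-join of access control policies amounts to a row filter that keeps a record only when every policy condition comparing user attributes to record attributes passes. A toy sketch (attribute names and policy shapes are invented for illustration):

```python
def visible_rows(rows, user, policies):
    """Each policy is a predicate (user, row) -> bool; policies are
    AND-joined, so a row survives only if all of them are satisfied."""
    return [r for r in rows if all(p(user, r) for p in policies)]
```

For example, a region policy and a tier policy together hide any row that fails either check, which is why adding a new ACP can only narrow what users see, never widen it.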
4. Dependent role hierarchy
a. Allows inheriting all authorizations available to the dependent role
b. Advantages
i. Eliminates duplicating AROs
ii. Allows to reuse Pega Application Authorization
iii. Allow access role layering
c. When dependent roles are not used
i. Any changes to the Pega’s authorization model in upgraded access roles would
be masked by the application-specific access roles
ii. Any new features delivered in any Pega upgrade may depend on Privileges from
the upgraded access roles that would be masked by app-specific access roles
d. best practice is to create application-specific access roles which specify the foundation
access roles as dependencies (Like user/manager/administrator).
e. This is a replacement for ARO cloning
5. Access Group Considerations
a. Can define access role that can be used by multiple personas and then add these newly
created access roles for the access groups of both personas. One role, 2 personas.
6. RBAC can restrict the operator's access to specific UI components, such as the audit trail and
attachments, or restrict the operator from performing specific actions on a case using privileges.
You can also use RBAC to restrict access to rules and application tools, such as Tracer and Access
Manager, during design time
7. You use ABAC to restrict access on specific instances of classes using policies that are not role-
based, but instead based on other attributes known about the user. The above access control
policies driven by conditions that are not role-based are typically implemented using ABAC,
which can apply at both the record-level (e.g. visibility of Address records) and attribute-level
(e.g. visibility of the Customer’s Social Security Number).
8. Access Manager is only for RBAC
9. Rule Security Mode - setting on the access group helps enforce a deny first policy
a. The goal of the Rule Security Mode Utility procedure is to determine which rules in your
application can be run by which user groups, and to automatically
assign implicit privileges for those rules and grant them to those users
b. when users are logged into the application in a production environment, these implicit
privileges grant permission for the user to execute these rules, and prevent them from
executing rules where privileges are not assigned
c. determines how the system executes rules accessed by members of the access group
d. three supported rule security modes are Allow, Deny, and Warn.
i. Allow: The system allows users in the access group to execute a rule that has no
privilege defined, or to execute a privileged rule for which the user has the
appropriate privilege.
ii. Deny: Use Deny to require privileges for all rules and users
iii. Warn: Use Warn to identify missing privileges for a user role. The system
performs the same checking as in Deny mode, but performs logging only when
no privilege has been specified for the rule or the user role. The warning
messages written to the PegaRULES log are used to generate missing privileges
for user roles with the pyRuleExecutionMessagesLogged  activity. (WARN CAN
BE USED TO VERIFY IF ACCESS ROLE IS CONFIGURED CORRECTLY)
iv. pyRuleExecutionMessagesLogged  - This activity will process
the  Warn  statements in the PegaRULES log file and create privileges for the
rules which were flagged, based on the specified user.
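The Allow/Deny/Warn behavior described above can be condensed into a toy evaluation: Allow runs unprivileged rules, Deny blocks any rule without a matching privilege, and Warn performs Deny-style checking but only logs the miss and still runs the rule. This is a simplified model of the behavior these notes describe, not Pega's implementation:

```python
def check_rule_access(mode, rule_privilege, user_privileges, log):
    """Return True if the rule may execute under the given security mode.
    rule_privilege is None when no privilege is defined on the rule."""
    if rule_privilege is None:
        if mode == "Allow":
            return True
        if mode == "Warn":
            log.append("missing privilege")   # logged, execution still allowed
            return True
        return False                          # Deny: unprivileged rule blocked
    if rule_privilege in user_privileges:
        return True
    if mode == "Warn":
        log.append("user lacks privilege")
        return True
    return False
```

The Warn log is what pyRuleExecutionMessagesLogged-style processing would mine to generate the missing privileges, which is why Warn is useful for verifying that an access role is configured correctly before switching to Deny.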

MITIGATING SECURITY RISKS

1. CSP (Content Security Policy)
a. Protects your browser from loading and running content from untrusted sources
b. Mitigates attacks including Cross Site Scripting (XSS) and data injection attacks
c. If an attack takes place, the browser reports to your app that a violation has occurred.
d. CSPs are a set of directives that define approved sources of content that the user's
browser may load
e. CSP definitions are instances of the Rule-Access-CSP rule type
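For context, a CSP ultimately reaches the browser as an HTTP response header built from those directives. A typical policy looks like the following (source URLs are hypothetical):

```text
Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted.cdn.example.com; img-src 'self' data:; report-uri /csp-violations
```

The report-uri directive is what enables the violation reporting described in point c.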
2. Rule Security Analyzer Tool
a. tool identifies potential security risks in your applications that may introduce
vulnerabilities to attacks such as cross-site scripting (XSS) or SQL injection
b. vulnerabilities can arise only in non-autogenerated rules, such as stream rules (HTML, JSP,
XML, or CSS) and custom Java or SQL statements.
c. Use trained security IT staff to review the output of this tool
d. Running the Rule Security Analyzer before locking a ruleset is recommended.
3. Securing an application
a. Lock each ruleset version, except the production ruleset, before promoting an
application from the development environment
b. Ensure that a virus checker is installed to enforce which files can be uploaded; the
CallVirusCheck activity provides an extension point for integrating the virus check.
c. Ensure file types are restricted by adding a when rule or decision table to
the SetAttachmentProperties  activity to evaluate whether a document type is allowed
d. ensure connectors and services are secured in an appropriate way.
e. Disable any operator IDs that are not used, particularly if the platform was not deployed in secure mode.
f. Enable security auditing for changes to operator passwords, access groups, and
application rules.
g. Review the Unauthenticated access group to make sure that it has the minimum
required access to rules.
h. Do not apply the security-hardening Dynamic System Settings to a development environment,
because they restrict the Tracer tool and other developer tools
i. Rename and deploy prweb.war only on nodes requiring it.
j. Remove any unnecessary resources or servlets from the web.xml. Rename default
servlets where applicable, particularly PRServlet.
k. Rename prhelp.war and deploy it on a single node per environment.
l. Ensure that the system has been set up using a JDBC connection pool approach through
the application server, rather than the database being set up in the prconfig.xml.
4. Security best practices
a. Rule-Access-Role-Object (Access of Role to Objects or ARO) rules are non-versioned. It is
not possible to override an ARO rule within a different ruleset. There can only be one
instance of an ARO rule based on its keys, pyAccessRole  and pyAccessToClass.
b. Do not assign permissive access roles, such as WorkMgr4, early on unless you are
completely certain the user needs them
c. The object-oriented way to enforce security is to secure the object that is ultimately
accessed, not every path that can be taken to get to the object
d. The security guidelines are included in the pxApplicationSecurityChecklist  Application
Guide rule which can be launched from an Application rule’s Documentation tab
5. Security event logging
a. Custom event logging can be used to facilitate the fulfillment of Client-Based Access
Control (CBAC) auditing requirements. It is possible to log a custom event within an
Activity java step using:

DEFINING REPORTING STRATEGY

1. Reporting and data warehousing


a. The key factor that determines whether you design your reports in the Pega application
or leverage an external reporting tool is the impact on application performance.
b. (BIX) allows you to extract data from your production application, and format the data
to make it suitable for loading into a data warehouse.
c. Plan how and when to purge data from the production system.
d. In addition to making this data available for reporting from a data warehouse, create a
strategy for managing the size of these tables.
e. This strategy could include partitioning database tables or moving the data to a staging
database. This strategy could also involve purging this data from the database after it
has been archived in the warehouse.
2. Defining a reporting strategy
a. inventory the reports that the business uses to make key decisions to help determine
the overall reporting strategy.
b. Once you have an inventory of these reports, create a matrix categorizing the user roles
and reports each role uses to make business decisions
c. Identify how frequently the data needs to be delivered. The outcome of your research
affects configuration decisions such as report scheduling settings, agent schedules, and
refresh strategies on data pages.
d. Determine what data the report must contain based on requirements
e. Pega offers several options for gathering report data within the application:
i. For heavy trending and business intelligence (BI) reporting, use a data
warehouse
ii. For showing the status of work assignments on a dashboard in the application,
use report definitions with charting
3. Alternatives to standard reporting solutions
a. robotic automation to gather data from external desktop applications
b. using data for analytics, consider using adaptive and predictive decisioning features
c. If you need dynamic queries, use free-form text searching, such as Elasticsearch,
instead of constructing a report definition to gather the data

DESIGNING REPORTS FOR PERFORMANCE

1. General
a. As the amount of application data grows, the report may run more slowly. Poor report
performance can cause memory, CPU, and network issues. These issues can affect all
application users, not just the user running the report
i. Memory - Large result sets can cause out-of-memory issues
ii. CPU - Using complex SQL can also have a CPU impact on the database server –
AES and PDC can help
iii. Network
b. Removing the chart on reports can also result in performance improvement
2. Configuring rules to improve report performance
a. Using data pages when possible - best approach to optimizing your report is to avoid
running the report. Data pages can help you do that.
b. Using pagination in reports when results exceed 50 rows
c. Optimizing properties – expose single-value properties as database columns
d. Utilizing declare indexes – for embedded page lists (e.g. pr_index_workparty)
e. Leveraging a reports database – requires the same structure (tables and columns) as the live database
f. Avoiding outer joins
3. Tuning the db to improve report performance
a. Partitioning tables
b. Executing explain plans on your queries
c. Creating table indexes
d. Dropping the pzPVStream column on pr_index tables
e. Purging and archiving data
4. Challenge
a. Removing chart improve report performance
b. Watch browser interaction alerts when running reports
c. If a report returns more than 500 records, export that data to a data warehouse
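The explain-plan and table-index tuning steps above can be tried outside Pega with any database. A minimal sketch using SQLite (table and column names are illustrative, not an actual Pega schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for a work table; names are illustrative only
conn.execute("CREATE TABLE pc_work (pzInsKey TEXT, pyStatusWork TEXT)")

query = "SELECT pzInsKey FROM pc_work WHERE pyStatusWork = 'Open-Pending'"

# Before indexing, the planner must scan the whole table
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

# After indexing the filtered column, the planner can use an index search
conn.execute("CREATE INDEX idx_status ON pc_work (pyStatusWork)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(before)  # e.g. "SCAN pc_work"
print(after)   # e.g. "SEARCH pc_work USING INDEX idx_status (pyStatusWork=?)"
```

The same before/after comparison against your real database's explain plan shows whether a report's filter columns are indexed.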

QUERY DESIGN

1. 427 – 431 ***


2. There are a number of ways to query data that are not supported by Report Definitions. An
example is the Haversine formula
3. The query is found in the Browse tab of the FSG-Data-Address HaversineFormula Connect-SQL rule
4. Unlike a Report Definition, a Connect SQL rule cannot dynamically drop a filter condition
when a parameter value is empty. In a Report Definition, unless a filter is configured to
generate "is null" when a parameter lacks a value, Pega ignores the filter condition, which,
in some cases, can be risky unless a limit is placed on the number of returned rows
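The dropped-filter risk can be sketched generically. This plain-Python query builder (not Pega's Connect-SQL syntax; names are illustrative) shows the two safer behaviors: generating "is null" or capping the row count:

```python
def build_query(status, generate_is_null=False, row_limit=500):
    """Illustrative dynamic filter handling; parameterized to avoid SQL injection."""
    sql, params = "SELECT pzInsKey FROM pc_work", []
    if status is not None:
        sql += " WHERE pyStatusWork = ?"
        params.append(status)
    elif generate_is_null:
        # Explicitly match NULL instead of silently dropping the condition
        sql += " WHERE pyStatusWork IS NULL"
    # If the filter was dropped, every row matches; a row limit caps the damage
    sql += f" LIMIT {row_limit}"
    return sql, params

print(build_query("Open")[0])                        # filtered query
print(build_query(None)[0])                          # unfiltered, but capped at 500 rows
print(build_query(None, generate_is_null=True)[0])   # "is null" variant
```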
5. Challenge
a. In the future, Connect-SQL rule queries may not allow tables with Pega-specific columns
b. Trend reports using History tables or Timeline tables.
6. Import RAP file
a. The pre-import collection is pyPreImportCollection.
b. The post-import collection is pyPostImportCollection
c. metadata properties used for pre-import steps are included in  pxArchiveMetadataPage.
d. metadata properties returned by post-import steps are included in  pxImportResultPage
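For reference, the Haversine great-circle distance mentioned above — the kind of computation that is beyond a Report Definition but straightforward in a custom SQL function or external code — is:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# One degree of longitude along the equator is roughly 111 km
print(round(haversine_km(0.0, 0.0, 0.0, 1.0), 1))
```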

USER EXPERIENCE DESIGN AND PERFORMANCE

1. Identifying functionality that impacts UX


a. Background processing
i. Moving required processes to run in the background can improve performance.
ii. Also known as asynchronous processing
b. Leveraging SOR pattern
i. Data not in case but retrieved when needed during runtime from external SOR
ii. Defer the load of the external data until after the initial screen is loaded.
c. Designing for realistic integration response times
i. allows the end user to start working while application gathers additional data.
d. Estimating network latency accurately
i. Keep systems you are integrating with as close as possible to your data center. If
the system you are integrating with is located very far away, consider using
replicated data from a nearby data warehouse or proxy system
e. Avoid retrieving large data sets. Keep your result sets as small as possible.
f. Provide the end user meaningful feedback about how much time is needed
g. leverage Pega's Responsive UI by Keeping your User Interfaces highly specialized and
focused on individual and specific tasks.
2. UX Performance optimization strategies
a. Use layout groups to divide a rich user interface
i. Configure the deferred load feature on layout groups
ii. Use data pages as the source of data for deferred loading.
iii. Cache and reuse the data sets using appropriately sized and scoped data pages
b. Leverage case management to divide complex cases
i. Dividing complex cases into smaller, more manageable subcases
ii. Avoids loading a single large case into the user's session
iii. Each subcase opened in a separate PRThread instance executes asynchronously
and independently of other cases
c. Run service calls asynchronously
i. run-in-parallel option on the Connect method. allows the service call to run in a
separate requestor session
d. Investigate alert logs using AES and PDC
i. PEGA0005 — Query time exceeds limit
ii. PEGA0020 — Total connect interaction time exceeds limit
iii. PEGA0026 — Time to connect to database exceeds limit
3. Design Practices to avoid
a. Misuse of list controls
b. Uncoordinated parallel development
i. multiple development teams could invoke the same web service returning the
same result set multiple times and within seconds of each other. Multiple
service calls returning the same result set waste CPU cycles and memory.
4. Designing UX to optimize performance
a. Asynchronous background processing
b. Run connectors in parallel
c. Executing connectors in queued mode
i. SOAP, REST, SAP, and HTTP connectors can also be executed in queue mode.
d. Pagination - Use appropriate pagination settings on grids and repeating dynamic layouts
e. Defer data loading
f. Data pages - data pages as the source for list-based controls
g. Repeating dynamic layouts - for nontabular lists
h. Consolidated server-side processing - Ensure that multiple actions that are processed on
the server are bundled together so that there is only a single round trip
i. Client-side expressions - visibility, disabled and required conditions. Use client-side
expressions instead of server-side expressions whenever possible
j. Single Page Dynamic Containers are much lighter on the browser
k. Use refresh layout instead of refresh section to refresh only what is required
l. Use layout groups rather than legacy tab groups, which have been deprecated. Also avoid
inline styles (not recommended although still available), smart layouts, and panel sets
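The run-service-calls-asynchronously guidance above (Pega's run-in-parallel option) has a direct analogue in any language: fan slow calls out concurrently so the total wait approaches the slowest call rather than the sum. A generic Python sketch with simulated latency:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(name, latency_s):
    """Stand-in for a slow connector call to an external system."""
    time.sleep(latency_s)
    return f"{name}: ok"

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # Both calls are in flight at once, so total wait is ~max(latencies), not their sum
    futures = [pool.submit(call_service, "CustomerService", 0.2),
               pool.submit(call_service, "OrderService", 0.2)]
    results = [f.result() for f in futures]
elapsed = time.perf_counter() - start

print(results)
print(elapsed < 0.35)   # parallel: ~0.2 s instead of the 0.4 s a sequential approach takes
```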

CONDUCTING USABILITY TESTING

1. General
a. goal of usability testing is to better understand how users interact with the application
b. involves typical users of your application
c. validates the ease of use of the user interface design
d. is done to collect valuable design data in the least amount of time possible
e. is conducted periodically throughout the software development lifecycle
2. Conducting Usability Testing
a. a method for determining how easy an application is to use by testing the application
with real users in a controlled setting
b. Work with the product owner to select the tasks to test
3. Steps
a. Select tasks to test
b. Document sequence of testing steps
c. Decide on the testing method
d. Select participants
e. Conduct tests
f. Compile results

ADVANCED BACKGROUND PROCESSING

1. Queue Processor
a. Pega Platform provides built-in capabilities for error handling, queuing and dequeuing,
and commits
b. run in the security context of the ASYNCPROCESSOR
c. By configuring the Queue-For-Processing method in an Activity, or the Run in
Background step in a Stage, it is possible to specify an alternate Access Group
d. The AsyncProcessor requestor type is deprecated as of 8.3. The security context to
resolve Queue Processor rules is handled by the System Runtime Context.
e. Queues are multithreaded and shared across all nodes
f. Use standard queue processors for simple queue management or dedicated queue
processors for customized or delayed processing of messages
g. If a queue processor is defined as delayed, specify the date and time when queuing via
the Queue-For-Processing method or a Run in Background smart shape
h. For more advanced tasks that require vertical and horizontal scaling, such as inbound
batch file processing, create dedicated queue processor rules. Select the number of
threads for vertical scaling and the number of nodes for horizontal scaling
2. Standard Agent
a. Using standard agents, Pega Platform provides built-in capabilities for error handling,
queuing and dequeuing, and commits
b. By default, standard agents run in the security context of the person who queued the task
c. The Access Group setting on an Agents rule only applies to Advanced Agents which are
not queued
d. To always run a standard agent in a given security context, you need to switch the
queued Access Group by overriding the System-Default-EstablishContext activity and
invoke the setActiveAccessGroup() java method within that activity
e. An example is the SLA-processing agent ServiceLevelEvents in the Pega-ProCom ruleset
3. Job Scheduler
a. A Job Scheduler must not only decide which records to process, it must also establish each
record's step page context before performing work on that record
b. needs to decide whether a record needs to be locked
c. decide whether it needs to commit records that have been updated using Obj-Save
4. Advanced Agent
a. Advanced agents can also be used when there is a need for more complex queue
processing
b. all queuing operations must be handled in the agent activity
c. When running on a multinode configuration, configure agent schedules so that the
advanced agents coordinate their efforts
d. An example is the full-text search incremental indexing agent FTSIncrementalIndexer in the
Pega-SearchEngine ruleset
e. The default agent ProcessServiceQueue in the Pega-IntSvcs ruleset is another example of an
advanced agent processing queued items
5. Queue processor scaling - When you select the number of threads for a dedicated queue
processor, this number is multiplied by the number of nodes. For example, if you have three
nodes for background processing and you set the number of threads to 3, then you have nine
threads for background processing. If you have available resources on a node, increase the
number of threads on that node to utilize those resources and gain throughput. If you have
many nodes for background processing, distribute the work by increasing the number of nodes
6. Q processor vs standard agent
a. can select dedicated threads to process only a specific action
b. can control the number of threads and the node types on which the processing runs.
c. provide better performance than agents because you do not have to queue your items
to a database
7. SLA
a. The escalation activity in an SLA provides a method for you to invoke agent functionality
without creating a new agent
b. An SLA must always be initiated in the context of a case
8. Wait
a. A Wait shape can only be applied to a case within a flow step, and waits for a single event
(timer or case status) before allowing the case to advance.
9. Asynchronous Integration
a. Commonly used asynchronous approaches include the Load-DataPage method and
the RunInParallel option on Connect-* methods combined with the Connect-Wait method.
b. Most connector rules have the capability to run in parallel by invoking the connectors
from an activity using the Connect-* methods with the RunInParallel option selected
c. Grouping several Load-DataPage requestors by specifying a PoolID is possible. Use
the Connect-Wait method to wait for a specified interval, or until all requestors with the
same PoolID have finished loading data.
d. Less commonly used asynchronous integration approaches include asynchronous
service processing and asynchronous connector processing
i. Asynchronous service processing
1. The service types that support asynchronous processing leverage the
standard agent queue
2. a queue item ID that identifies the queued request is returned to the calling
application. This item ID corresponds to the queued item that records the
information and state of the queued request. Once the service request is
queued, the ProcessServiceQueue agent in the Pega-IntSvcs ruleset
processes the item queued and invokes the service
3. In most cases, the calling application calls back later with the queue item ID
to retrieve the results of the queued service request. The standard
activity @baseclass.GetExecutionRequest is used as a service activity by the
service to retrieve the result of the queued service
4. When configuring this option for the service, a service request
processor that determines the queuing and dequeuing options must be
created
ii. Asynchronous connector processing
1. Several connector rules offer an asynchronous execution mode through the
queue functionality similar to asynchronous services
2. connector request is stored in a queued item for
the ProcessConnectQueue agent in the Pega-IntSvcs  ruleset to make the call
to the service at a later time.
3. The queued connector operates in a fire-and-forget style
4. A connector request processor must also be configured for the
asynchronous mode of operation
10. Default Agents
a. There are default agents that:
i. Are unnecessary for most applications because they implement legacy functionality
ii. Should not run in production
iii. Run at inappropriate times by default
iv. Run more frequently than needed, or not frequently enough
v. Run on all nodes by default, but should only run on one node
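The asynchronous service pattern described above — queue the request, hand back an item ID, process in the background, and let the caller poll for the result — can be sketched generically (plain Python, not Pega's queue APIs):

```python
import uuid
import queue

pending = queue.Queue()   # stand-in for the service request queue
results = {}              # stand-in for stored execution results

def queue_request(payload):
    """Accept a request and return an item ID immediately; no processing yet."""
    item_id = str(uuid.uuid4())
    pending.put((item_id, payload))
    return item_id

def process_queue():
    """Stand-in for the background processor that drains queued items."""
    while not pending.empty():
        item_id, payload = pending.get()
        results[item_id] = f"processed:{payload}"

def get_execution_result(item_id):
    """Caller polls later with the item ID (analogous to GetExecutionRequest)."""
    return results.get(item_id)   # None until the item has been processed

item = queue_request("orderdata")
assert get_execution_result(item) is None   # queued but not yet processed
process_queue()
assert get_execution_result(item) == "processed:orderdata"
```

The fire-and-forget connector variant is the same sketch without the polling step.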

PEGA FOR ENTERPRISE

1. Designing Pega for the enterprise


a. Start with Pega in the middle, and work your way out to all existing technologies,
channels, and integrations to legacy systems and systems of record. One application at a
time, the vision becomes reality, release by release
2. Application deployment and design decisions
a. Because Pega is software that writes software, you can run your application anywhere or
move it from one environment to another
b. Consider two environment variations when designing your application
i. Requirements to deploy an enterprise archive (.ear)
ii. Requirements to use multitenancy
c. Pega can be deployed as an enterprise archive (.ear) or a web archive (.war).
d. EAR deployment reasons
i. need message-driven beans (JMS MDB) to handle messaging requirements
ii. need to implement two phase commit or transactional integrity across systems
iii. You need to implement Java Authentication and Authorization Service (JAAS) or
Java Enterprise Edition (JEE) security
iv. You need to use the Rule-Service-EJB rule type
v. You have enterprise requirements that all applications run on a JEE-compliant application server
e. Pega recommends an .ear deployment, but you also have the option of a .war
deployment if you are running on Tomcat
3. Multitenancy
a. Multitenancy allows you to run multiple logically separate applications on the same
physical hardware. This allows the use of a shared layer for common processing across
all tenants, yet allows for isolation of data and customization of rules and processes
specific to the tenant.
b. Multitenancy supports the business process outsourcing (BPO) model
c. assume the shared layer represents a customer service application offered by ServiceCo.
Each partner of ServiceCo is an independent BPO providing services to a distinct set of
customers. The partner (tenant) can customize processes unique to the business and
can leverage the infrastructure and shared rules that ServiceCo provides
d. The two administrators in a multitenant environment include the multitenant provider
administrator and the tenant administrator. The multitenant provider sets up the
tenant, and manages security and operations in the shared layer. The tenant
administrator manages security and operations of the tenant layer
e. The multitenant provider and tenant must work together to plan disk and hardware
resources based on the tenant's plans for the application
4. Security Design
a. You may also be required to use third-party authentication tools when invoking web
services, or when another application calls Pega as a service
b. An organization's security policies are often the result of industry regulatory
requirements
c. If the application resides in a cloud environment or is a hybrid cloud/on-premise
deployment, acquaint yourself with the network architecture and security protocols in
place
5. Pega Application Monitoring
a. Many organizations have application performance monitoring (APM) tools
b. these tools can report on data such as memory and CPU usage on your database and
application servers, they do not provide detailed information about the health of the
Pega application itself.
c. Pega provides two tools designed to monitor, and provide recommendations on how to
address, the alerts generated by the Pega application: AES and PDC
d. AES monitors on-premise applications. AES is installed and managed on-site.
e. Pega PDC is a Pega-hosted Software as a Service (SaaS) application that monitors Pega
Cloud applications. PDC can also be configured to monitor on-premise applications
6. AES vs PDC
a. Both AES and PDC monitor the alerts from and health activity for multiple nodes in a
cluster. Both send you a scorecard that summarizes application health across nodes. The
most notable difference, from an architecture standpoint, is that AES interacts with the
monitor node to allow you to manage processes on the monitored nodes, such as
restarting agents and quiescing application nodes.
7. Distributed Application Case Interactions
a. Pega WebMashup - You can expose Pega case types to the external application by
generating mashup code or by generating microservice code from within the case type
settings in Dev Studio.
b. Microservices - A microservice architecture is a method for developing applications
using independent, lightweight services that work together as a suite. In a microservices
architecture, each participating service runs as an independent, deployable unit.
c. The microservice architectural approach is usually contrasted with the monolithic
application architectural approach. For example, instead of designing a single
application with Customer, Product, and Order case types, you might design separate
services that handle operations for each case type. Exposing each case type as a
microservice allows the service to be called from multiple sources, with each service
independently managed, tested, and deployed.
d. You can expose any aspect of Pega (including cases) as a consumable service, allowing
Pega to participate in microservice architectures.
e. You can create this service as an application or as an individual service that exists in its
own ruleset.
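The multitenancy model above (tenant-specific overrides on top of a shared layer) is essentially a layered lookup. A toy sketch, not Pega's rule resolution:

```python
# Shared layer owned by the multitenant provider (names are illustrative)
shared_layer = {"ApprovalProcess": "shared default", "Branding": "ServiceCo"}

tenant_layers = {
    "PartnerA": {"Branding": "PartnerA custom"},  # overrides only what it customizes
    "PartnerB": {},                               # uses the shared layer as-is
}

def resolve(tenant, rule_name):
    """Prefer the tenant's customization; otherwise fall back to the shared layer."""
    return tenant_layers.get(tenant, {}).get(rule_name, shared_layer.get(rule_name))

assert resolve("PartnerA", "Branding") == "PartnerA custom"
assert resolve("PartnerA", "ApprovalProcess") == "shared default"
assert resolve("PartnerB", "Branding") == "ServiceCo"
```

The point of the pattern: tenants get isolation and customization while the provider maintains one shared implementation.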

DEFINE A RELEASE PIPELINE

1. General
a. DevOps is a collaboration between Development, Quality, and Operations staff to
deliver high-quality software to end users in an automated, agile way
2. Setting up a release pipeline
a. Developer activities
i. Unit testing
ii. Sharing changes with other developers
iii. Ensuring changes do not conflict with other developer's changes
b. Customer activities
i. Testing new features
ii. Making sure existing features still work as expected
iii. Accepting the software and deploying to production
3. DevOps Functions
a. Continuous integration – Continuously integrating into a shared repository
b. Continuous delivery – Always ready to ship
c. Continuous deployment – Continuously deploying (no manual process involved)
4. DevOps release pipeline
a. DevOps involves the concept of software delivery pipelines for applications
b. A pipeline is an automated process to quickly move applications from development
through testing to deployment
c. The Continuous Integration portion of the pipeline is dominated by the development
group. The Continuous Delivery portion of the pipeline is dominated by the quality
assurance group
d. The pipeline is managed by some form of orchestration and automation server such as
open source Jenkins. Pega’s version of an automation server is the Deployment
Manager available on the Pega Exchange
e. A pipeline pushes application archives into, and pulls them from, application
repositories. The application repositories are used to store the application archive for
each successful build. There should be both a development repository and a production
repository
f. The term system of record is used in a distributed development environment. Separate
development environments can push branches related to the same application to a
central server known as the system of record. The central server is considered a Pega
repository type. Within the system of record, published branches are merged
5. Continuous integration and delivery
a. With continuous integration, application developers frequently check in their changes to
the source environment and use an automated build process to automatically verify
these changes
b. The Ready to Share and Integrate Changes steps ensure that all the necessary critical
tests are run before integrating and publishing changes to a development repository.
During continuous integration, maintain these best practices:
c. With continuous delivery, application changes run through rigorous automated
regression testing and are deployed to a staging environment for further testing to
ensure that the application is ready to deploy on the production system
d. In the Ready to Accept step, testing runs to ensure that the acceptance criteria are met.
e. The Ready to Deploy step verifies all the necessary performance, scale, and
compatibility tests necessary to ensure the application is ready for deployment.
f. The Deploy step validates in a preproduction environment, deploys to production, and
runs post-deployment tests with the potential to roll back as needed.
6. Modular development deployment strategies
a. Advantage of dedicated ruleset for a case
i. Encouraging case-oriented user stories using Agile Studio’s scrum methodology
to manage project software releases
ii. Simplifying the ability to populate the Agile Workbench "Work item to associate"
field when checking a rule into a branch
b. Branch based development
i. Dedicating a branch to a single case type simplifies the branch review process
c. Application packaging
i. Multiple applications referencing the same ruleset is highly discouraged.
Immediately after saving an application rule to a new name, warnings appear in
both applications
ii. A product rule should contain a single Rule-Application where pyMode =
Application.
iii. Product rules should be defined starting with applications that have the fewest
dependencies, ending with applications that have the greatest number of
dependencies.
iv. Currently, the Deployment Manager only supports pipelines for Rule-
Application instances where pyMode = Application. When an application is
packaged, and that application contains one or more components, those
components should also be packaged.
d. Open/Closed principle applied to packaging deployment
i. The goal of the Open-closed principle is to eliminate ripple effects. A ripple
effect occurs when an object makes changes to its interface as opposed to
defining a new interface and deprecating the existing interface.
7. Creating team-based development best practices
a. Use branches when multiple teams contribute to a single application.
b. Peer review branches before merging
c. Use Pega Platform developer tools, such as the Rule compare utility, the action
menu Search rule option, and the action menu Preview option to determine how to
best address any rule conflict
d. Hide incomplete or risky work using toggles to facilitate continuous merging of
branches
e. Create PegaUnit test cases to validate application data by comparing expected property
values to the actual values returned by running the rule.
8. Multi-team development process
a. A Branch Reviewer first requests conflict detection, then executes the appropriate
PegaUnit tests.
b. If the Branch Reviewer detects conflicts or if any of the PegaUnit tests fail, the reviewer
notifies the developer who requests the branch review. The branch reviewer stops the
process to allow the developer to fix the issues.
c. If the review detects no conflicts and the PegaUnit tests execute successfully, the branch
merges into the system of record. Ruleset versions associated with the branch are locked.
d. The Booking App team can now perform an on-demand rebase of the SOR application’s
rules into their system.
e. A rebase pulls the most recent commits made to the SOR application into the Booking
App team's system.
9. Challenge
a. A rebase occurs after a successful branch merge to the SOR
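The multi-team branch review gate described above (conflict detection first, then PegaUnit tests, then merge or notify) can be sketched as:

```python
def review_branch(has_conflicts, tests):
    """Illustrative branch review gate: conflicts or failing tests stop the merge."""
    if has_conflicts:
        return "notify developer: conflicts detected"
    if not all(test() for test in tests):
        return "notify developer: unit tests failed"
    return "merge to system of record"  # ruleset versions in the branch are then locked

passing = [lambda: True, lambda: True]
failing = [lambda: True, lambda: False]

assert review_branch(True, passing) == "notify developer: conflicts detected"
assert review_branch(False, failing) == "notify developer: unit tests failed"
assert review_branch(False, passing) == "merge to system of record"
```

Other teams then rebase to pull the newly merged commits from the SOR.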

ASSESSING AND MONITORING QUALITY

1. Test Automation
a. UI Based functional and scenario test
i. End-to-end scenario tests verify that complete cases work as expected.
ii. most expensive to run
iii. Pega Platform supports automated testing for these types of tests through
the TestID property in user interface rules
b. API-based functional tests
i. verify that the integration of underlying components works as expected without
going through the user interface
ii. useful when the user interface changes frequently
iii. can validate case management workflows through the service API layer using
the Pega API
c. Unit Tests
i. smallest unit is the rule. You can unit test rules as you develop them by using
the PegaUnit test framework
ii. least expensive
d. Automation test suite
i. industry test solutions, such as JUnit, RSpec, and SoapUI to build your test
automation suite
ii. When you build your automation test suite, run it on your pipeline
iii. During your continuous integration stage, the best practice is to run your unit
tests, guardrail compliance, and critical integration tests.
iv. During the continuous delivery stage, a best practice is to run all your remaining
automation tests to guarantee that your application is ready to be released.
Such tests include acceptance tests, full regression tests, and nonfunctional
tests such as performance and security tests.
e. Benefits of testing
i. Timely feedback
ii. Effective use of test resources
iii. Reinforcement of testing best practices
2. Establishing Quality Standards in your team
a. The pattern of allowing low-quality features into your production environment results in
technical debt
b. Technical debt means you spend more time fixing bugs than working on new features
that add business value
c. Establishing standard practices for your development team can prevent these types of
issues and allow you to focus on delivering new features to your users. Practices include:
i. Leveraging branch reviews
1. Deployment Manager’s non-optional pxCheckForGuardrails  flow will
halt a merge attempt when a Get Branch Guardrails response shows
that the weighted guardrail compliance score is less than the minimum-
allowed guardrail score.
ii. Establishing rule check-in approval process
iii. Addressing guardrail warnings
iv. Creating custom guardrail warnings
v. Monitoring alerts and exceptions
3. Leveraging application quality landing page
a. Application Quality affects the rate at which the application moves through a Dev Ops
pipeline and affects the rate at which new features can be added to the application
4. Customizing the rule check-in approval process
a. Pega Platform comes with the Work-RuleCheckIn default work type for the approval
process
b. The work type contains standard properties and activities, and a flow
called ApproveRuleChanges that is designed to control the rule check-in process
c. When the default check-in approval process is in force for a ruleset version, the flow
starts when a developer begins a rule check-in. The flow creates a work item that is
routed to a workbasket. The standard decision tree named Work-
RuleCheckIn.FindReviewers returns the workbaskets. Rules awaiting approval are moved
to the CheckInCandidates  ruleset.
d. Override the Work-RuleCheckIn.FindReviewers decision tree if you want to route to a
different workbasket or route to different workbaskets based on certain criteria.
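The routing logic such an overridden FindReviewers decision tree might express can be sketched as a lookup on rule attributes. The ruleset prefixes and workbasket names here are hypothetical.

```python
# Route a checked-in rule's approval work item to a workbasket based on
# its ruleset, mirroring what the decision tree override would decide.
def find_reviewer_workbasket(ruleset_name):
    """Return the approval workbasket for a rule awaiting check-in approval."""
    if ruleset_name.startswith("Finance"):
        return "FinanceRuleApprovals"
    if ruleset_name.startswith("HR"):
        return "HRRuleApprovals"
    return "DefaultRuleApprovals"  # fallback workbasket
```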
5. Creating a custom guardrail warning
a. Guardrail warnings identify unexpected and possibly unintended situations, practices
that are not recommended, or variances from best practices
b. To add or modify rule warnings, override the empty activity
called @baseclass.CheckForCustomWarnings
c. You typically want to place the  CheckForCustomWarnings  activity in the class of the rule
type to which you want to add the warning. For example, if you want to add a custom
guardrail warning to an activity, place  CheckForCustomWarnings  in the  Rule-Obj-
Activity  class
d. Configure the logic for checking if a guardrail warning needs to be added in
the  CheckForCustomWarnings  activity. Add the warning using
the  @baseclass.pxAddGuardrailMessage  function in the  Pega-Desktop  ruleset.
e. You can control the warnings that appear on a rule form by overriding the standard
decision tree  Embed-Warning.ShowWarningOnForm. The decision tree can be used to
examine information about a warning, such as name, severity, or type to decide whether
to present the warning on the rule form.
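The kind of check a CheckForCustomWarnings override might implement for the Rule-Obj-Activity class can be sketched as below. The step limit and message text are assumptions; in Pega the warning would be added with @baseclass.pxAddGuardrailMessage rather than appended to a Python list.

```python
# Warn when an activity exceeds a step-count limit (hypothetical guardrail).
MAX_ACTIVITY_STEPS = 25

def check_for_custom_warnings(step_count, warnings):
    """Append a custom guardrail warning when an activity is too long."""
    if step_count > MAX_ACTIVITY_STEPS:
        warnings.append(
            "Activity has %d steps; consider refactoring (limit %d)."
            % (step_count, MAX_ACTIVITY_STEPS)
        )
```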
6. Challenge
a. AES and PAL help in determining the quality of an application
b. ShowWarningOnForm to define which warnings to show on rule form
c. When the check-in approval process is enabled
i. Rule moved to candidate ruleset
ii. Work item is created

CONDUCTING LOAD TESTING

1. General
a. Pega Platform can be treated as any web application when performing load testing.
b. The term load testing is often used synonymously with concurrency testing, software
performance testing, reliability testing, and volume testing
c. Load testing allows you to validate that your application meets the performance
acceptance criteria, such as response times, throughput, and maximum user load.
2. Load Testing a pega app
a. You can use any web application load-testing tool, such as JMeter or LoadRunner.
b. Before running a performance test, the best practice is to exercise the main paths
through the application, including all those to be exercised by the test scripts, and then
take a Performance Analyzer (PAL) reading for each path. Investigate and fix any issues
that are exposed.
c. Ensure that the log files are clean before attempting any load tests
d. Scenarios
i. Test environment baseline - first test to establish that application, environment,
and tools are all working correctly
ii. Application baseline - test run with one user or one batch process creating a
case in a single JVM
iii. Full end to end test - This is the first full test of the application end to end, still in
a single JVM
iv. Failure in one JVM - Test what happens if there is a failure in one of the JVMs
v. Spanning JVMs - scale out across JVMs based on the peak business and technical metrics/goals
e. Begin testing just with HTTP transactions by disabling agents and listeners. Then, test the
agents and listeners. Finally, test with both foreground and background processing.
f. The performance tests must be designed to mimic the real-world production use. Collect
data on CPU utilization, I/O volume, memory utilization, and network utilization to help
understand the influences on performance.
g. Best practices for load testing
i. Design the load test to validate the business use
ii. Validate performance for each component first
iii. Script user log-in only once
iv. Set realistic think times
v. Switch off virus-checking
vi. Validate your environment first
vii. Prime the application first
viii. Ensure adequate data loads
ix. Measure results appropriately
x. Focus on the right tests
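The shape of such a load test can be sketched in a few lines: several concurrent "users" drive a target operation with a think time while response times are recorded. In practice you would use JMeter or LoadRunner against the application's HTTP endpoints; this sketch only illustrates the structure (concurrency, think time, latency collection).

```python
import threading
import time

def run_load_test(target, users=5, iterations=10, think_time=0.0):
    """Call `target` from `users` concurrent threads; return observed latencies."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(iterations):
            start = time.perf_counter()
            target()                       # the transaction under test
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)
            time.sleep(think_time)         # realistic think time between actions

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies
```

From the returned latencies you can compute throughput, averages, and percentiles against your acceptance criteria.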

ESTIMATING HARDWARE SIZE

1. Reasons for new estimate
a. Increasing the number of concurrent users
b. Introducing a new Pega application, such as Pega Customer Service or Pega Sales
Automation
c. Increasing the number of background processes, such as agents or listeners
d. Introducing a new case type
e. Introducing one or more new integrations to external systems, including robotic
automations
2. Submitting a hardware request
a. If you are not internal to Pega, send an email to HardwareEstimate@pega.com.
b. The process for sizing estimation is the same if your application is running on Pega Cloud
c. The resulting estimate includes recommended settings for application server memory,
number of JVMs, and database server disk and memory needed to support your
application

HANDLING FLOW CHANGES FOR INFLIGHT CASES

1. Reasons for problem flows
a. You remove a step in which there are open cases. This change causes orphaned
assignments.
b. You replace a step with a new step with the same name. This change may cause a
problem since flow processing relies on an internal name for each assignment shape
c. You remove or replace other wait points in the flow such as a Subprocess or a Split-For-
Each shape. These changes may cause problems since their shape IDs are referenced in
active subflows
d. You remove a stage from a case life cycle and there are in-flight cases. In-flight cases are
not able to change stages
2. Changing an active assignment's configuration within a flow, or removing the assignment
altogether, will likely cause a problem. Critical flow-related assignment information includes
a. pxTaskName – shape ID of the assignment shape
b. pyInterestPageClass  - class of the flow rule
c. pyFlowType  – name of the flow rule
3. Manage flow changes for cases in flight - approaches
a. Switching the application version of in-flight cases - allows users to process existing
assignments without having to update the flows
i. In this example, an application has undergone a major reconfiguration. You created a
new version of the application that includes newer ruleset versions. Updates
include reconfigured flows, as well as decisioning and data management
functionality. You decided to create a new access group due to the extent of changes
that go beyond flow updates
ii. P - original and newer versions of the application remain intact since no attempt is
made to backport enhancements added to the newer version
iii. C - Care must be taken not to process a case created in the new application version
when using the older application version and vice versa. Both cases and assignments
possess a pxApplicationVersion property
b. Processing existing assignments in parallel with the new flow
i. The newer version of the flow is reconfigured such that new cases never reach the
previously used shapes; yet existing assignments continue to follow their original
path.
ii. P - All cases use the same rule names across multiple versions
iii. C - This approach may not be feasible given configuration changes. In addition, it may
result in cluttered Process Modeler diagrams.
iv. Later, you can run a report that checks whether the old assignments are still in
process. If not, you can remove the outdated shapes in the next version of the flow.
c. Circumstancing
i. circumstancing as many rules as needed to differentiate the current state of a flow
from its desired future state. One type of circumstancing that would satisfy this
approach is called as-of-date
ii. P - Simple to implement at first using the App Explorer. No need to switch
applications
iii. C - The primary drawback is that the Case Designer is affected when circumstancing
is used, except for its support for specialized Case Type rules. Because Case Type rules
cannot be specialized by a DateTime property, as-of-date circumstancing is not allowed
for them. This presents a problem in that the changes must be carried forward indefinitely.
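How as-of-date circumstancing selects a variant can be sketched as follows: rule resolution picks the circumstanced variant whose effective date is the latest one on or before the requested date. The dates and variant names in this sketch are hypothetical.

```python
from datetime import date

def resolve_as_of_date(variants, as_of):
    """`variants` maps an effective date to a rule variant; pick the one in effect."""
    eligible = [d for d in variants if d <= as_of]
    if not eligible:
        raise LookupError("No variant in effect on " + as_of.isoformat())
    return variants[max(eligible)]  # latest effective date on or before as_of
```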
d. Moving existing assignments
i. you set a ticket that is attached within the same flow, change to a new stage, or
restart the existing stage. In-flight assignments advance to a different assignment
where they resume processing within the updated version
ii. You run a bulk processing job that locates every outdated assignment in the system
affected by the update.
iii. For each affected assignment, bulk processing should
call Assign-.OpenAndLockWork  followed by Work-.SetTicket, pxChangeStage,
or pxRestartStage. For example, you can execute a Utility shape that restarts a stage
(pxRestartStage).
iv. After you have configured the activity, you deploy the updated flow and run the bulk
assignment activity. The system must be offline when you run the activity
v. P - A batch process activity directs assignments by performing the logic outside the
flow. You do not need to update the flow by adding a Utility shape to the existing
flow. The activity keeps the maintenance logic out of the flow and makes
upgrades easier.
vi. C - It might be impractical if the number of assignments is large, or if there is no time
period when the background processing is guaranteed to acquire the necessary locks
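The bulk job described above can be sketched in pseudocode style: for each outdated assignment, lock its work item, then restart the stage. The Pega activity names appear in comments; the Python functions are stand-ins passed in by the caller.

```python
def bulk_restart_stage(outdated_assignments, open_and_lock, restart_stage):
    """Apply Assign-.OpenAndLockWork then pxRestartStage to each assignment."""
    moved, failed = [], []
    for assignment in outdated_assignments:
        work_item = open_and_lock(assignment)  # Assign-.OpenAndLockWork
        if work_item is None:
            failed.append(assignment)          # lock could not be acquired
            continue
        restart_stage(work_item)               # pxRestartStage (or SetTicket/pxChangeStage)
        moved.append(assignment)
    return moved, failed
```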
e. Using direct inheritance and dynamic class referencing (DCR)
i. This approach is a hybrid solution that involves circumstancing for shared work pool-
level rules and direct inheritance for case-specific rules. For case-specific rules,
differentiation of a flow’s current state from its desired future state is accomplished
using direct inheritance and DCR.
ii. P - Classes defined within an application’s ruleset stack are requestor-level
information, so they are compatible with the Case Designer’s ability to display a case’s
current state. It does so in conjunction with how the application rule’s Cases & data
tab is configured
iii. C - Creating classes takes extra time, and the approach does not scale
4. Using problem flow to resolve flow issues.
a. Pega Platform provides two standard problem flows: FlowProblems for general process
configuration issues, and pzStageProblems  for stage configuration issues. The problem
flow administrator identifies and manages problem flows on the Flow Errors landing
page.
b. Problem flows can arise due to stage configuration changes, such as when a stage is
removed or relocated. When an assignment is unable to process due to a stage-related
issue, the system starts the standard pzStageProblems flow
i. To resolve the issue, the operator can use Change Stage to manually move the
case to another stage
5. Managing problem flows from Flow Error Landing Page
a. The pzLPProblemFlows ListView report associated with this landing page queries
worklist and work assignments where the pxFlowName property value starts with
FlowProblems
b. These flow error assignments were initially routed to the operator and assignment type
returned by the nonfinal getFlowProblemOperator activity. The default values are
Broken Process and Workbasket, respectively
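The filter the pzLPProblemFlows report applies can be sketched as below: keep only assignments whose pxFlowName starts with "FlowProblems". Assignments are simplified to dictionaries for this illustration.

```python
def problem_flow_assignments(assignments):
    """Return the assignments currently parked in a problem flow."""
    return [
        a for a in assignments
        if a.get("pxFlowName", "").startswith("FlowProblems")
    ]
```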
c. Use following features to fix problem
i. Use Resume Flow if you want to resume flow execution beginning at the step after
the step that paused. For example, a flow contains a decision shape that evaluates a
decision table. If the decision table returns a result that does not correspond to a
connector in the flow, add the connector to the decision shape and resume the flow
ii. Use Retry Last Step to resume flow execution by re-executing the step that paused.
Use this option to resume flow execution that pauses due to a transient issue such as
a connector timeout or failure or if you resolve an error by reconfiguring a rule used
by the flow. For example, if you add a missing result to a decision table to fix a flow
error, select Retry last step to reevaluate the decision table to determine the
appropriate flow connector.
iii. Use Restart Flow to start the flow at the initial step. If an issue requires an update to
a flow step that the application already processed, resume the flow at the initial
step.
iv. Use Delete Orphan Assignments to delete assignments for which the work item
cannot be found. Use this option to resolve flow errors caused by a user lacking
access to a needed rule, such as an activity, due to a missing ruleset or privilege.
Selecting Delete Orphan Assignments resolves an assignment that a user is otherwise
unable to perform.
6. Challenge
a. Tips for flow changes in production
i. Create the maintenance activity
ii. Create a new DevMaintenance ruleset - place this type of maintenance activity in its
own ruleset that is accessible only by developers and administrators

EXTENDING AN APPLICATION

1. Reasons for extending
a. An enterprise has planned to sequentially roll out extensions to a foundation application
due to budgetary and development resource limitations
b. Extend the production application to a new set of users
c. Split the production application to a new set of users
d. In either situation (b or c), the resulting user populations access their own
application derived from the original production application.
2. Deployment approaches - when extending or dividing an application, you can host the
user populations on either a new database or the original database.
a. Deploying to new database
i. data in both applications are isolated from each other
ii. can use the same table names in each database
iii. Use ruleset specialization to differentiate the rules specific to each application's user
population and no need to use class specialization.
b. Deploying to the original database
i. use class specialization to differentiate the data
ii. Class specialization creates new Data-Admin-DB-ClassGroup records and work pools
iii. As a result, case data is written to tables that are different from the original tables.
iv. Security enforcement between applications hosted on the same database is essential
v. Unlike case data, assignments and attachments cannot be stored to different
database tables. You can avoid this issue by using Pega’s multitenant system
vi. Applications, cases, and assignments contain various organization properties. Use
these properties as appropriate to restrict access between applications hosted in the
same database.
c. Suppose the new user population is associated with a new division and there is a
requirement to prevent an operator in the new division from accessing an assignment
created by the original division. The easiest solution is to implement a Read Work-
Access Policy that references the following Work- Access Policy Condition.
i. pxOwnerDivision = Application.pyOwningDivision etc
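The access policy condition above can be sketched as a simple equality check: a case is readable only when its owning division matches the owning division of the application the operator is currently using.

```python
def can_read_case(px_owner_division, py_owning_division):
    """pxOwnerDivision = Application.pyOwningDivision"""
    return px_owner_division == py_owning_division
```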
d. Extending an application to a new user population
i. the extended application can be
1. An application previously defined as a foundation application
2. An application that becomes a template, framework, blueprint, or
model application on top of which new implementations are built
e. Extending the application to a new database
i. When deploying to a new database, ruleset specialization is sufficient to differentiate
the existing application’s user population
f. Extending the application to an existing database
i. To support a new user population within an existing database, run the New
Application wizard to generate an application that extends the classes of the existing
application’s case types
g. Splitting an application's existing user population
i. In some situations, you may want to split an application's existing user population
into subsets. The resulting subsets each access a user population-specific application
built on the original application.
ii. When active cases exist throughout a user population and there is a mandate to
subdivide that user population into two distinct applications, reporting and security
become problematic
h. Moving a subset of the existing user population to a new database
i. If you create a new database to support a subdivided user population, and
immediate user migration is not required, you can gradually transition user/account
data from the existing database to the new database.
ii. Copy resolved cases for a given user/account to the new database, but do not purge
resolved cases from the original system immediately. Wait until the migration
process is complete for that user/account
iii. Optionally, modify case data organization properties to reflect the new user
population.
i. Creating subsets of the existing user population within the original database
i. The most complex situation is when immediate user population separation is
mandated within the same database. To support this requirement, a subset of the
existing cases must be refactored to different class names
j. Naming conventions of Case type class
i. Avoid refactoring every case type class name when splitting a user population within
an existing database. Refactoring class names is a time-consuming process.
Businesses prefer the most expedient and cost effective change management
process. The most cost-effective approach keeps the largest percentage of users in
the existing work pool class and moves the smaller user population to a new work
pool class
ii. Case type class names need not exactly reflect their user populations. An
application's name, its organization properties, and associated static content are
sufficient to distinguish one specialized application from another
3. General
a. The FW abbreviation is an optional, not a required, naming convention.

AI AND ROBOTIC AUTOMATION

1. General
a. You could also design a solution that uses AI and robotic automation capabilities in
tandem; they are not mutually exclusive technologies
b. Pega AI Capabilities
i. Intelligent Virtual Assistant/Customer Decision Hub
c. Robotics Automation
i. RDA/RPA/Workforce Intelligence
2. AI
a. The Adaptive Decision Manager (ADM) service is an example of adaptive learning
technology.
b. This technology can be a powerful ally in building a customer's profile, preferences, and
attributes.
c. An AI solution can also predict the next action a customer will take. This ability allows an
organization to serve the customer in a far more effective way.
d. AI solution can guide a customer service representative to offer products or services
that the customer actually wants, based on the previous behavior of the
customer. Predictive Analytics provides this capability.
e. The Customer Decision Hub combines both predictive and adaptive analytics to provide
a seamless customer experience and only shows offers relevant to that customer
f. Customer Decision Hub is the centerpiece of the Pega Sales Automation, Pega Customer
Service, and Pega Marketing applications.
g. AI uses natural language processing (NLP) to detect patterns in text or speech to
determine the intent or sentiment of the question or statement
h. The Intelligent Virtual Assistant is an example of NLP in action.
3. Robotics Automation
a. RDA  usage of RDA is also known as user-assisted robotics
b. RPA  you assign a software robot to perform time-consuming, routine tasks with no
interaction with a user. This is also known as unattended robotics
c. WFI - connects desktop activity monitoring to cloud-based analytics to gain insight about
your people, processes, and technology
d. WFI is a technology that can identify where a user is repeatedly copying and pasting,
switching screens, or typing the same information over and over. This allows the
organization to detect areas for process improvement. When you implement changes to
those processes, the organization can realize significant time and money savings.
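A toy sketch of the kind of pattern Workforce Intelligence looks for: detecting when the same short action sequence (for example, copy, switch window, paste) repeats many times in a user's desktop event stream. The event names and thresholds are hypothetical.

```python
from collections import Counter

def repeated_sequences(events, length=3, min_repeats=5):
    """Return action subsequences of `length` seen at least `min_repeats` times."""
    windows = [tuple(events[i:i + length]) for i in range(len(events) - length + 1)]
    counts = Counter(windows)
    return {seq: n for seq, n in counts.items() if n >= min_repeats}
```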
4. Challenge
a. Robotic automation is suitable for tasks that are routine and highly manual.
