
CHAPTER 4

PROPOSED APPROACH AND SYSTEM ARCHITECTURE

4.1 Proposed Approach


RPA embodies a collection of tools and techniques that allow business owners to
automate repetitive manual tasks. The intrinsic value of RPA is beyond dispute: automation
reduces errors and costs and thus allows us to increase overall business process
performance. However, RPA tools require manual effort with respect to the identification, elicitation, and
programming of the tasks to be automated. At the same time, several techniques exist that allow
us to track the exact behavior of users in the front end in great detail.

4.2 Machine Learning Algorithm


Machine learning algorithms are programs (math and logic) that adjust themselves to
perform better as they are exposed to more data. The “learning” part of machine learning means
that those programs change how they process data over time, much as humans change how they
process data by learning. So, a machine learning algorithm is a program with a specific way of
adjusting its own parameters, given feedback on its previous performance in making predictions
about a dataset. An RPA solution operates based on a rule base (i.e., a set of rules). Training such an
RPA solution essentially means learning the rules from the IO-log and building up such a rule base.
We discuss an algorithm to learn simple rules, which we call the simple rule learner (SRL). The
SRL algorithm consists of two phases: first learning the conditions and then learning the
responses.

4.2.1 The Condition Learner in the SRL Algorithm

We define a function mop : P(U_F⃗ × U_F⃗) → B(P(U_F × U_F)) for transforming a set of cases
into a multiset of pairs of forms, and a function mof : P(U_F⃗) → B(P(U_F)) for returning a multiset
of forms from the same given IO-log, which is adopted to generate the set of cases. Let q ∈ U_fld be the
case identifier, and let X = mop(FC(S, q)) be a multiset of pattern candidates, where each pattern
candidate pc = (F, F′) ∈ X satisfies F ∈ U_F, F′ ∈ U_F, F ≠ F′. Let Y = mof(L) be a multiset of forms, and let M be a multiset.
The function Count_M : M → N⁺ returns the multiplicity of an element in the multiset M.
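The multiset machinery above can be sketched with Python's Counter. This is an illustrative reading, not the thesis's actual implementation: a case is represented here simply as a list of (F, F′) form pairs, and forms as plain strings; mop flattens the cases into a multiset of pairs, mof builds a multiset of forms, and count plays the role of Count_M.

```python
from collections import Counter

def mop(cases):
    # Transform a set of cases into a multiset of form pairs; duplicates
    # across cases yield multiplicities greater than one.
    return Counter(pair for case in cases for pair in case)

def mof(forms):
    # Return a multiset of forms.
    return Counter(forms)

def count(m, element):
    # Count_M : M -> N+, the multiplicity of an element in multiset m.
    return m[element]

# Hypothetical cases: the pair (FI, FN) occurs in two cases, (FI, FX) in one.
cases = [[("FI", "FN")], [("FI", "FN")], [("FI", "FX")]]
X = mop(cases)
print(count(X, ("FI", "FN")))  # 2
```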
As discussed, based on the input IO-log, we propose the SRL algorithm to learn a set of rules in
order to obtain our rule base. A condition of a rule checks whether an FA meets some properties. For
learning such properties, we provide an algorithm consisting of 5 steps. An overview of these steps,
together with an example, is shown in Figure 7. As illustrated in Figure 7, each row of the input table
corresponds to a labeled FA (Form Action) in step (4). Based on this input table, we train a model. The
model should be able to represent the logic of “it will return True if the Result field is changed from
Estimating to Pass”.

Fig.4.2.1 Adopting learning algorithm of condition functions
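The core idea of the condition learner can be sketched as follows. This is a minimal illustration, not the thesis's actual 5-step algorithm: from labeled FAs we enumerate candidate conditions of the form “field changed from v_in to v_out” and keep the one with the highest precision on the labels. The field names and values are taken from the running example.

```python
def learn_condition(labeled_fas):
    """labeled_fas: list of (entering_state, leaving_state, label) triples,
    where each state is a dict mapping field names to values."""
    best, best_precision = None, 0.0
    # Candidate conditions: "field changed from v_in to v_out".
    candidates = {(f, ent[f], lv[f])
                  for ent, lv, _ in labeled_fas for f in ent
                  if ent[f] != lv[f]}
    for field, v_in, v_out in candidates:
        matched = [lbl for ent, lv, lbl in labeled_fas
                   if ent.get(field) == v_in and lv.get(field) == v_out]
        if matched:
            # Precision of the candidate on the FAs it matches.
            precision = sum(matched) / len(matched)
            if precision > best_precision:
                best, best_precision = (field, v_in, v_out), precision
    return best

fas = [({"Result": "Estimating"}, {"Result": "Pass"}, True),
       ({"Result": "Estimating"}, {"Result": "Fail"}, False),
       ({"Result": "Estimating"}, {"Result": "Pass"}, True)]
print(learn_condition(fas))  # ('Result', 'Estimating', 'Pass')
```

The learned triple encodes exactly the logic “return True if the Result field is changed from Estimating to Pass”.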


4.2.2 The Response Learner in the SRL Algorithm

Using the SDAP scenario, we explain the copy response to illustrate one type of
response function. A copy response means that the function δ in Def. 11 is an identity function
(i.e., the input equals its output). The main issue in this part is obtaining the form field mapping
λ, which is defined in Def. 11. There are two steps in this part, and the learning process for the
SDAP scenario's response function is illustrated in Figure 8. As illustrated in Figure 8,
step (2) detects the copy relations from the set of aligned cases. The values held by the fields
“Name”, “Age”, and “Gender” in the “leave” state of FI's FAs are always equal to the values in
the fields “Name”, “Age”, and “Gender” of the “leave” state in FN's FAs. Thus, these three copy
relations are detected. Furthermore, the set of these three copy relations is the field mapping λ,
and the identity function is the value transfer function δ; together they build the copy response
function for the scenario.

Fig.4.2.2 Adopting learning algorithm of response functions
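The copy-relation detection of step (2) can be sketched like this. It is an illustrative reading of the idea above, not the actual implementation: a field pair (src, dst) is a copy relation when the value of src in FI's leave state always equals the value of dst in FN's leave state across all aligned cases; the set of such pairs is the field mapping λ, and δ is the identity.

```python
def detect_copy_relations(aligned_cases):
    """aligned_cases: list of (leave_FI, leave_FN) dict pairs."""
    relations = set()
    src_fields = aligned_cases[0][0].keys()
    dst_fields = aligned_cases[0][1].keys()
    for s in src_fields:
        for d in dst_fields:
            # A copy relation must hold in every aligned case.
            if all(fi[s] == fn[d] for fi, fn in aligned_cases):
                relations.add((s, d))
    return relations  # this set is the field mapping lambda

# Two aligned cases from the SDAP-style example (values are illustrative).
cases = [({"Name": "Bob", "Age": "30", "Gender": "M"},
          {"Name": "Bob", "Age": "30", "Gender": "M"}),
         ({"Name": "Eve", "Age": "25", "Gender": "F"},
          {"Name": "Eve", "Age": "25", "Gender": "F"})]
print(sorted(detect_copy_relations(cases)))
```

On this input the three relations Name→Name, Age→Age, and Gender→Gender are detected, matching the three copy relations described above.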

4.2.3 The Rule Base and The Rule Selector in The SRL Algorithm

In the rule selector, rules are picked based on a certain criterion; in the SRL
algorithm, we use precision as this criterion. In principle, each rule classifies an observed FA as
True or False. Since we are in the training phase, we know the true positives and false positives,
from which we calculate the precision measure. As a result, if an observed FA satisfies the
conditions of multiple rules, we select the rule with the highest precision measure from the SRL rule
base.
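The selection criterion described above can be sketched as follows, with precision computed as TP / (TP + FP) from the training counts. The rule contents and counts are illustrative placeholders.

```python
def precision(tp, fp):
    # Precision from the true/false positives known at training time.
    return tp / (tp + fp) if tp + fp else 0.0

def select_rule(rule_base, fa):
    """rule_base: list of (condition, response, tp, fp) tuples.
    Among rules whose condition the FA satisfies, pick the most precise."""
    applicable = [(precision(tp, fp), resp)
                  for cond, resp, tp, fp in rule_base if cond(fa)]
    return max(applicable)[1] if applicable else None

rules = [(lambda fa: fa["Result"] == "Pass", "notify-pass", 9, 1),
         (lambda fa: True,                   "log-only",    5, 5)]
print(select_rule(rules, {"Result": "Pass"}))  # notify-pass
```

Both rules match the observed FA, but the first has precision 0.9 against 0.5, so it is selected.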

4.3 System Architecture


Adopting RPA raises two main problems: (1) domain knowledge requirements, as the
consultants in an RPA project must know every detail of the business in order to model the
RPA robot's workflow, and (2) labour intensity, as current RPA products only provide a low-level
movement workflow modeler and recorder, so every mouse click and keyboard
stroke must be modeled or recorded, otherwise the robot would not know how to carry out the
business. To solve these two problems, we provide a novel end-to-end approach for RPA,
called the Form to Rule Self-Learning Approach (F2R Approach), which allows the robot to learn from
the user, with fewer domain knowledge requirements and less human intervention.

To empower the RPA robot with the ability of self-learning, we propose three steps: (1) Task
identification: know what the user does or did. (2) Rule deduction: learn from the user. Repetitive tasks are
the target scenarios for adopting RPA; as the tasks are repetitive, it is reasonable that they can be
detected from the IO-log, and we propose a standard element for describing those tasks, which is
called a rule. (3) Rule application: work with the user. Obviously, the "then" part of a rule
can be executed automatically by the robot. Thus, in operation, the robot listens on forms; if a
certain action happens, it performs the suitable rule without hesitation. These three
steps are illustrated in the Figure below.
Fig.4.3 The Form to Rule (F2R) Approach

4.3.1 RPA Robot Structure in F2R Approach


With this approach, the RPA robot can collect user interactions (form actions, the IO-log), learn rules
from the IO-log, and apply the rules while working with the user. For each of these abilities we design a
component: ear, brain, and arm. The ear hears the FAs; the brain deduces the rules and
memorizes the IO-log; and the arm translates the response FA, which is the result of a rule,
into actions on the IT system. Regarding the implementation of each component, the ear is a Chrome
extension (running in Chrome's back end) written in JavaScript; the brain is a web service in
Python; and the arm is built on Selenium (web-page automation software) in Python.
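The cooperation between the components can be sketched in a few lines. This is an illustrative stand-in, not the actual system: in reality Ear is a Chrome extension sending FAs to the Brain web service, and Arm drives Selenium; here the rule base and the FA are hypothetical placeholders so the flow stays self-contained.

```python
class Brain:
    def __init__(self, rules):
        self.io_log, self.rules = [], rules  # memorize FAs; hold the rule base
    def on_fa(self, fa):
        self.io_log.append(fa)               # store the received FA
        for condition, make_response in self.rules:
            if condition(fa):
                return make_response(fa)     # response FA, handed to Arm
        return None

class Arm:
    def execute(self, response_fa):
        # In the real robot this is translated into Selenium movements.
        print("submitting", response_fa)

# Hypothetical rule: when a Result of "Pass" is heard, fill a notification form.
rules = [(lambda fa: fa["Result"] == "Pass",
          lambda fa: {"form": "Notification", "Student": fa["Name"]})]
brain, arm = Brain(rules), Arm()
resp = brain.on_fa({"Name": "Bob", "Result": "Pass"})
if resp:
    arm.execute(resp)
```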

Fig.4.3.1 Collect information and pack into Form Action (FA)

4.3.2 RPA robot collecting IO-Log and doing response


Collecting the IO-log involves two main steps for collecting a single FA. Firstly, at the moment
the user enters an arbitrary web page, Ear sends this page's address (URL) to Brain, and
Brain tests whether this page contains a form by matching the address against the form settings.

Fig.4.3.2 RPA collecting IO-Log

If this page does not have a form, Brain asks Ear to stop listening on this page; otherwise,
Brain sends the detailed form setting to Ear, and Ear collects information from the fields and puts
it into the entering state of the FA. Secondly, after detecting that the user has finished the form, Ear
collects information from the fields again and puts it into the leaving state of the FA; Ear then packs
all the information into an FA and sends it to Brain, and Brain stores the received FA.

The advantages of collecting the IO-log based on form settings are accuracy, effectiveness,
and privacy. The robot collects information from specific elements on the web page and
ignores other valueless elements and the user's arbitrary motions; this mechanism is more accurate
and effective than current RPA products. Moreover, Ear only listens to pages that have
forms, so the user's interactions with other pages are ignored, which effectively protects the user's
privacy.
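The two-step collection of a single FA can be sketched as follows. This is an illustrative model, not the extension's real code: the form setting, field names, and the read_field callback are placeholders; the URL match stands in for Brain's test, and the two field reads stand in for the entering and leaving snapshots.

```python
def matches_form_setting(url, form_settings):
    # Brain's test: does any form setting cover this page's address?
    return next((s for s in form_settings if url.startswith(s["url"])), None)

def collect_fa(url, read_field, form_settings):
    setting = matches_form_setting(url, form_settings)
    if setting is None:
        return None  # Brain tells Ear to stop listening on this page
    # Step 1: snapshot the listed fields on page entry (entering state).
    entering = {f: read_field(f) for f in setting["fields"]}
    # ... the user fills in and finishes the form ...
    # Step 2: snapshot the fields again (leaving state) and pack the FA.
    leaving = {f: read_field(f) for f in setting["fields"]}
    return {"url": url, "enter": entering, "leave": leaving}

form_settings = [{"url": "https://example.org/leave",
                  "fields": ["Name", "Days"]}]
page = {"Name": "Bob", "Days": "3"}  # hypothetical field values on the page
fa = collect_fa("https://example.org/leave", page.get, form_settings)
print(fa["enter"])  # {'Name': 'Bob', 'Days': '3'}
```

Pages whose URL matches no form setting return None, mirroring how Ear ignores them and so preserves the user's privacy.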
For the response, after an FA is stored, Brain matches this FA against the rules. If the FA meets
a certain rule's condition, Brain generates a response FA from this rule. Brain then sends
this response FA to Arm, which reaches the new form and translates the FA into detailed movements;
after this, Arm executes the submit motion. This procedure is illustrated by the Figure below.
Fig.4.3.3 RPA doing response
