
Automated Software Testing With Machine Learning

by

Shehjad Ali Taus
19101539

Anila Tabassum
19101157

Riazul Hasan
19301168

Golam Rasul
19301126

Khondoker Al Muttakin
22241176

A thesis submitted to the Department of Computer Science and Engineering
in partial fulfillment of the requirements for the degree of
B.Sc. in Computer Science

Department of Computer Science and Engineering


Brac University
January 2023

© 2023. Brac University


All rights reserved.
Declaration
It is hereby declared that

1. The thesis submitted is our own original work while completing a degree at
Brac University.

2. The thesis does not contain material previously published or written by a
third party, except where this is appropriately cited through full and accurate
referencing.

3. The thesis does not contain material which has been accepted, or submitted,
for any other degree or diploma at a university or other institution.

4. We have acknowledged all main sources of help.

Student’s Full Name & Signature:

Shehjad Ali Taus
19101539

Anila Tabassum
19101157

Riazul Hasan
19301168

Golam Rasul
19301126

Khondoker Al Muttakin
22241176

Approval
The thesis/project titled “Automated Software Testing With Machine Learning”
submitted by

1. Shehjad Ali Taus (19101539)

2. Anila Tabassum (19101157)

3. Riazul Hasan (19301168)

4. Golam Rasul (19301126)

5. Khondoker Al Muttakin (22241176)

of Summer 2023 has been accepted as satisfactory in partial fulfillment of the
requirement for the degree of B.Sc. in Computer Science on 18 January, 2023.

Examining Committee:

Supervisor:
(Member)

Md. Golam Rabiul Alam, PhD

Professor
Department of Computer Science and Engineering
Brac University

Program Coordinator:
(Member)

Md. Golam Rabiul Alam, PhD

Professor
Department of Computer Science and Engineering
Brac University

Head of Department:
(Chair)

Sadia Hamid Kazi, PhD

Chairperson and Associate Professor
School of Data and Science
Department of Computer Science and Engineering
Brac University
Abstract
Software testing automation with machine learning entails automating the software
testing process using machine learning models and techniques. This can involve
processes such as test case prioritization, test case selection, and automated test
case generation. Machine learning can be used to predict software flaws, rank
issues, and recommend remedies. Additionally, test coverage analysis, test efficiency
improvement, and process optimization can all be carried out using machine learning.
Overall, the use of machine learning in software testing automation can enhance the
speed, accuracy, and efficiency of the testing process, resulting in higher-quality
software and a shorter time to market. For software developers, identifying, locating,
and fixing defects in software is a labor-intensive process. Traditional testing
relies on human search and data analysis. Because humans are prone to incorrect
assumptions and biased outcomes, defects are frequently overlooked. Software testers
benefit from more accurate knowledge because machine learning enables systems to
learn and apply the acquired knowledge in the future. Deep learning is capable of
performing a variety of complex machine learning tasks, including code completion,
defect prediction, bug localization, clone identification, code search, and learning
API sequences. Over the years, researchers have proposed a variety of strategies for
automatically modifying programs.

Keywords: Machine Learning, Deep Learning, API sequence, Defect Prediction,
Software Testing, Automated Test Case Generation, Test Evaluation.

Acknowledgement
Firstly, all praise to the Great Allah, for whom our thesis has been completed
without any major interruption.
Secondly, to our co-advisor Teacher Name sir for his kind support and advice in our
work. He helped us whenever we needed help.
Thirdly, Name and the whole judging panel of Conference Name. Though our paper was
not accepted there, all the reviews they gave helped us a lot in our later work.
And finally to our parents, without whose continuous support this may not have been
possible. With their kind support and prayers we are now on the verge of our
graduation.

Table of Contents

Declaration

Approval

Abstract

Acknowledgement

Table of Contents

List of Figures

Nomenclature

1 Introduction
1.1 Research Problem
1.2 Research Objective

2 Related Work

3 Problem Definition and Methodology
3.1 Problem Definition
3.2 Methodology

4 Advantage and Challenges
4.1 Advantage
4.2 Challenges

5 Dataset and Data Analysis
5.1 Dataset
5.1.1 Images
5.1.2 Annotations
5.1.3 User interface metadata
5.1.4 Real-world data
5.1.5 Evaluation data
5.2 Data Analysis
5.2.1 Test case generation rate
5.2.2 Test case coverage
5.2.3 Defect detection rate
5.2.4 False positive rate
5.2.5 Time to detect defects
5.2.6 Test optimization
5.2.7 Test case prioritization
5.2.8 Integration
5.2.9 Usability

6 Ethical Considerations

7 Tool Analysis
7.1 Testim
7.2 TestCraft
7.3 Applitools
7.4 Perfecto
7.5 Mabl
7.6 Functionize
7.7 Sauce Labs

8 Conclusion

Bibliography

Appendix A How to install LaTeX

Appendix B Overleaf: GitHub for LaTeX projects
List of Figures

3.1 A high-level flow overview of the mechanism of the neural network
3.2 A high-level flow overview of the mechanism of the reinforcement learning
algorithm Q-Learning

7.1 Workflow of Testim
Nomenclature

The following list describes several symbols & abbreviations that will be used
later within the body of the document.

ϵ Epsilon

υ Upsilon
Chapter 1

Introduction

Automated testing involves utilizing software to run test cases automatically while
looking for bugs in a software program. This can include both functional testing,
which confirms that the program behaves as intended, and non-functional testing,
which examines factors like performance, security, and compliance. Automated
testing can be employed for a variety of software, including desktop, mobile, and
web applications.
The use of automated testing has a number of advantages. Compared to manual
testing, it can save time and money because the same test cases can be executed
repeatedly without the need for human intervention. Automated tests can also be run
more often and concurrently, helping to find and correct errors early in the
development process. The reduced scope for human error also makes automated tests
potentially more accurate and reliable than manual testing.
There are various kinds of automated testing tools, including frameworks for unit
testing, functional testing, and test automation. Selenium, Appium, TestComplete,
and JUnit are a few examples of automated testing tools.
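To make the idea concrete, a minimal automated functional test might look like the
following Python sketch using Selenium WebDriver; the URL and element locators are
hypothetical, and a locally installed ChromeDriver is assumed.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Minimal functional test sketch; URL and locators are hypothetical.
driver = webdriver.Chrome()  # assumes ChromeDriver is installed locally
try:
    driver.get("https://example.com/search")
    driver.find_element(By.NAME, "q").send_keys("laptop")
    driver.find_element(By.ID, "search-button").click()
    # Functional check: at least one result should be rendered.
    results = driver.find_elements(By.CSS_SELECTOR, ".product-item")
    assert len(results) > 0, "search returned no results"
    print("Test passed")
finally:
    driver.quit()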

1.1 Research Problem:


Organizations may face particular difficulties when implementing testing automation
for machine learning (ML) systems, in addition to the difficulties associated with
software testing automation in general:

Complexity: ML systems can be very complicated, with many interconnected parts and
unpredictable behavior. Because of this, creating efficient automated tests for ML
systems may be challenging.

Data Dependency: Because ML systems rely on the data they are trained on, they are
highly sensitive to changes in that data and can consequently behave very
differently. This makes it challenging to ensure that automated tests are accurate
representations of real-world use scenarios.

Lack of Interpretability: It can be challenging to understand why a system is
acting in a particular way because many ML systems are difficult to comprehend.
This can make it difficult to find and address ML system flaws.

Continuous Learning: Many ML systems are built to learn and improve over time, so
it can be challenging to verify that automated tests remain valid and relevant as
the system develops.

Lack of Standardization: There is currently no accepted methodology for testing ML
systems, and there are no established tools or best practices for automating ML
testing.

Lack of Domain Knowledge: Creating automated tests for ML systems requires a
thorough understanding of the underlying ML algorithms and approaches, which can be
challenging for developers without a background in ML.

Overfitting: ML models trained on a specific data set can overfit and generalize
poorly to new, unseen data. This makes it challenging to guarantee that automated
tests reflect realistic use situations.

Scalability: Automated testing of ML systems at scale can present substantial
challenges because of the potentially enormous number of inputs and outputs,
making it computationally infeasible to test every possible combination.

1.2 Research Objective:

The following are possible research goals in the area of automated software testing
with machine learning (ML):

Developing new techniques for automatically generating test cases for ML systems:
Researchers may work on approaches for autonomously generating test cases for ML
systems that effectively exercise all pertinent elements of the system and are
representative of real-world use situations.

Improving the interpretability of ML systems: Researchers may concentrate on
developing methods for improving the interpretability of ML systems, which would
make it simpler to understand why a system is acting a specific way and to find
and correct flaws.

Building tools and frameworks for ML testing automation: Researchers may
concentrate on creating tools and frameworks for ML system testing, to make it
simpler for enterprises to deploy testing automation for their ML systems.

Evaluating the effectiveness of ML testing automation techniques: Researchers may
compare several automated testing methods for machine learning in order to uncover
flaws and ensure the quality of ML systems.

Investigating how to improve the scalability of ML testing automation: Researchers
may look into ways to make ML testing automation more scalable, in order to manage
the numerous potential inputs and outputs that are typical of ML systems.

Developing techniques to improve the coverage of ML testing: Researchers may look
into ways to increase the coverage of ML testing, to make sure that all pertinent
situations and edge cases are evaluated.

Developing techniques to improve the maintainability of ML tests: Researchers may
look into ways to make ML tests more maintainable, to reduce the expense of
updating and maintaining tests as ML systems develop.

Developing techniques to improve the reusability of ML tests: Researchers may look
into ways to make ML tests more reusable, to reduce the expense of designing and
maintaining tests across many projects or platforms.

Investigating how to automate the testing of continuously learning ML systems:
Researchers may look at automating the testing of ML systems that are always
learning, to make sure that tests remain valid and applicable as the system
develops.

Chapter 2

Related Work

Starting with research paper [2], it is an analysis of image rectification software
test automation using a robotic arm. Phone software features can be tested with the
use of robotic arms, for example, image rectification on mobile devices. Essentially,
it is a precise test automation system for testing and validating the computer
vision algorithms used for image rectification in a mobile phone. A robotic arm
setup provides the flexibility to run test cases using multiple speeds, rotations,
and tilting angles. The objective behind this project is to design and develop test
automation for the image rectification feature and to use a robotic arm for
automating this use case. Traditional manual testing requires hard effort and is a
lengthy process; this approach detects problems easily and makes the work lighter.
Research paper [5] revisits test impact analysis in continuous testing from the
perspective of code dependencies. In a continuous process, automated test cases are
executed in a limited proportion to improve the quality of existing code. A
continuous process ensures a better pattern and reduces hassle. If a code smell
occurs, we have to be consistent throughout the development process. Continuous
testing has a great impact on the quality of software. The term CI (Continuous
Integration) is widely used in modern software development, including the practice
of frequently integrating developers' code changes into a central code repository.
To ensure the quality of the integrated code, developers need to run sets of test
cases for each code integration. Due to continuous change there is the possibility
of failures, yet a certain degree of dependency between test cases and source code
files may remain.
Research paper [1] discusses the effectiveness of metamorphic testing in solving
the oracle problem. A software testing system has an oracle which verifies the
results of test case execution, and problems occur when it does not exist or is
beyond the budget. From that point of view, the metamorphic testing approach
appears: it is a testing approach that solves the oracle problem through multiple
techniques. Even a small number of diverse metamorphic relations can identify
faults and alleviate the oracle problem. In the current situation, software
development is growing with the massive demand for software among people, so
effective testing alternatives must be used. Metamorphic testing can be an
alternative that prevents oracle problems within a system.
Research paper [6] introduces a new automation testing kit that was announced and
tested on the nanosatellite flight software of a CubeSat space mission to ensure
flight software quality. Before entering the space field, several testing
techniques were used to assess flight software. When the developers approached
manual testing with classic testing strategies, they found 12 bugs that were not
covered in less than three days. Fuzz testing improved the whole quality of the
software through automation.
According to research paper [4], we find the limitations of automation in testing
and generating test cases. It also tells us that the user environment is immensely
important and that we should always pay attention to the pain points of tool users.
A user has no user manual, and environmental faults are difficult to debug; such
faults may lead to incorrect results or increase user complaints.

According to research paper [3], artificial intelligence (AI) is a field of
computer science and engineering focused on the creation of intelligent machines
that can perform tasks without explicit human instruction. AI technologies and
techniques, such as machine learning, deep learning, and natural language
processing, have achieved significant progress in recent years and are being
applied in a variety of fields, including software testing. In software testing,
AI algorithms and techniques can be used to evaluate information, analyze data,
and perform tasks in order to ensure the quality and functionality of software.

Chapter 3

Problem Definition and Methodology

3.1 Problem Definition:


The problem that this software testing automation tool aims to solve is the efficient
and effective execution of software testing. Software testing is an essential part of
software development, but manual testing can be time-consuming and prone to hu-
man error. Automation of testing processes can improve the speed and accuracy of
testing, reducing the time and resources required to ensure software quality. This
tool is designed to automate the testing process for software applications by provid-
ing an automated way to execute test cases, track test results, and generate reports.
The tool should be able to execute tests for different types of software applications
and environments, and it should be easy to use for developers and testers. The
goal of this tool is to improve the efficiency and effectiveness of software testing by
automating repetitive and time-consuming tasks, reducing the potential for human
error, and providing detailed and accurate test results in a timely manner. The tool
should also be flexible and easy to use, allowing developers and testers to quickly
and easily set up and run tests, and easily customize the test process to fit their
specific needs.

3.2 Methodology:
We train our bots to understand UI elements through the development of a deep
neural network and to pick them out with the help of computer vision. We have a set
of inputs that we pass to the machine learning black box, which produces an output
that we try to match against the labels provided with each screenshot from the
first batch of inputs. If they match, we are done with the training; if they do not
match, we retrain the bots. This way, the bots can recognize elements and their
usage properly.
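As a rough sketch of the training loop just described (not the actual
implementation), the following PyTorch code trains a small convolutional network on
labeled UI-element crops; the folder layout, label set, and network size are
illustrative assumptions.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Labeled UI-element crops, one folder per class, e.g. ui_crops/button/*.png
# (folder layout and classes are illustrative assumptions).
transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
train_set = datasets.ImageFolder("ui_crops", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A small CNN: two conv/pool stages, then a linear classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, len(train_set.classes)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # compare predictions to labels
        loss.backward()
        optimizer.step()
# "Retraining" in the text corresponds to running further epochs until the
# predictions match the provided labels well enough.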

Figure 3.1: A high-level flow overview of the mechanism of the neural network

Figure 3.2: A high-level flow overview of the mechanism of the reinforcement
learning algorithm Q-Learning

Now we need to train our bots so that they can navigate through the app and perform
certain actions that exercise the functional aspects of the app. We have to teach
our bots the correct way to navigate, and we do that through reinforcement
learning. Specifically, we use Q-Learning, a form of reinforcement learning that
enables us to train our bots to find the best navigation path. At a high level, our
bot tries to go from one action to another, and we assign scores based on the
chosen path. Suppose we have a graph with many nodes where we can go from one node
to any other node. To go from node a to node b there are only a few possible paths,
and only one optimum path. When the bots choose a wrong path they receive a
negative score, and they receive a relatively higher score for choosing the right
path. Trained this way over a large number of iterations, the bots become smart
enough to find the best way to navigate in the given circumstances, because the
scores teach them the optimum paths.
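A minimal tabular Q-Learning sketch of this scoring scheme follows; the screens,
transitions, and reward values are illustrative assumptions rather than our actual
configuration.

import random

# Hypothetical app screens; edges are the taps/transitions the bot may take.
graph = {"home": ["search", "cart"], "search": ["home", "product"],
         "product": ["search", "cart"], "cart": ["home"]}
GOAL = "cart"                       # the screen the bot must reach
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in graph for a in graph[s]}

def reward(next_state):
    # Positive score for reaching the goal, negative score for detours.
    return 10.0 if next_state == GOAL else -1.0

for episode in range(500):
    state = "home"
    while state != GOAL:
        actions = graph[state]
        if random.random() < eps:
            action = random.choice(actions)                     # explore
        else:
            action = max(actions, key=lambda a: Q[(state, a)])  # exploit
        nxt = action
        best_next = 0.0 if nxt == GOAL else max(Q[(nxt, a)] for a in graph[nxt])
        Q[(state, action)] += alpha * (reward(nxt) + gamma * best_next
                                       - Q[(state, action)])
        state = nxt
# Greedily following the learned Q values now yields the optimum path.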
Based on these preconfigured AI bots, we are ready to run UI and API testing in one
go. For the testing part we follow a test case scenario and check whether the
output is the desired output. Based on that, we can show test run statuses and
other analytics as well. We also plan to show benchmarks for specific features and
how the software under test performs compared to existing software in the industry.
Suppose we have a product page that appears when we search for a specific keyword.
Our test passes if we get the desired output when the keyword is typed and the
search button is clicked. Afterwards, the developed tool will show benchmarks of
some existing large websites that offer the same feature, how much time they take
to deliver it, and thus show a comparison in an analytics tab.
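A sketch of how such a scenario could be executed and benchmarked follows; the
"app" driver interface, site names, and benchmark timings are all hypothetical.

import time

def run_search_scenario(app, keyword):
    # Returns (passed, elapsed_seconds) for the search-and-results scenario.
    # `app` stands for the bot-driven UI driver; it is a hypothetical interface.
    start = time.perf_counter()
    app.type_into("search-box", keyword)
    app.click("search-button")
    passed = app.page_contains("product-list")
    return passed, time.perf_counter() - start

# Assumed benchmark timings (seconds) of existing sites with the same feature.
BENCHMARKS = {"big-shop.example": 0.8, "mega-store.example": 1.1}

def report(passed, elapsed):
    print(f"Search scenario: {'PASS' if passed else 'FAIL'} in {elapsed:.2f}s")
    for site, t in BENCHMARKS.items():
        verdict = "faster" if elapsed < t else "slower"
        print(f"  vs {site} ({t:.2f}s): {verdict}")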

Chapter 4

Advantage and Challenges

4.1 Advantage:
One advantage of training in this way is that, with modern coding practices and
tools, the selectors attached to a visual element often change on each render. Take
the example of an HTML page generated by a full-stack JS framework: the tags stay
the same, but their identifiers and unique values often change with every render,
as the sketch below illustrates. Because our bots do not pick up visual elements
using the identifiers that traditional automation tools rely on, we avoid losing
time reconfiguring on each render. Also, if the position of the shopping cart
button ever changes from top right to bottom left, then even with a different
identifier our bots are still able to choose the path of clicking the shopping cart
without any hassle. This is the power of machine learning and AI: once trained, the
bots can do the job faster, with easier maintenance and better accuracy.
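A small illustration of the selector-churn problem (the class names are made up):
an identifier-based locator recorded against one build fails on the next, while a
content-based lookup still succeeds.

import re

build_1 = '<button class="css-1x2y3z">Add to cart</button>'
build_2 = '<button class="css-9q8w7e">Add to cart</button>'  # same button, new hash

def find_by_class(html, cls):
    return re.search(f'class="{cls}"', html) is not None

def find_by_text(html, text):
    return re.search(f"<button[^>]*>{text}</button>", html) is not None

assert find_by_class(build_1, "css-1x2y3z")      # locator recorded on build_1
assert not find_by_class(build_2, "css-1x2y3z")  # breaks after a re-render
assert find_by_text(build_2, "Add to cart")      # content-based lookup survives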

4.2 Challenges:
It might be a challenge to collect a huge data set of screenshots of user
interfaces and then label them. To cover the collection part, we can gather
screenshots of a large number of websites by setting up a Node JS script that takes
screenshots of different pages using a library called Puppeteer, as sketched below.
Labeling each of these screenshots can be time-consuming manual labor, but we can
use Amazon Mechanical Turk, which enables us to outsource these labeling tasks. So,
we believe these challenges can be addressed.
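The text proposes a Node JS script with Puppeteer; here is an equivalent sketch in
Python using Playwright instead (the URL list and output folder are illustrative).

import os
from playwright.sync_api import sync_playwright

URLS = ["https://example.com", "https://example.org/login"]  # assumed targets
os.makedirs("screenshots", exist_ok=True)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 800})
    for i, url in enumerate(URLS):
        page.goto(url)
        page.screenshot(path=f"screenshots/{i:04d}.png", full_page=True)
    browser.close()
# The saved images can then be sent out for labeling, e.g. via Amazon
# Mechanical Turk as described above.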

Chapter 5

Dataset and Data Analysis

5.1 Dataset
5.1.1 Images:
A large dataset of images of various user interfaces will be needed to train the
computer vision model. These images should include examples of the different visual
elements that the model will need to detect, such as buttons, text fields, and icons.

5.1.2 Annotations:
The images in the dataset will need to be annotated with information about the
visual elements they contain. The annotations should indicate the location, size,
and type of each visual element in the image or video.
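For illustration, one possible shape of an annotation record is shown below; the
exact format is an assumption, and any scheme carrying location, size, and type
would work.

annotation = {
    "image": "screenshots/0001.png",
    "elements": [
        {"type": "button",     "x": 540, "y": 880, "width": 200, "height": 48},
        {"type": "text_field", "x": 120, "y": 300, "width": 640, "height": 40},
        {"type": "icon",       "x": 24,  "y": 24,  "width": 32,  "height": 32},
    ],
}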

5.1.3 User interface metadata:


Additional metadata about the user interfaces in the dataset, such as the operating
system, resolution, and screen size, will be useful for training the model to handle
different types of user interfaces.

5.1.4 Real-world data:


In order to make the model robust and able to generalize to unseen user interfaces,
it is important to include real-world data in the dataset. This can include images
and videos taken in different lighting conditions, with different backgrounds, and
with different types of noise.

5.1.5 Evaluation data:


A separate dataset will be required to evaluate the performance of the model. This
dataset should also be annotated, but it should not be used during training.

5.2 Data Analysis
5.2.1 Test case generation rate:
The number of test cases generated by the tool in a given period of time. This
metric can indicate how efficient the tool is at generating test cases.

5.2.2 Test case coverage:


The percentage of the codebase that is covered by the generated test cases. This
metric can indicate how well the tool is able to generate test cases that cover the
entire codebase.

5.2.3 Defect detection rate:


The number of defects detected by the tool in a given period of time. This metric
can indicate how well the tool is able to find defects in the software.

5.2.4 False positive rate:


The percentage of test cases that are incorrectly identified as defects. A low false
positive rate indicates that the tool is able to accurately distinguish between defects
and non-defects.

5.2.5 Time to detect defects:


The amount of time it takes for the tool to detect a defect after it is introduced into
the codebase. This metric can indicate how quickly the tool is able to find defects.
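A sketch of how the metrics in Sections 5.2.3-5.2.5 could be computed from a log of
tool verdicts; the record fields and the particular definition chosen for each
metric are assumptions.

from datetime import datetime

# Each record pairs the tool's verdict with ground truth; timestamps are
# present only for real defects (field names are illustrative).
runs = [
    {"verdict": "defect", "actual": "defect",
     "introduced": datetime(2023, 1, 2, 9, 0),
     "detected": datetime(2023, 1, 2, 9, 45)},
    {"verdict": "defect", "actual": "clean", "introduced": None, "detected": None},
    {"verdict": "clean", "actual": "clean", "introduced": None, "detected": None},
]

flagged = [r for r in runs if r["verdict"] == "defect"]
true_pos = sum(r["actual"] == "defect" for r in flagged)
actual_defects = sum(r["actual"] == "defect" for r in runs)

defect_detection_rate = true_pos / actual_defects               # Section 5.2.3
false_positive_rate = (len(flagged) - true_pos) / len(flagged)  # Section 5.2.4
delays = [(r["detected"] - r["introduced"]).total_seconds()
          for r in runs if r["actual"] == "defect" and r["detected"]]
mean_time_to_detect = sum(delays) / len(delays)                 # Section 5.2.5

print(defect_detection_rate, false_positive_rate, mean_time_to_detect)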

5.2.6 Test optimization:

How well the tool optimizes test cases to minimize the number of test cases needed
to achieve a certain level of test coverage.

Adaptive testing: How well the tool adapts its test generation and execution based
on the results of previous tests.

5.2.7 Test case prioritization:


How well the tool prioritizes the test cases to be executed based on the risk of
defects.

5.2.8 Integration:
How well the tool integrates with the development process, such as version control,
issue tracking, and continuous integration systems.

5.2.9 Usability:
How easy it is for developers and QA engineers to use the tool and how well the
tool satisfies their needs.

Chapter 6

Ethical Considerations

Bias:
AI systems can perpetuate bias if the training data contains biased examples. It’s
important to ensure that the training data is diverse and representative of the pop-
ulation the tool will be used on.
Transparency:
The tool’s decision-making process should be transparent and explainable, especially
when it comes to defect detection or test case prioritization.
Accountability:
The tool’s developers should be held accountable for any unintended consequences
or errors in the tool’s decision-making.
Privacy:
The tool should be designed with privacy in mind and should not collect or store
any personal information of the end-users.
Security:
The tool should be designed with security in mind and should protect any sensitive
data it processes from unauthorized access or breaches.
Reliability:
The tool should be reliable and produce consistent results, and it should not produce
results that could potentially harm the end-users.
Fairness:
The tool should be fair and treat all end-users equally and should not discriminate
against any group of users based on their characteristics.
Human oversight:
The tool should be designed to work in conjunction with human testers, rather than
replacing them entirely, as human oversight and evaluation are crucial for any AI
system.

Chapter 7

Tool Analysis

7.1 Testim:
Testim, an automated functional testing tool, employs machine learning and artifi-
cial intelligence to accelerate the creation, execution, and maintenance of automated
tests. Regarding support, the utility works with a variety of systems and browsers,
including Chrome, Firefox, Edge, IE, Safari, and Android. Testim offers basic and
professional plans. The free, extremely basic plan provides very few features. The
pro version, however, supports every feature.

Features:
• Expand test coverage
• Reduce maintenance
• Troubleshoot quickly
• Scale quality

Figure 7.1: Workflow of Testim

7.2 TestCraft
TestCraft is an AI-powered test automation tool for simulation and continuous
testing that operates on top of Selenium. It is also used for monitoring web
applications. Its artificial intelligence (AI) technology aims to eliminate repair
time and cost by instantly adapting to app changes. The greatest part about
TestCraft is that testers can visually build automated, Selenium-based tests using
a drag-and-drop interface and run them on various browsers and work environments
continuously. No coding skills are needed.

Features:
• Speeds up test creation, execution, and maintenance of UI tests for web and
mobile web applications.

• Automate constant regression UI tests quickly.

• No framework setup.

• AI auto-fixes flaky tests.

• No maintenance.

• 4x productivity.

• Run concurrent tests across platforms and environments.

7.3 Applitools
Applitools is an application visual management and AI-powered visual UI testing
and monitoring software. It provides an end-to-end software testing platform pow-
ered by Visual AI and can be used by professionals in engineering, test automation,
manual QA, DevOps, and Digital Transformation teams. Also, the AI and machine
learning algorithms are entirely adaptive: they scan the app's screens and analyze
them like the human eye and brain, but with the power of a machine.

Features:
• Excellent Match Level algorithms, which greatly reduce the amount of assertion
code.

• Drastically reduce maintenance of code.

• Automation Suites sync with rapid application changes.

• Pricing models are economical.

• Easy integration.

• Quick Support.

• Cross Browser Testing.

• Cross-Device Testing.

• Responsive Design Testing.

7.4 Perfecto
Perfecto offers cloud-based web and mobile app testing. It lets you create and exe-
cute your tests, generate reports, and analyze test results. You can run automated
tests or perform live testing, integrate with many of the DevOps tools you already
use, and accelerate your testing cycle for faster time-to-market without compromis-
ing quality.

Features:
• Continuous Testing

• Automated Testing.

• Mobile Application Testing

• Web Testing.

• UX Testing.

• Functional Testing.

• Interactive Testing.

• Performance Testing.

• Regression Testing.

7.5 Mabl
A unified DevTestOps platform that makes it easy for developers and testers to
create and run automated functional UI tests faster and at scale. Some of features
include Creating robust automated tests that are codeless and scriptless,Testing in-
frastructure is fully managed in the cloud, Scale tests infinitely and run them all
in parallel.It generates Auto-healing tests which can adapt to UI changes without
intervention.

Features:
• Low Code. Easily create, run, and manage automated browser, API, and
mobile web tests.

• In-Depth Results. Gain visibility with complete test results that streamline
issue resolution.

• API Testing.

• Auto-Healing.

• SaaS.

• Data-Driven Testing.

• Cross-Browser Testing.

• Mobile Web Testing.

7.6 Functionize
A one-stop shop for all the aforementioned testing, Functionize is a cloud-based
automated testing tool used for functional, performance, and load testing. Addi-
tionally, this tool accelerates test development, diagnosis, and maintenance through
the application of machine learning and artificial intelligence.One of this tool’s best
benefits is that you don’t need to spend a lot of time thinking before running a test;
all you have to do is input your test objectives in plain English, and NLP will gen-
erate functional test cases. Additionally, it runs thousands of tests from all desktop
and mobile browsers in a matter of minutes. The test automation tool Functionize
is unquestionably worthwhile if you’re seeking for one.

Features:
• Lightning-fast creation.

• AI-assisted maintenance.

• Fast debugging and easy edits.

• Adaptive execution at scale.

7.7 Sauce Labs
This is another well-known cloud-based platform for automated testing that makes
use of AI and machine learning. Sauce Labs offers a comprehensive list of mobile
emulators, simulators, and devices, as well as the rate at which customers must
test their applications. Additionally, it asserts to be the biggest continuous research
cloud in the world, offering thousands of actual devices along with about 800 differ-
ent browser and operating system configurations.

Features:
• Cross Browser testing

• Mobile app testing

• Low Code testing

• Error reporting

• Mobile beta testing.

• API testing.

• UI/Visual testing.

Chapter 8

Conclusion

The demand for software testing automation that uses machine learning (ML) is pro-
jected to rise as ML’s use in industry continues to expand. Automatic testing and
validation of ML models can help to make sure they are working as planned and can
increase overall accuracy and efficiency. Additionally, the necessity for automated
testing will probably become much more pressing given the growing complexity of
ML systems and the usage of deep learning in particular.

The industry will also experience a move toward AI-driven testing, which will
emphasize the value of testing automation in the creation and upkeep of software.
This will involve testing not only the machine learning (ML) models but also the
entire systems that use them.

In conclusion, as ML usage grows and ML systems get more complicated, there will
undoubtedly be a greater dependence on and demand for software testing automa-
tion in industry. This will be motivated by the requirement for more effective and
precise testing as well as the requirement to guarantee that ML models and systems
are operating as planned.

Bibliography

[1] H. Liu, F.-C. Kuo, D. Towey, and T. Y. Chen, “How effectively does meta-
morphic testing alleviate the oracle problem?” IEEE Transactions on Software
Engineering, vol. 40, no. 1, pp. 4–22, 2013.
[2] D. Banerjee, K. Yu, and G. Aggarwal, “Image rectification software test au-
tomation using a robotic arm,” IEEE Access, vol. 6, pp. 34075–34085, 2018.
[3] H. Hourani, A. Hammad, and M. Lafi, “The impact of artificial intelligence on
software testing,” in 2019 IEEE Jordan International Joint Conference on Elec-
trical Engineering and Information Technology (JEEIT), IEEE, 2019, pp. 565–
570.
[4] D. Banerjee and K. Yu, “3D face authentication software test automation,”
IEEE Access, vol. 8, pp. 46546–46558, 2020.
[5] Z. Peng, T.-H. Chen, and J. Yang, “Revisiting test impact analysis in continu-
ous testing from the perspective of code dependencies,” IEEE Transactions on
Software Engineering, 2020.
[6] T. Gutierrez, A. Bergel, C. E. Gonzalez, C. J. Rojas, and M. A. Diaz, “System-
atic fuzz testing techniques on a nanosatellite flight software for agile mission
development,” IEEE Access, vol. 9, pp. 114008–114021, 2021.

How to install LaTeX

Windows OS
TeXLive package - full version
1. Download the TeXLive ISO (2.2GB) from
https://www.tug.org/texlive/

2. Download WinCDEmu (if you don’t have a virtual drive) from


http://wincdemu.sysprogs.org/download/

3. To install Windows CD Emulator follow the instructions at


http://wincdemu.sysprogs.org/tutorials/install/

4. Right click the iso and mount it using the WinCDEmu as shown in
http://wincdemu.sysprogs.org/tutorials/mount/

5. Open your virtual drive and run setup.pl

or

Basic MikTeX - TeX distribution


1. Download Basic-MiKTEX(32bit or 64bit) from
http://miktex.org/download

2. Run the installer

3. To add a new package go to Start >> All Programs >> MikTeX >> Maintenance
(Admin) and choose Package Manager

4. Select or search for packages to install

TexStudio - TeX editor


1. Download TexStudio from
http://texstudio.sourceforge.net/#downloads

2. Run the installer

Mac OS X
MacTeX - TeX distribution
1. Download the file from
https://www.tug.org/mactex/

2. Extract and double click to run the installer. It does the entire configuration,
sit back and relax.

TexStudio - TeX editor


1. Download TexStudio from
http://texstudio.sourceforge.net/#downloads

2. Extract and Start

Unix/Linux
TeXLive - TeX distribution
Getting the distribution:
1. TexLive can be downloaded from
http://www.tug.org/texlive/acquire-netinstall.html.

2. TexLive is provided by most operating systems; you can use rpm, apt-get, or
yum to get TexLive distributions.

Installation
1. Mount the ISO file in the mnt directory

mount -t iso9660 -o ro,loop,noauto /your/texlive####.iso /mnt

2. Install wget on your OS (use rpm, apt-get or yum install)

3. Run the installer script install-tl.

cd /your/download/directory
./install-tl

4. Enter command ‘i’ for installation

5. Post-Installation configuration:
http://www.tug.org/texlive/doc/texlive-en/texlive-en.html#x1-320003.4.1

6. Set the path for the directory of TexLive binaries in your .bashrc file

For 32bit OS
For Bourne-compatible shells such as bash, using Intel x86 GNU/Linux and a
default directory setup as an example, edit the ~/.bashrc file and add the
following lines:


PATH=/usr/local/texlive/2011/bin/i386-linux:$PATH;
export PATH
MANPATH=/usr/local/texlive/2011/texmf/doc/man:$MANPATH;
export MANPATH
INFOPATH=/usr/local/texlive/2011/texmf/doc/info:$INFOPATH;
export INFOPATH

For 64bit OS
Edit the ~/.bashrc file and add the following lines:
PATH=/usr/local/texlive/2011/bin/x86_64-linux:$PATH;
export PATH
MANPATH=/usr/local/texlive/2011/texmf/doc/man:$MANPATH;
export MANPATH
INFOPATH=/usr/local/texlive/2011/texmf/doc/info:$INFOPATH;
export INFOPATH

Fedora/RedHat/CentOS:
sudo yum install texlive
sudo yum install psutils

SUSE:
sudo zypper install texlive

Debian/Ubuntu:
sudo apt-get install texlive texlive-latex-extra
sudo apt-get install psutils

Overleaf: GitHub for LaTeX projects

This project was developed using Overleaf (https://www.overleaf.com/), an online
LaTeX editor that allows real-time collaboration and online compiling of projects
to PDF format. In comparison to other LaTeX editors, Overleaf is a server-based
application, which is accessed through a web browser.

