
SDLC and STLC

Test Types
Credit Card Payment Process
EMV-Europay, MasterCard, Visa

--------------------------------------------------------------------------------------------------------------------------------------------------------
SDLC and STLC

SDLC

 Requirements Gathering
 Plan and Analysis
 Design
 Coding
 Testing
 Release & Maintenance

SDLC Models

 Sequential Models
o Waterfall Model
o ‘V’ Model
 Incremental / Iterative Models
o Prototype Model
o Spiral Model
o Agile Model

STLC/Testing Process/Methodology

 Test Strategy
 Test Planning
 Configuration
 Risk Analysis
 Test Design (Test Scenarios, Test Cases and Test Data)
 Test Execution
 Defect Tracking and Reporting
 Test Report/Status Reporting
 Test Closure

Testing

Testing is the process of evaluating a system or its components with the intent to determine whether it satisfies the
specified requirements.

How to Test

 Step 1: Create a test plan
 Step 2: Create test cases and test data
 Step 3: If applicable, create scripts to run the test cases
 Step 4: Execute the test cases
 Step 5: Fix the bugs, if any, and retest the code

 Step 6: Repeat the test cycle until the “unit” is free of bugs

Testing Techniques/Test Case Design strategy

a) Black box Techniques

 Positive
 Negative
 Equivalence Partitioning Classes (EPC)
 Boundary Value Analysis (BVA)
 Decision Table Testing
 State Transition Testing
 Use Case Testing
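Equivalence Partitioning and Boundary Value Analysis from the list above are mechanical enough to sketch in code. The following Python snippet is an illustration only; the "age 18-60" field is a hypothetical requirement:

```python
def boundary_values(low, high):
    """Boundary Value Analysis for an inclusive [low, high] range:
    each boundary plus the values just inside and just outside it."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

def equivalence_classes(low, high):
    """Equivalence Partitioning: one representative value per class
    (below the range, inside it, above it)."""
    return {"invalid_low": low - 10, "valid": (low + high) // 2, "invalid_high": high + 10}

# Hypothetical "age must be 18-60" field:
print(boundary_values(18, 60))       # [17, 18, 19, 59, 60, 61]
print(equivalence_classes(18, 60))   # {'invalid_low': 8, 'valid': 39, 'invalid_high': 70}
```

Running every boundary value as a test input catches the off-by-one comparisons that a single mid-range value would miss.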

b) White box Techniques


 Statement Testing
 Decision Testing
 Condition/Multi Condition Testing
 Mutation Testing

Software Environment
 I-Tier or Standalone Applications
 II-Tier or Client/Server Applications
 III-Tier or Web Applications
 N-Tier or Distributed Applications

Informal Testing
 Exploratory Testing
 Error Guessing

Quality Standards
 ISO Standards
 IEEE Standards
 CMM/CMMI Process Guidelines

Software Business Domains


 BFSI
 ERP
 Healthcare
 Telecom
 Ecommerce
 Others

Test Strategy and Test Plan

The Test Strategy is a high-level document, normally developed by the project manager. It is a formal description of
how a software product will be tested, and it defines the "Software Testing Approach" used to achieve the testing
objectives. The Test Strategy is normally derived from the BRD. Some companies include the "Test Approach" or
"Strategy" inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however,
there is one Test Strategy document and a number of Test Plans, one for each phase or level of testing. The test
strategy document covers test cases and conditions, the test environment, a list of related tasks, pass/fail criteria,
and risk assessment.

Components of the Test Strategy Document

 Scope and Objectives


 Business issues
 Roles and responsibilities
 Communication and status reporting
 Test deliverables
 Industry standards to follow
 Test automation and tools
 Testing measurements and metrics
 Risks and mitigation
 Defect reporting and tracking
 Change and configuration management
 Training plan

The Test Plan describes how the tests will be carried out. It is usually prepared by the Test Lead or Test Manager,
and its focus is to describe what to test, how to test, when to test, and who will do which tests. It is not uncommon
to have one Master Test Plan serving as a common document across the test phases, with each test phase also
having its own test plan document.

Components of the Test Plan Document

 Test Plan id
 Introduction
 Test items
 Features to be tested
 Features not to be tested
 Test techniques
 Testing tasks
 Suspension criteria
 Features pass or fail criteria
 Test environment (Entry criteria, Exit criteria)
 Test deliverables
 Staff and training needs
 Responsibilities
 Schedule

Test Plan Name
Project Name :
Author :
Revision Number :
Date :
Document Revision History
(Version number, Revision date, Revision by, Summary)
Table of Contents
1.Introduction

2.Project Overview

3.Test Goals and Approach

 Goals
 Testing Approach and Execution
 Test Requirements & Setup

4. Test Requirements and Environment Setup/Tools

 Test Requirements
 Environment Setup
 Tools

5.Test Schedule and Resources Requirements/Roles and Responsibilities

 Staffing Requirements
 Equipment Requirements
 Schedule Milestones

6.Dependencies, Assumptions, Risks and Mitigations

 Dependencies
 Assumptions
 Risks

7.Test Deliverables

 Entry and Exit Criteria

8.Test Design

 Functional Test Cases


 System Test Cases
 Regression Test Cases
 Performance Test Cases
 Negative Test Cases

9. Approval

10. Appendix
Test Types

 Functional Testing
 Non-Functional Testing

Levels of Testing

 Unit Testing/Component Testing/Module Testing/Assembly Testing


 Integration Testing
 System Integration Testing
 User Acceptance Testing

Functional Testing

Unit Test

Testing the individual software components or modules, typically done by the programmer and not by testers, as it
requires detailed knowledge of the internal program design and code.

Integration Testing

Testing a group of software components or modules together, typically done by the programmer and not by
testers, as it requires detailed knowledge of the internal program design and code.

System Integration Testing

Evaluating the behavior of the whole system against the requirements, using dummy test data in a test
environment. Unit and integration testing are prerequisites for SIT.

User Acceptance Testing

Evaluating the behavior of the whole system against the requirements, carried out from the end user's perspective
with real data and environment. This is the final step before rolling out the application; SIT is a prerequisite for
UAT.

Retesting and Regression Testing

Retest: re-executing a test case to check whether a bug has been fixed.
Regression test: re-executing test cases to check whether a bug fix has introduced any new bugs.

 Unit Regression – done after the initial test cycle completes
 Regional Regression – done in each test cycle after reported bugs are fixed
 Full Regression – the final regression cycle before delivering the product

Ad-hoc and Monkey Testing

Evaluating the system with random and fully negative scenarios, without formal test cases, to ensure it does not
crash. Monkey testing is more or less the same as ad-hoc testing.

Compatibility testing

It is non-functional software testing that evaluates the application's compatibility with its computing environment.
Compatibility testing can be automated or manual, and covers the following:
 Hardware Platform (IBM 360, HP 9000, etc.)
 Bandwidth handling capacity of networking hardware
 Compatibility of peripherals (Printer, DVD drive, etc.)
 Operating systems (MVS, UNIX, Windows, etc.)
 Database (Oracle, Sybase, DB2, etc.)
 Other System Software (Web server, networking/ messaging tool, etc.)
 Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)

Alpha testing

Testing done at the end of development, typically in-house by internal users or testers. Minor design changes may still be made as a result of such testing.

Beta testing

Testing typically done by end users or others at the end of the testing process; it is the final testing before
releasing the application commercially.

End-to-end testing

Similar to system testing, involves testing of a complete application environment in a situation that mimics real-
world use, such as interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

Usability testing

A user-friendliness check: the application flow is tested, whether a new user can understand the application easily,
and whether proper help is documented wherever a user might get stuck. Basically, system navigation is checked in this testing.

Cause Effect Graph

A graphical representation of inputs and the associated outputs effects which can be used to design test cases.

Code Coverage

An analysis method that determines which parts of the software have been executed (covered) by the test case
suite and which parts have not been executed and therefore may require additional attention

Code Inspection:

A formal testing technique where the programmer reviews source code with a group who ask questions analyzing
the program logic, analyzing the code with respect to a checklist of historically common programming errors, and
analyzing its compliance with coding standards

Code Walkthrough

A formal testing technique where source code is traced by a group with a small set of test cases, while the state of
program variables is manually monitored, to analyze the programmer's logic and assumptions

Emulator: Any device, computer program, or system that accepts the inputs and produces the same outputs as a
given system

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

Defect life Cycle

Defect: anything that causes the software to produce an incorrect result is called a bug or defect. The defect life cycle states are:

 New
 Open
 Assign
 Reject/Deferred (Reject or Accept or Deferred)
 Re-Open (If it's rejected, tester feels it’s a valid bug then "Re-open")
 Fixed
 Re-Test
 Closed

1) Initially the status is "New" in QC.
2) Once the bug is entered in QC, the status changes to "Open" and a Defect ID is generated.
3) The bug is then assigned to the concerned developer (by his/her QC ID), and the status changes to "Assigned".
4) Once the bug is assigned, the developer is notified through an email alert. There are three possible actions at
the developer's end:
(i) "Reject" if it is an invalid bug.
(ii) "Accept" if it is a valid bug.
(iii) "Deferred" if more discussion and clarification are needed.
5) If the bug is rejected, the status changes to "Rejected" in QC, and the tester needs to verify it.
6) If the tester finds the bug is valid, s/he can "Re-open" it and attach screenshots as proof.
7) Once the bug is fixed, the status changes to "Fixed". Testers need to retest and verify the bug to check whether
it is really fixed.
8) Once the testers have verified the fix, they change the status to "Closed".
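The status workflow above can be expressed as a small transition table, which is also a handy way to check that a defect-tracking process is being followed. This is a hedged sketch of the transitions just described, not the exact rule set of QC/ALM:

```python
# Allowed defect-status transitions, derived from the workflow steps above.
ALLOWED = {
    "New":      {"Open"},
    "Open":     {"Assigned"},
    "Assigned": {"Rejected", "Deferred", "Fixed"},
    "Rejected": {"Re-Open", "Closed"},
    "Deferred": {"Assigned"},
    "Re-Open":  {"Assigned"},
    "Fixed":    {"Re-Test"},
    "Re-Test":  {"Closed", "Re-Open"},
}

def move(status, new_status):
    """Advance a defect to a new status, rejecting illegal jumps."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# Happy path: New -> Open -> Assigned -> Fixed -> Re-Test -> Closed
status = "New"
for nxt in ["Open", "Assigned", "Fixed", "Re-Test", "Closed"]:
    status = move(status, nxt)
print(status)   # Closed
```

Jumping straight from "New" to "Fixed" would raise a ValueError, mirroring how a tracking tool prevents skipped steps.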

Sample Bug Report

Defect Name: Application crash on clicking the SAVE button while creating a new user.
Defect ID: (It will be automatically created by the BUG Tracking tool once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity:
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open
Environment: Windows 2003/SQL Server 2005

Priority and Severity

Priority – indicates how soon the bug should be fixed; it is a scheduling attribute based on project priorities.
Severity – indicates how serious the bug is; it is based on the impact on functionality.
Ex: if the company name is misspelled on a web page, the priority is HIGH but the severity is LOW.

Types of Priority:
 Critical
 High
 Medium
 Low

Types of Severity:
 Blocker/Show stopper: No further testing work can be done.
 Critical: Application crash, Loss of data.
 Major: Major loss of function.
 Minor: minor loss of function.
 Trivial: Some UI enhancements.
 Enhancement: Request for new feature or some enhancement in existing one.

Difference between QC10.0 and ALM11.5

Quality Center Modules

 Management
 Requirements
 Business Components
 Test Plan
 Test Resources
 Test Lab
 Defect
 Dashboard

ALM-Application Lifecycle Management

 Dashboard
o Analysis View
o Dashboard view
 Management
o Release
o Libraries
 Requirements
o Requirements
o Business Models
 Testing
o Test Resources
o Business Components
o Test Plan
o Test Lab
 Defects

Verification and Validation

Verification checks that the product is being built correctly against its specifications and design (through reviews, walkthroughs, and inspections); it is typically done by developers.
Validation checks that the built system actually behaves as the user requires (by executing the software); it is typically done by testers.

Static and Dynamic Testing

Static testing corresponds to verification (examining documents and code without executing them), while dynamic testing corresponds to validation (executing the code).

What are Exploratory Testing and When to do?

Testing the product against requirements is called "software testing"; testing the product without requirements is
called "exploratory testing": understand the application, explore the product, identify all possible scenarios, write
the test cases, and test. Exploratory testing is used:

 Whenever there are no requirements
 When requirements exist but are not understandable
 When requirements exist and are understandable, but there is no time

When to stop testing?


This can be difficult to determine. Many modern software applications are so complex and run in such an
interdependent environment, that complete testing can never be done. Common factors in deciding when to stop
are...

 Deadlines, e.g. release deadlines, testing deadlines;


 Test cases completed with certain percentage passed;
 Test budget has been depleted;
 Coverage of code, functionality, or requirements reaches a specified point;
 Bug rate falls below a certain level; or
 Beta or alpha testing period ends.

What if there isn't enough time for thorough testing?


Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible
aspect of an application, every possible combination of events, every dependency, or everything that could go
wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common
sense, and experience.

Software Testing Estimation

Test estimation should be realistic and accurate; bad estimation can lead to poor distribution of work. Experience
plays a major role in estimating "software testing effort", and working on varied projects helps in preparing
accurate estimates for the testing cycle. Obviously, one cannot just blindly put some number of days against any testing task.

For any software testing estimation technique, it is highly recommended that following factors should be taken
into account:

 Domain Knowledge and core requirements


 Risks and complexity of the application
 Team Knowledge on the subject/skills

 Historical data for the previous estimation for improvement and accuracy
 Estimation should include buffer time
 Bug cycles for the project
 Resources availability (Like vacations, holidays, and sick days can have a great impact on your estimates)

Following are the different popular Test Estimation Techniques

 3-Point Method
 Use – Case Point Method:
 Work Breakdown Structure
 Wideband Delphi technique
 Function Point/Testing Point Analysis
 Percentage of development effort method
 Percentage distribution
 Best Guess
 Ad-hoc method
 Experience Based

1) 3-Point Method

It is based on statistical methods: each testing task is broken down into subtasks, and three estimates are made
for each task. The formula used by this technique is:

Test Estimate = (P + 4*N + E) / 6

where

 P = Positive Scenarios, or the Optimistic Estimate (best-case scenario, in which nothing goes wrong and all
conditions are optimal)
 N = Negative Scenarios, or the Most Likely Estimate (the most likely duration; some problems may occur,
but most things will go right)
 E = Exceptional Scenarios, or the Pessimistic Estimate (worst-case scenario, in which everything goes wrong)

The standard deviation for the technique is calculated as:

Standard Deviation (SD) = (E – P) / 6
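As a quick illustration, the two formulas can be computed together (note the standard deviation here uses the standard PERT form, pessimistic minus optimistic over six; the sample hours are made up):

```python
def three_point_estimate(p, n, e):
    """3-point (PERT) estimate: p = optimistic, n = most likely, e = pessimistic."""
    estimate = (p + 4 * n + e) / 6
    std_dev = (e - p) / 6          # standard PERT spread
    return estimate, std_dev

# Optimistic 2 h, most likely 3 h, pessimistic 5 h:
est, sd = three_point_estimate(p=2, n=3, e=5)
print(round(est, 3), sd)   # 3.167 0.5
```

The estimate weights the most likely case four times as heavily as either extreme, which is why a single pessimistic outlier does not dominate the result.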

2) Use-Case Point Method:

Use-Case Point Method is based on the use cases where we calculate the unadjusted actor weights and unadjusted
use case weights to determine the software testing estimation.

A use case is a document that specifies the different users, systems, or other stakeholders interacting with the
application; these are named "Actors". The interactions accomplish defined goals, protecting the interests of all
stakeholders, through different behaviours or flows termed scenarios.

The formula used for this technique is:

 Unadjusted actor weights = total no. of actors (positive, negative and exceptional)
 Unadjusted use case weight = total no. of use cases.
 Unadjusted use case point = Unadjusted actor weights + Unadjusted use case weight
 Determine the technical/environmental factor (TEF) ( if not available take as 0.50)
 Adjusted use case point = Unadjusted use case point * [0.65 + (0.01 * TEF)]
 Total Effort = Adjusted use case point * 2
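The steps above reduce to a few lines of arithmetic. A minimal sketch (the actor and use-case counts are made up for illustration):

```python
def use_case_points(actors, use_cases, tef=0.50):
    """Use-Case Point estimate following the steps listed above.
    tef defaults to 0.50 when the technical/environmental factor is unavailable."""
    unadjusted = actors + use_cases                 # unadjusted actor weights + use case weights
    adjusted = unadjusted * (0.65 + 0.01 * tef)     # adjusted use case points
    return adjusted * 2                             # total effort

print(use_case_points(actors=10, use_cases=20))    # ≈ 39.3
```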

3) Work Breakdown Structure:

It is created by breaking the test project down into small pieces: modules are divided into sub-modules,
sub-modules into functionalities, and functionalities into sub-functionalities.

Review all the requirements from the requirement document to make sure they are covered in the WBS. Then
figure out the number of tasks your team needs to complete, and estimate the duration of each task.

Test Estimation Template (efforts in person-hours)

Phase          | Task ID | Task Description                                        | Best Case | Worst Case | Normal Case | Expected Hrs | Actual Hrs
Test Planning  | 1       | Study Specifications                                    | 2         | 5          | 3           | 3.167        |
               |         | Determine types of tests to be executed                 | 0.5       | 1          | 1           | 1.5          |
               |         | Determine Test Environment                              | 1         | 2          | 1           | 2            |
               |         | Estimate Testing Project Size, Effort, Cost & Schedule  | 2         | 4          | 3           | 3.5          |
               |         | Review of Estimation & Approval of Estimation           | 1         | 2          | 1.5         | 2            |
Test Design    | 2       | Design Test Cases for Module 1                          | 5         | 8          | 6           | 7.5          |
               |         | Design Test Cases for Module 2                          | 6         | 9          | 8           | 7.5          |
Test Execution | 3       | Execute Test Cases for Module 1                         | 20        | 30         | 25          | 25           |
               |         | Execute Test Cases for Module 2                         | 40        | 60         | 50          | 50           |
Defect Report  | 4       | Report Defects for Module 1                             | 1         | 2          | 0.05        | 1            |
               |         | Report Defects for Module 2                             | 2         | 0.5        | 1           | 2            |
Test Report    | 5       | Preparing test report                                   | 2         | 3          | 1           | 4            |
Total          |         |                                                         | 90        | 120        | 100         | 110          |

11
Performance Testing (LSsSSV)

Performance testing determines how fast some aspect of a system performs under a particular workload. It is
non-functional testing; the main goals are to improve the user experience and increase revenue. "System" here
means any kind of system: a computer, network, software program, or device.

Performance test Tool: Load Runner

Types of Performance Testing


1. Load Test – verifying the system's behavior under normal and peak load conditions.
2. Stress Test – verifying the system's behavior beyond normal or peak load conditions.
3. Spike Test – verifying the system's behavior while repeatedly increasing the load in short bursts, beyond
normal or peak load conditions.
4. Scalability Test – verifying how far the system can scale (users, transactions, data) before performance
degrades.
5. Soak/Stability/Endurance Test – verifying the system's behavior over a long period of time. For example,
where a system is designed to run for 10 hours, run it for 20 hours to check its staying power (e.g., charge
endurance on a mobile device).
6. Volume Test – verifying the system's behavior with large amounts of data, e.g., database volumes.
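A toy load test can be sketched with nothing but the standard library. This illustrates the load/stress idea only; the `operation` stub stands in for a real transaction, and a real test would use a dedicated tool such as LoadRunner:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def operation():
    """Stand-in for a real network call or transaction."""
    time.sleep(0.01)
    return "ok"

def load_test(users):
    """Fire `users` concurrent requests and return their individual response times."""
    def timed():
        start = time.perf_counter()
        operation()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(timed) for _ in range(users)]
        return [f.result() for f in futures]

times = load_test(users=20)   # "normal load"; raise `users` toward a stress test
print(f"avg {sum(times)/len(times):.4f}s  max {max(times):.4f}s")
```

Ramping `users` up until response times degrade approximates a stress test; holding a moderate load for hours approximates a soak test.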

Performance Testing Process/Methodology

1) Identify Testing Environment

Do a proper requirement study, analyzing the test goals and objectives, and determine the testing scope along
with a test initiation checklist. Identify the logical and physical production architecture for performance testing,
and identify the software, hardware, and network configurations required to kick off the testing. Compare the
test and production environments while identifying the testing environment, resolve any environment-related
concerns, and analyze whether additional tools are required. This step also helps to identify the probable
challenges a tester may face during performance testing.

2) Identify Performance Acceptance Criteria

Identify the desired performance characteristics of the application like Response time, Throughput and Resource
utilization.

3) Plan & Design Tests

Planning and designing performance tests involves identifying key usage scenarios, determining appropriate
variability across users, identifying and generating test data, and specifying the metrics to be collected. Ultimately,
these items provide the foundation for workloads and workload profiles. The output of this stage: the
prerequisites for test execution are ready, along with all required resources, tools, and test data.

4) Configure Test Environment

Prepare the conceptual strategy, available tools, and designed tests, along with the testing environment, before
execution. The output of this stage is a configured load-generation environment and resource-monitoring tools.

5) Implement Test Design

Create your performance tests according to the test plan and design.

6) Execute Tests

 Collect and analyze the data.
 Investigate problems such as bottlenecks (memory, disk, processor, process, cache, network, etc.) and
resource usage (memory, CPU, network, etc.).
 Generate performance analysis reports containing all performance attributes of the application.
 Based on the analysis, prepare a recommendation report.
 Repeat the above tests for each new build received from the client after bugs are fixed and
recommendations implemented.

7) Analyze Results, Report, and Retest

Consolidate, analyze, and share the test results. Based on the test report, re-prioritize and re-execute the tests.
When every test result is within the specified metric limits and all results fall between the threshold limits, testing
of that scenario on that particular configuration is complete.

Common Performance Problems:

Speed is one of the most important attributes of an application; users will not be happy working with a slow
system. Performance testing uncovers the performance bottlenecks and defects that must be addressed to retain
the interest and attention of users. Here is a list of the performance problems most commonly observed in
software systems:

 Poor response time


 Long Load time
 Bottlenecking
 Poor scalability
 Software configuration issues (for the Web server, load balancers, databases etc.)
 Disk usage
 Operating System limitations
 Poor network configuration
 Memory utilization
 CPU utilization and Insufficient hardware resources

Entry and Exit Criteria

Entry and exit criteria are required to start and end testing, and they are a must for the success of any project: if
you do not know where to start and where to finish, your goals are not clear. By defining entry and exit criteria you
define your boundaries. For instance, you can define as entry criteria that the customer should provide the
requirement document or acceptance plan; if these criteria are not met, you do not start the project. At the other
end, you can define exit criteria for the project: a common exit criterion is that the customer has successfully
executed the acceptance test plan.

Entry Criteria

 Functional and business requirements are clear, confirmed, and approved.
 All developed code is unit tested; unit and integration testing are completed and signed off by the
development team.
 Test plan and test cases reviewed and approved.
 Test environment/testware prepared.
 Test data available.
 Application available.
 QA/testers have sufficient knowledge of the application/domain.
 Resources ready.

Exit Criteria

 All test cases have been executed and passed.
 Test cases completed with a certain percentage passed.
 Test budget depleted.
 Coverage of code/functionality/requirements reaches a specified point.
 All defects are fixed, retested, and closed.
 Closure reports are signed off.
 Alpha and beta testing periods end.
 The remaining risk in the project is within acceptable limits.

Root Cause Analysis (RCA)

RCA is a mechanism of analyzing the defects, to identify its cause. We brainstorm, read and dig the defect to
identify whether the defect was due to “testing miss”, “development miss” or was a “requirement or designs
miss”. Doing the RCA accurately helps to prevent defects in the later releases or phases. If we find, that a defect
was due to design miss, we can review the design documents and can take appropriate measures. Similarly if we
find that a defect was due to testing miss, we can review our test cases or metrics, and update it accordingly.

RCA should not be limited to testing defects; we can do RCA on production defects as well. Based on the outcome
of RCA, we can enhance our test bed and include those production tickets as regression test cases, to ensure that
the same or similar defects are not repeated. There are many factors which provoke defects to occur, and these
factors should always be kept in mind while kicking off the RCA process:

 Incorrect requirements
 Incorrect design
 Incorrect coding
 Insufficient testing
 Environment issues ( Hardware, software or configurations)

There is no defined process for doing RCA. It basically starts and proceeds with brainstorming on the defect. The
only questions we ask ourselves while doing RCA are "why?" and "what?". We can dig into each phase of the life
cycle to track where the defect actually originated.

5 Why’s Technique - DMAIC

 Define
 Measure
 Analyze
 Improve
 Control

Cause & Effect or Fishbone Diagram

1. Example Problem Statement: You are on your way home from work and your car stops in the middle of the
road.

1. Why did your car stop?


Because it ran out of gas.

2. Why did it run out of gas?


Because I didn’t buy any gas on my way to work.

3. Why didn’t you buy any gas this morning?


Because I didn’t have any money.

4. Why didn’t you have any money?


Because I lost it all last night in a poker game.

5. Why did you lose your money in last night’s poker game?
Because I’m not very good at “bluffing” when I don’t have a good hand.

2. Example Problem Statement: the FlipKart online retailer ran a big sale, but sales were much lower than expected.

Q: Why were sales so low?
A: Because customers couldn't complete their orders and left in frustration.

Q: Why couldn't customers complete their orders on our website?
A: Because demand was much higher than we anticipated, and the traffic spike crashed our site.

Q: Why did the traffic spike crash our site?
A: Because our front-end software couldn't handle the demand.

Q: Why couldn't it handle the demand?
A: Because our performance testing was insufficient.

Q: Why was our testing insufficient, and how do we fix this problem?
A: We failed to test for a high volume of concurrent orders, and we need to fix our software to be able to handle
such demands.

Defect leakage and how to prevent

What is defect leakage?

Defect leakage is a software testing metric that measures the number of defects that went undetected into the
next phase(s). It is an important metric indicating the efficiency of the process, test procedures, and testing
phases. Defect leakage can be calculated across different phases within the SDLC; however, it is usually calculated
for the number of defects that went undetected into the production environment.

Unit level-> Component level ->Integration level -> System level -> Customer level -> End user level
Formula for any level

Number of defects found in current phase that were supposed to be found in previous phase
Defect leakage = ------------------------------------------------------------------------------------------------------------------------- X 100
Number of defects found in the previous phase

Ex: If 40 defects were found during product testing and 5 defects were found in UAT and 2 in production
environment then defect leakage % will be 17.5 % ((5+2)/40*100)
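The formula and the example above in code (the figures are the ones from the text):

```python
def defect_leakage(found_in_later_phases, found_in_phase):
    """Defects that escaped a phase, as a percentage of the defects caught in it."""
    return found_in_later_phases * 100 / found_in_phase

# 40 defects found in product testing; 5 escaped to UAT and 2 to production:
print(defect_leakage(5 + 2, 40))   # 17.5
```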

Prevent Defect leakage

 Clearly understand the BRS and SRS and get them clarified; do not assume things, as your thinking and the
client's might differ.
 Get sign-off on the test cases and on the SRS, so that nothing crops up later as defect leakage.
 Ensure the entire functionality of the application has been tested, with the help of a traceability matrix.
 Mimic the test bed environment to be similar to the customer environment.
 Do regression testing, to check whether anything has broken.

Defect Density

Defect Density is the number of confirmed defects detected in software/component during a defined period of
development/operation divided by the size of the software/component

DD=No of defect/PP Size
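For instance, assuming size is measured in KLOC (any consistent size unit, such as function points, works the same way):

```python
def defect_density(defects, size):
    """Confirmed defects divided by the size of the software/component."""
    return defects / size

print(defect_density(defects=30, size=15))   # 2.0 defects per KLOC
```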

Common Questions and Answers

Difference between authentication and authorization

Authentication is the process of verifying the identity of a user by obtaining some sort of credentials and using
those credentials to verify the user's identity. If the credentials are valid, the authorization process starts:
authorization determines whether the authenticated user is allowed to access a given resource or perform a given
action. Authentication always precedes authorization.

Ex: HDFC net banking login process.
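A minimal sketch of the two steps. The hard-coded user store, password, and role names are invented for illustration; a real system would check a database or identity provider:

```python
# Illustrative in-memory user store (never store plain-text passwords in practice).
USERS = {"alice": {"password": "s3cret", "roles": {"viewer", "payments"}}}

def authenticate(username, password):
    """Step 1 - verify the user's identity from credentials."""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username, required_role):
    """Step 2 - runs only after authentication succeeds."""
    return required_role in USERS[username]["roles"]

if authenticate("alice", "s3cret") and authorize("alice", "payments"):
    print("transfer allowed")
```

In the net-banking example, the login screen performs authentication, while the check that this particular customer may initiate a transfer is authorization.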

Agile Methodology

Agile is an iterative and incremental software development methodology. It breaks work into small tasks with
minimal up-front planning and does not directly involve long-term planning. Each iteration includes requirement
analysis, planning, design, coding, testing, and documentation. It is very effective where requirements are
changing dynamically.

There are two popular methods:

Scrum: each iteration, called a sprint, is time-boxed, typically two weeks to one month. The client prioritizes which
requirements he wants first. If the team does not complete everything committed for a sprint, the remaining work
is transferred to the next sprint (delivered in the next build); the team cannot extend the time decided for a
sprint. It is fixed.

Extreme Programming (XP): here the iteration period is shorter, typically one to two weeks. The developers
prioritize what to do first on the basis of client requirements. The duration fixed for an iteration can be extended
if some development work is still pending, so that the build is deployed with everything the client needs. Thus the
iteration period is not fixed and can grow, but the iteration should meet the entire client's requirement in that
build. More attention is required for testing in XP.

The responsibilities of the traditional project manager role are split up among these three Scrum roles.

1. Product Owner
2. Scrum Master
3. Team

Scrum has five meetings:

1. PBI Refinement Meeting (PBI-Product Backlog Item)


2. Sprint Plan Meeting
3. Scrum Meeting
4. Sprint Review Meeting
5. Sprint Retrospective Meeting

Diagram of Agile model:

Advantages of Agile model:

 Customer satisfaction by rapid, continuous delivery of useful software.


 People and interactions are emphasized rather than process and tools. Customers, developers and testers
constantly interact with each other.
 Working software is delivered frequently (weeks rather than months).
 Face-to-face conversation is the best form of communication.
 Close daily cooperation between business people and developers.
 Continuous attention to technical excellence and good design.
 Regular adaptation to changing circumstances.
 Even late changes in requirements are welcomed

Disadvantages of Agile model:

 In case of some software deliverables, especially the large ones, it is difficult to assess the effort required at
the beginning of the software development life cycle.
 There is a lack of emphasis on necessary design and documentation.
 The project can easily get taken off track if the customer representative is not clear about the final outcome
they want.
 Only senior programmers are capable of taking the kind of decisions required during the development
process. Hence it has no place for newbie programmers, unless combined with experienced resources.

When to use agile model:

 When new changes need to be implemented. The freedom agile gives to change is very important; new
changes can be implemented at very little cost because of the frequency of new increments that are
produced.
 To implement a new feature the developers need to lose only the work of a few days, or even only hours, to
roll back and implement it.
 Unlike the waterfall model, in the agile model very limited planning is required to get started with the project.
Agile assumes that end users' needs are ever changing in a dynamic business and IT world. Changes can
be discussed and features can be added or removed based on feedback. This effectively gives the
customer the finished system they want or need.
 Both system developers and stakeholders alike find they also get more freedom of time and options than if
the software was developed in a more rigid sequential way. Having options gives them the ability to leave
important decisions until more or better data, or even entire hosting programs, are available, meaning the
project can continue to move forward without fear of reaching a sudden standstill.

Velocity Calculation

Velocity is a useful planning tool for estimating how fast work can be completed and how long it will take to
complete a project. The metric is calculated by reviewing work the team successfully completed during previous
sprints; for example, if the team completed 10 stories during a two-week sprint and each story was worth 3 story
points, then the team's velocity is 30 story points per sprint.

Generally, velocity remains somewhat constant during a development project, which makes it a useful metric for
estimating how long it will take a team to complete a software development project. If the product backlog has
300 story points, and the team is averaging 30 story points per sprint, it can be estimated that team members will
require 10 more sprints to complete work. If each sprint lasts two weeks, then the project will last 20 more weeks.
If a team member is moved to another project, however, or new members are added, the velocity must be
recalculated.
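The arithmetic above can be sketched in a few lines of Python; the figures mirror the example in the text:

```python
def velocity(stories_completed: int, points_per_story: int) -> int:
    """Velocity = total story points completed in one sprint."""
    return stories_completed * points_per_story

def sprints_remaining(backlog_points: int, avg_velocity: int) -> int:
    """Whole sprints needed to burn down the remaining backlog."""
    # Round up: a partially filled sprint still counts as a sprint.
    return -(-backlog_points // avg_velocity)

v = velocity(10, 3)                  # 10 stories x 3 points = 30 points per sprint
sprints = sprints_remaining(300, v)  # 300 backlog points / 30 = 10 sprints
weeks = sprints * 2                  # two-week sprints -> 20 more weeks
print(v, sprints, weeks)             # 30 10 20
```

If team composition changes, simply recompute the average velocity from the most recent sprints before forecasting again.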

Credit Card Payment Process

What Is a Payment Gateway?

A payment gateway is an e-commerce application that authorizes payments for e-businesses, online retailers,
bricks and clicks, or traditional brick and mortar businesses. It is the virtual equivalent of a physical point of sale
terminal located in most retail outlets. Payment gateways encrypt sensitive information, such as credit card
numbers, to ensure that information passes securely between the customer and the merchant.

How Do Payment Gateways Work?

A payment gateway facilitates the transfer of information between a payment portal (such as a website or mobile
phone) and the front-end processor or acquiring bank. Here is a step-by-step guide detailing how payment
gateways work:

What is SSL?

Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), are the most widely deployed security
protocols used today. Essentially, they provide a secure channel between two machines operating over the
Internet or an internal network. In today's Internet-focused world, the SSL protocol is typically used when a web
browser needs to securely connect to a web server over the inherently insecure Internet.

Technically, SSL is a transparent protocol which requires little interaction from the end user when establishing a
secure session. In the case of a browser, for instance, users are alerted to the presence of SSL when the browser
displays a padlock or, in the case of Extended Validation SSL, when the address bar displays both a padlock and a
green bar. This is the key to the success of SSL – it is an incredibly simple experience for end users.

Extended Validation (EV) SSL Certificates (such as GlobalSign ExtendedSSL) display visible trust indicators:

Standard SSL Certificates (such as GlobalSign DomainSSL and OrganizationSSL) display:

As opposed to unsecured HTTP URLs, which begin with "http://" and use port 80 by default, secure HTTPS URLs
begin with "https://" and use port 443 by default.
HTTP is insecure and is subject to eavesdropping attacks which, if critical information like credit card details and
account logins is transmitted and picked up, can let attackers gain access to online accounts and sensitive
information. Sending or posting data through the browser over HTTPS ensures that such information is
encrypted and secure.
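The default-port rule can be demonstrated with Python's standard urllib; the URLs here are hypothetical:

```python
from urllib.parse import urlsplit

# Default ports defined by the HTTP and HTTPS URL schemes.
DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url: str) -> int:
    """Return the explicit port if one is given, else the scheme's default."""
    parts = urlsplit(url)
    return parts.port or DEFAULT_PORTS[parts.scheme]

print(effective_port("http://example.com/"))        # 80
print(effective_port("https://example.com/"))       # 443
print(effective_port("https://example.com:8443/"))  # 8443 (explicit override)
```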
In practice, how is SSL used in today’s modern e-commerce enabled / online workflow and service society?

 To secure online credit card transactions.
 To secure system logins and any sensitive information exchanged online.
 To secure webmail and applications like Outlook Web Access, Exchange and Office Communications Server.
 To secure workflow and virtualisation applications like Citrix Delivery Platforms or cloud-based computing
platforms.
 To secure the connection between an email client such as Microsoft Outlook and an email server such as
Microsoft Exchange.
 To secure the transfer of files over HTTPS and FTPS services, such as website owners updating new pages to their
websites or transferring large files.
 To secure hosting control panel logins and activity like Parallels, cPanel, and others.
 To secure intranet-based traffic such as internal networks, file sharing, extranets, and database connections.
 To secure network logins and other network traffic with SSL VPNs such as VPN Access Servers or applications like
the Citrix Access Gateway.

Credit Card Processing

By understanding how credit card processing works, where the money gets made off of the transactions
themselves and where those hidden fees actually are, you can gain some valuable insight into how Host Merchant
Services is able to make its guarantee. Here’s a step-by-step breakdown that sheds light on where the fees from
each transaction come from:

How Does Credit Card Processing Work?

The way credit card processing companies make money for themselves can sometimes be a confusing labyrinth
where fees are hidden, percentages are tied to things not listed on statements and the deal you think you are
getting isn’t the best deal you can actually get. Host Merchant Services is dedicated to giving its merchants the
lowest price guaranteed, and the company strives to maintain transparency with no hidden fees. So take a walk
with us and see behind the curtain as you learn exactly where the money is being made when you swipe a
customer’s credit card.

Step One: A customer visits a store.

Step Two: Customer purchases $10 worth of merchandise.

Step Three: The customer swipes his credit card through a payment processing terminal such as a Hypercom
T4205 from Equinox Payments to pay for the merchandise.

Step Four: The card reader recognizes who the customer is and contacts the bank that issued the credit card.

Step Five: The customer’s bank sends $10 to the merchant’s bank.

Step Six: Then the merchant’s bank deposits $9.80 to the merchant’s bank account.

Step Seven: That remaining 20 cents, a 2% fee, is taken from the $10 and given to the customer’s bank.

Step Eight: The customer’s bank then splits the 20 cents with the credit card company.*

* Depending on the specific company, country and merchant, the percentage can range from 1% to 6%. The
amount the bank gets and the amount Visa gets is a negotiated deal. Also, Visa and MasterCard charge the banks
an annual fee to be a part of their network in the first place.
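The eight steps reduce to simple percentage arithmetic. A minimal Python sketch, assuming the 2% rate and an even fee split as in the example above (real rates and splits are negotiated per network, country and merchant):

```python
def settle(purchase: float, fee_rate: float = 0.02):
    """Split a card purchase into the merchant deposit and fee shares."""
    fee = round(purchase * fee_rate, 2)       # e.g. $10.00 * 2% = $0.20
    merchant_gets = round(purchase - fee, 2)  # merchant's deposit: $9.80
    # The issuing bank and the card network split the fee.
    # An even 50/50 split is assumed here purely for illustration.
    issuer_share = round(fee / 2, 2)
    network_share = round(fee - issuer_share, 2)
    return merchant_gets, fee, issuer_share, network_share

print(settle(10.00))  # (9.8, 0.2, 0.1, 0.1)
```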

Where Money Gets Made

Credit Card Companies make money in a variety of ways. Here are the four most common:

 One: The most common way credit card companies make money is through fees, such as annual fees,
over-limit fees and past-due fees.
 Two: Another way credit card companies make money is through interest on revolving loans if the card
balance is not paid in full each month.
 Three: As explained above, the card issuer (the bank that issued the card and/or the issuer network, be it
Visa, MasterCard, Discover) makes a percentage of each item you purchase from a merchant who accepts
your credit card. The rates range from 1% to 6% for each purchase.
 Four: The card issuer can also make money through ancillary avenues, such as selling your name to a
mailing list or selling advertisements along with your monthly billing statement.

Credit Card (AVMD-3456)

A credit card is a payment card issued to users as a system of payment. It allows the cardholder to pay for goods
and services based on the holder’s promise to pay for them. The issuer of the card creates a revolving account and
grants a line of credit to the consumer (or the user) from which the user can borrow money for payment to a
merchant or as a cash advance to the user.

Credit cards are issued by a credit card issuer, such as a bank or credit union, after an account has been approved
by the credit provider, after which cardholders can use it to make purchases at merchants accepting that card.
Merchants often advertise which cards they accept by displaying acceptance marks – generally on stickers
depicting the various logos for credit card companies like Visa, MasterCard and Discover. Sometimes the merchant
may skip the display and just communicate directly with the consumer, saying things like “We take Discover” or
“We don’t take credit cards”.

When a purchase is made, the credit card user agrees to pay the card issuer. The cardholder indicates consent to
pay by signing a receipt with a record of the card details and indicating the amount to be paid, or by entering a
personal identification number (PIN). Also, many merchants now accept verbal authorizations via telephone and
electronic authorization using the Internet, known as a card-not-present (CNP) transaction.

Electronic verification systems allow merchants to verify in a few seconds that the card is valid and the credit card
customer has sufficient credit to cover the purchase, allowing the verification to happen at time of purchase. The
verification is performed using a credit card payment terminal or point-of-sale (POS) system with a
communications link to the merchant’s acquiring bank. Data from the card is obtained from a magnetic stripe or
chip on the card; the latter system is implemented as an EMV card. For card not present transactions where the
card is not shown (e.g., e-commerce, mail order, and telephone sales), merchants additionally verify that the
customer is in physical possession of the card and is the authorized user by asking for information such as the
security code printed on the back of the card.

Each month, the credit card user is sent a statement indicating the purchases undertaken with the card, any
outstanding fees, and the total amount owed. After receiving the statement, the cardholder may dispute any
charges that he or she thinks are incorrect. The cardholder must pay a defined minimum portion of the amount
owed by a due date, or may choose to pay a higher amount up to the entire amount owed, which may be greater
than the amount billed. The credit issuer charges interest on the unpaid balance if the billed amount is not paid in
full (typically at a much higher rate than most other forms of debt). In addition, if the credit card user fails to make
at least the minimum payment by the due date, the issuer may impose penalties on the user.

Merchants

For merchants, a credit card transaction is often more secure than other forms of payment, because the issuing
bank commits to pay the merchant the moment the transaction is authorized, regardless of whether the consumer
defaults on the credit card payment. In most cases, cards are even more secure than cash, because they
discourage theft by the merchant’s employees and reduce the amount of cash on the premises. Finally, credit
cards reduce the back office expense of processing checks/cash and transporting them to the bank.

For each purchase, the bank charges the merchant a commission (discount fee) for this service and there may be a
certain delay before the agreed payment is received by the merchant. The commission is often a percentage of the
transaction amount, plus a fixed fee (interchange rate). In addition, a merchant may be penalized or have their
ability to receive payment using that credit card restricted if there are too many cancellations or reversals of
charges as a result of disputes. Some small merchants require credit purchases to have a minimum amount to
compensate for the transaction costs.

Costs to merchants

Merchants are charged several fees for accepting credit cards. The merchant is usually charged a commission of
around 1 to 3 percent of the value of each transaction paid for by credit card. The merchant may also pay a
variable charge, called an interchange rate, for each transaction.

Merchants must also satisfy data security compliance standards which are highly technical and complicated. In
many cases, there is a delay of several days before funds are deposited into a merchant’s bank account. Because
credit card fee structures are very complicated, smaller merchants are at a disadvantage to analyze and predict
fees.

Finally, merchants assume the risk of chargebacks by consumers.

For more information on how Credit Card Processing works, view our step-by-step guide here.

Interchange Rate

Interchange is a term used in the payment card industry to describe a fee paid between banks for the acceptance
of card based transactions. Usually it is a fee that a merchant’s bank (the “acquiring bank”) pays a customer’s bank
(the “issuing bank”).

In a credit card or debit card transaction, the card-issuing bank in a payment transaction deducts the interchange
fee from the amount it pays the acquiring bank that handles a credit or debit card transaction for a merchant. The
acquiring bank then pays the merchant the amount of the transaction minus both the interchange fee and an
additional, usually smaller fee for the acquiring bank or ISO, which is often referred to as a discount rate, an add-
on rate, or pass-through.

For cash withdrawal transactions at ATMs, however, the fees are paid by the card-issuing bank to the acquiring
bank (for the maintenance of the machine).

These fees are set by the credit card networks, and are the largest component of the various fees that most
merchants pay for the privilege of accepting credit cards. Visa, MasterCard, and Discover are each known as card
associations, and each card association has its own rate sheets, known as Interchange Reimbursement Fees.
These fees make up the majority of what you pay to your processor and they vary greatly depending on the card
type accepted.
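The deduction chain described above can be sketched numerically. The rates below are hypothetical placeholders, since actual interchange and discount rates come from the card networks' rate sheets:

```python
def merchant_net(amount: float, interchange_pct: float,
                 interchange_fixed: float, acquirer_markup_pct: float) -> float:
    """What the merchant receives after interchange and acquirer fees."""
    # Interchange (kept by the issuing bank): percentage plus a fixed fee.
    interchange = amount * interchange_pct + interchange_fixed
    # Acquirer/ISO markup, often called the discount or pass-through rate.
    markup = amount * acquirer_markup_pct
    return round(amount - interchange - markup, 2)

# Hypothetical: 1.8% + $0.10 interchange, 0.3% acquirer markup on a $100 sale
print(merchant_net(100.00, 0.018, 0.10, 0.003))  # 97.8
```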

Internet Payment Gateway and How It Works

Transaction Express® from TransFirst® makes payment processing possible via any device with an Internet connection.
It is a complete payment processing center that lets merchants accept credit cards and signature debit cards
through any Internet connection, with no terminal needed. As an online Internet payment gateway, it allows you
to process credit card orders from your website in real time; this way, your customer knows immediately whether
or not their credit card was approved.

The Transaction Express electronic payment gateway can be integrated with most websites and virtual shopping
carts to streamline online credit card processing. A shopping cart is usually used before the payment gateway. This
function allows your customers to pick and choose the various items they want to purchase from your website,
including options such as size, color, etc. At checkout the shopping cart totals the items, adds tax and shipping, and
collects the customer's shipping and billing information.

The payment gateway captures the credit card transaction, encrypts the transaction information, routes it to the
credit card processor and then returns either an approval or a decline notice. This is a seamless process and your
customer does not directly interact with the payment gateway as data is forwarded to the gateway via your
shopping cart and a secure connection.

There are three vital things that an online payment gateway does when a customer attempts to make a purchase
from your website using a credit card or a debit check card. These include: authorization, settling, and reporting.

Authorization

Any purchase made with a credit or debit card via a payment gateway must first be authorized by the credit card
issuer. The payment gateway checks that the credit card is acceptable. The gateway affords you a secure link
between you, your customer and your credit card processor. It also allows for fast and efficient transaction
processing with an average response time of 2 seconds.

Settling

At the end of the day, the Internet payment gateway groups all of your transactions together and sends them off
to your bank in a single batch. This process, known as settling, passes the transaction to your bank so that you
receive payment. TransFirst offers clients an auto-batch close service that automatically settles transactions at the
same time every day. If there are no transactions pending in the batch, it is not closed and no batch fee is
generated or charged. Once the funds settle, it normally takes two business days for you to see the funds
electronically deposited into your bank account.

Reporting

This process records your transactions and allows you to view them using the payment gateway's reporting
facilities. From here you can review them, print them or download them to your computer for further processing.
Transaction Express offers advanced report search capabilities, including customizable reporting with up to 5 user-
defined fields that drive the reports.

Unlimited Users

With an Internet payment gateway, an unlimited number of users can use the gateway at the same time, unlike a
terminal or software solution, where only one customer can check out or one operator can enter transactions at a
time. With an Internet payment gateway, you can have multiple users entering transactions from various locations,
all at the same time.

Credit Card Batch Processing

Batch processing is the settlement stage of credit card processing that begins after a transaction has been
authorized by the card-issuing bank. Once it sends the authorization code to the merchant, the bank places a hold
on the cardholder’s line of credit or account funds to cover the transaction amount.

How Batch Processing Works

Credit card terminals, processing software and electronic payment gateways store all of a merchant’s credit card
authorization codes in a data file, usually until the end of the business day, when they are all uploaded and
processed simultaneously in one batch.

When it’s time to settle (or close) the batch, the merchant transmits all the authorization codes to their credit card
processor, who sorts and forwards them on to the appropriate issuing banks. The banks release the funds to the
processor, who deposits them into the merchant’s bank account. This step is typically completed within 48 hours
of the transaction. The issuing bank bills the cardholder for the purchase on their monthly statement.

Merchants can close their batch at any time during the day, or even after each individual transaction. However,
credit card processors charge a fee each time the batch is closed, so it’s most cost effective to settle all
transactions at the same time and avoid multiple fees for the same service.
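The cost argument for settling once per day can be made concrete; the per-batch fee below is a hypothetical placeholder:

```python
FEE_PER_BATCH = 0.25  # hypothetical per-batch-close fee charged by the processor

def daily_batch_cost(batch_closes: int, fee: float = FEE_PER_BATCH) -> float:
    """Batch fees are charged per close, not per transaction."""
    return round(batch_closes * fee, 2)

# Closing after each of 40 transactions vs. one end-of-day batch:
print(daily_batch_cost(40))  # 10.0  -- one close per transaction
print(daily_batch_cost(1))   # 0.25  -- a single end-of-day settlement
```

Either way the same 40 transactions settle; only the number of batch closes, and therefore the fee, differs.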

EMV-Europay, MasterCard, Visa

EMV cards, also known as smart cards, were developed and backed by four of the major card brands. First
implemented in Europe, the cards rely on an embedded microchip to send and receive payment data with a
merchant’s EMV-enabled terminal or POS system.

The chips, only about 3 by 5 mm in size, transmit unique numbers to the payment processors each time the cards
are used. This increases security, since the customer's name and signature are not used or stored, making the
chip-based cards largely unaffected by breaches.

These cards have been used in Europe for more than a decade and have appeared in Canada as recently as two
years ago. So what’s holding the United States up? That’s right; you guessed it, the price tag. Javelin Strategy &
Research estimates the cost of deployment for EMV in the U.S. at about $8.6 billion. The major card brands,
however, have decided to make the push from the current magnetic stripe standard to the more secure form, EMV.

Amex joins the club

In late June, American Express announced that it would be joining Visa and MasterCard, in requiring the chip-based
cards. Visa began an aggressive push last year for EMV cards; the company claimed more than a million of the
cards were in circulation at the end of 2011. AmEx, however, will require they be implemented in April 2013,
instead of the 2015 mandate set by Visa and MasterCard.

Fraud Free

You may find yourself asking: at such a large implementation cost, are EMV cards really worth it? The answer is
yes! The savings come in the form of decreased fraud. The chip-embedded cards are much harder to duplicate
than their magnetic stripe counterparts. Criminals can easily modify or replace the information on mag-stripe
cards, whereas the signals EMV cards give off cannot be duplicated.

Fraud in the United States amounted to more than $3.56 billion in 2010. Globally, the U.S. contributed to about
27% of payment-card purchases, yet accounted for 47% of global payment-card fraud.

In summary, EMV cards are coming to the U.S. whether merchants want to accept them or not. The cost to
implement them may cause a bit of sticker shock, but the long-term benefits of virtually eliminating card fraud
heavily outweigh it. The decreased fraudulent charges will eventually translate into more savings for you, the
merchant.

Types of Banking

1. Retail Banking – Which deals directly with individuals and small businesses
2. Corporate Banking – Which deals with large business entities

Database and ETL Testing

SQL Statements
 SQL SELECT
 SQL INSERT
 SQL UPDATE
 SQL DELETE
 SQL CREATE TABLE
 SQL ALTER TABLE
 SQL RENAME
 SQL TRUNCATE
 SQL DROP

SQL Clauses
 SQL WHERE
 SQL ORDER BY
 SQL GROUP BY
 SQL HAVING

SQL Operators
 SQL Logical Operators
 SQL Comparison Operators
 SQL LIKE, IN, ISNULL, BETWEEN & AND

SQL Integrity Constraints


 Primary Key Constraint
 Foreign Key Constraint
 Not Null Constraint
 Unique Key Constraint
 Check Constraint

SQL Other Topics


 SQL Commands
 SQL Aliases
 SQL Group Functions
 SQL JOINS
 SQL VIEWS
 SQL Subquery
 SQL Index
 SQL GRANT, REVOKE

Links

http://beginner-sql-tutorial.com/sql-select-statement.htm

SQL Constraints

 Primary Key
 Foreign Key
 Not Null
 Unique
 Check

Difference between Primary and Unique Key

Both Primary Key and Unique Key enforce uniqueness of the values (i.e. they prevent duplicate values) in the
column on which they are defined.

Primary Key

 It doesn't allow NULL values; because of this we can say PK = Unique + Not Null.
 We can have only one primary key in a table.

Unique Key

 Allows a NULL value, but only one NULL value (in SQL Server).
 We can have more than one unique key in a table.

Example: define a primary key and a unique key

CREATE TABLE Employee
(
EmpID int PRIMARY KEY, --define primary key
Name varchar (50) NOT NULL,
MobileNo int UNIQUE, --define unique key
Salary int NULL
)
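These semantics can be exercised with a quick script. The sketch below uses Python's built-in sqlite3 module purely as a convenient engine (the document's examples target SQL Server; SQLite behaves the same here except where noted in the comments):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Employee (
    EmpID    INTEGER PRIMARY KEY,  -- unique + not null
    Name     TEXT NOT NULL,
    MobileNo INTEGER UNIQUE,       -- unique, but NULL is allowed
    Salary   INTEGER
)""")
con.execute("INSERT INTO Employee VALUES (1, 'Amit', 111, 12000)")

# A duplicate primary key value is rejected:
try:
    con.execute("INSERT INTO Employee VALUES (1, 'Mohan', 222, 15000)")
except sqlite3.IntegrityError as e:
    print("PK violation:", e)

# A NULL in the UNIQUE column is fine.
# (Note: SQLite permits multiple NULLs here; SQL Server allows only one.)
con.execute("INSERT INTO Employee VALUES (2, 'Avin', NULL, 14000)")
print(con.execute("SELECT COUNT(*) FROM Employee").fetchone()[0])  # 2
```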

Joins

Inner Joins (Multiple Tables)

-- Orders is used instead of Order, since ORDER is a reserved word
Select * from Emp
INNER JOIN Orders On Emp.Emp_ID = Orders.Emp_ID
INNER JOIN Book On Book.Book_ID = Orders.Book_ID
INNER JOIN Book_Author On Book_Author.Book_ID = Book.Book_ID
INNER JOIN Author On Author.Author_id = Book_Author.Author_id;

The general pattern:

SELECT t1.col, t3.col FROM table1
JOIN table2 ON table1.primarykey = table2.foreignkey
JOIN table3 ON table2.primarykey = table3.foreignkey
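The multi-table inner-join pattern can be run end to end with Python's sqlite3 module and small hypothetical tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Emp   (Emp_ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Book  (Book_ID INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE Orders(Order_ID INTEGER PRIMARY KEY, Emp_ID INTEGER, Book_ID INTEGER);
INSERT INTO Emp    VALUES (1, 'Amit');
INSERT INTO Book   VALUES (10, 'SQL Basics');
INSERT INTO Orders VALUES (100, 1, 10);
""")

# Each JOIN clause links one more table through its key columns.
rows = con.execute("""
    SELECT e.Name, b.Title
    FROM Emp e
    INNER JOIN Orders o ON e.Emp_ID  = o.Emp_ID
    INNER JOIN Book   b ON b.Book_ID = o.Book_ID
""").fetchall()
print(rows)  # [('Amit', 'SQL Basics')]
```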

Difference between delete, truncate and drop

 DELETE removes selected rows (it can take a WHERE clause), is logged row by row, and can be rolled back.
 TRUNCATE removes all rows from a table at once and cannot be filtered with a WHERE clause.
 DROP removes the table itself, including its structure, indexes and constraints.
Database Views

Views are virtual tables that are compiled at run time. The data associated with a view is not physically stored in
the view; it is stored in the base tables of the view. A view can be made over one or more database tables.
Generally we put those columns in a view that we need to retrieve/query again and again. Once you have created
the view, you can query it just like a table. We can also create indexes and triggers on a view.

Views serve a security purpose, since they restrict the user to viewing only some columns/fields of the table(s).
Views show only those columns that are present in the query used to create the view. One more advantage of
views is data abstraction, since the end user is not aware of all the data present in the database table.

Syntax for a view:

CREATE VIEW view_name
AS
select_statement

Two Types of Views

 System Defined Views
o Information Schema View
o Catalog View
o Dynamic Management View
 Server-scoped Dynamic Management View
 Database-scoped Dynamic Management View
 User Defined Views
o Simple View
o Complex View

1. System Defined Views

System defined Views are predefined Views that already exist in the Master database of Sql Server. These are also
used as template Views for all newly created databases. These system Views will be automatically attached to any
user defined database.

We have the following types of system defined views.

1.1 Information Schema View

SQL Server has twenty different information schema views. These are used to display information about a
database, such as its tables and columns. The names of these views start with INFORMATION_SCHEMA, followed
by the view name.

--Create a table
create table Employee_Test
(
Emp_ID int identity,
Emp_Name varchar(55),
Emp_Technology varchar(55),
Emp_Sal decimal (10,2),
Emp_Designation varchar(20)
)

--To view detailed information of the columns of table Employee_Test
SELECT * FROM INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME='Employee_Test'

1.2 Catalog View

Catalog views were introduced with SQL Server 2005. These are used to show the database's self-describing
information.

select * from sys.tables

1.3 Dynamic Management View

Dynamic Management Views were introduced in SQL Server 2005. These views give the administrator information
about the current state of the SQL Server machine. These values help the administrator to analyze problems and
tune the server for optimal performance. They are of two types:

1.3.1 Server-scoped Dynamic Management View
These are stored only in the Master database.

1.3.2 Database-scoped Dynamic Management View
These are stored in each database.

--To see all SQL Server connections
SELECT connection_id,session_id,client_net_address,auth_scheme
FROM sys.dm_exec_connections

2. User Defined Views

These types of views are defined by users. We have two types of user defined views.

Simple View

When we create a view on a single table, it is called a simple view.

--Now insert data into table Employee_Test
Insert into Employee_Test values ('Amit','PHP',12000,'SE');
Insert into Employee_Test values ('Mohan','ASP.NET',15000,'TL');
Insert into Employee_Test values ('Avin','C#',14000,'SE');
Insert into Employee_Test values ('Manoj','JAVA',22000,'SSE');
Insert into Employee_Test values ('Riyaz','VB',18000,'TH');

-- Now create a view on the single table Employee_Test
create VIEW vw_Employee_Test
AS
Select Emp_ID ,Emp_Name ,Emp_Designation
From Employee_Test

-- Query the view like a table
Select * from vw_Employee_Test

In a simple view we can insert, update and delete data. We can insert through a simple view only if the primary
key and all NOT NULL columns are included in the view.

-- Insert data through view vw_Employee_Test
insert into vw_Employee_Test(Emp_Name, Emp_Designation) values ('Shailu','SSE')
-- Now see the affected view
Select * from vw_Employee_Test

-- Update data through view vw_Employee_Test
Update vw_Employee_Test set Emp_Name = 'Pawan' where Emp_ID = 6
-- Now see the affected view
Select * from vw_Employee_Test

-- Delete data through view vw_Employee_Test
delete from vw_Employee_Test where Emp_ID = 6
-- Now see the affected view
Select * from vw_Employee_Test

Complex View

When we create a view on more than one table, it is called a complex view.

--Create another table
create table Personal_Info
(
Emp_Name varchar(55),
FName varchar(55),
DOB varchar(55),
Address varchar(55),
Mobile int,
State varchar(55)
)

-- Now insert data (the column list is given explicitly because FName is not supplied)
Insert into Personal_Info (Emp_Name,DOB,Address,Mobile,State) values ('G.Chaudary','22-10-1985','Ghaziabad',96548922,'UP');
Insert into Personal_Info (Emp_Name,DOB,Address,Mobile,State) values ('B.S.Chauhan','02-07-1986','Haridwar',96548200,'UK');
Insert into Personal_Info (Emp_Name,DOB,Address,Mobile,State) values ('A.Panwar','30-04-1987','Noida',97437821,'UP');
Insert into Personal_Info (Emp_Name,DOB,Address,Mobile,State) values ('H.C.Patak','20-07-1986','Rampur',80109747,'UP');
Insert into Personal_Info (Emp_Name,DOB,Address,Mobile,State) values ('M.Shekh','21-10-1985','Delhi',96547954,'Delhi');

-- Now create a view on the two tables Employee_Test and Personal_Info
Create VIEW vw_Employee_Personal_Info
As
Select e.Emp_ID, e.Emp_Name,e.Emp_Designation,p.DOB,p.Mobile
From Employee_Test e INNER JOIN Personal_Info p
On e.Emp_Name = p.Emp_Name

-- Now query the view like a table
Select * from vw_Employee_Personal_Info

We can only update data in a complex view; we can't insert data through a complex view.

--Update view
update vw_Employee_Personal_Info set Emp_Designation = 'SSE' where Emp_ID = 3
-- See the affected view
Select * from vw_Employee_Personal_Info

Database Indexes

An index is a data structure that is created to improve the performance of data fetch operations on a table. For
example, when there are thousands of records in a table, retrieving information will take a long time. Therefore
indexes are created on columns which are accessed frequently, so that the information can be retrieved quickly.
Indexes can be created on a single column or a group of columns. When an index is created, it first sorts the data
and then assigns a ROWID to each row.

Syntax to Create Index

CREATE INDEX index_name

ON table_name (column_name1, column_name2);

Create Index abc_Index ON Emp(First_Name, Sal, City);
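One way to confirm an index is actually used is SQLite's EXPLAIN QUERY PLAN, sketched here with a hypothetical Emp table (the document's syntax targets SQL Server, where the equivalent is the execution plan):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Emp (First_Name TEXT, Sal INTEGER, City TEXT)")
# Composite index matching the CREATE INDEX example in the text.
con.execute("CREATE INDEX abc_Index ON Emp (First_Name, Sal, City)")
con.execute("INSERT INTO Emp VALUES ('Amit', 12000, 'Delhi')")

# The plan shows a SEARCH using abc_Index instead of a full table SCAN,
# because the WHERE clause constrains the index's leading column.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Emp WHERE First_Name = 'Amit'"
).fetchall()
print(plan)
```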

Difference between Views and Index

Both Views and Indexes are created on top of a table but each of them serves a specific purpose. An Index is a data
structure that is created to improve the performance of the data fetch operations on a table. View is similar to a
table but may contain data from one or more tables connected to one another based on the business logic. A view
can be created to implement business logic or to conceal the underlying table implementation from everyone.

Views

A view is simply a subset of a table, stored logically in a database; that is, a view is a virtual table in the database
whose contents are defined by a query.

Views are used for security purposes in databases: views restrict the user from viewing certain columns and rows,
meaning that by using a view we can apply restrictions on accessing particular rows and columns for a specific
user. Views display only the data mentioned in the query, so they show only the data returned by the query
defined at the time the view was created.

A view is stored as a SELECT statement in the database. It provides security for both data and table: if we drop a
view, no damage occurs to the table. Network traffic can also be controlled, because a large query which occupies
more memory can be stored as a view.

Index

 Indexes are special lookup tables that the database search engine can use to speed up data retrieval.
Simply put, an index is a pointer to data in a table. An index in a database is very similar to an index in the
back of a book.
 For example, if you want to reference all pages in a book that discuss a certain topic, you first refer to the
index, which lists all topics alphabetically, and you are then referred to one or more specific page numbers.
 An index speeds up SELECT queries and WHERE clauses, but it slows down data input with UPDATE and
INSERT statements. Indexes can be created or dropped with no effect on the data.
Advantages of views

Security

 Each user can be given permission to access the database only through a small set of views that contain
the specific data the user is authorized to see, thus restricting the user's access to stored data
Query Simplicity

 A view can draw data from several different tables and present it as a single table, turning multi-table
queries into single-table queries against the view.
Structural simplicity

 Views can give a user a "personalized" view of the database structure, presenting the database as a set of
virtual tables that make sense for that user.
Consistency

 A view can present a consistent, unchanged image of the structure of the database, even if the underlying
source tables are split, restructured, or renamed.

Data Integrity

 If data is accessed and entered through a view, the DBMS can automatically check the data to ensure that
it meets the specified integrity constraints.
Logical data independence

 Views can make the application and the database tables independent of each other to a certain extent.
Without views, the application must be written directly against the tables; with views, the program can
be written against the views, decoupling the application from the underlying table structure.
Disadvantages of views

Performance

 Views create the appearance of a table, but the DBMS must still translate queries against the view into
queries against the underlying source tables. If the view is defined by a complex, multi-table query, then
even simple queries against the view may take considerable time.
Update restrictions

 When a user tries to update rows of a view, the DBMS must translate the request into an update on rows
of the underlying source tables. This is possible for simple views, but more complex views are often
restricted to read-only.
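The view and index behaviour described above can be sketched with SQLite via Python's built-in sqlite3 module. The table and sample data are illustrative, with names borrowed from the CREATE INDEX example earlier in these notes:

```python
import sqlite3

# In-memory database for the sketch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Emp (Emp_ID INTEGER PRIMARY KEY, First_Name TEXT, Sal REAL, City TEXT)")
cur.executemany("INSERT INTO Emp (First_Name, Sal, City) VALUES (?, ?, ?)",
                [("Asha", 50000, "Chennai"), ("Ravi", 65000, "Mumbai"), ("John", 40000, "Chennai")])

# Index: a lookup structure that speeds up SELECT/WHERE on these columns.
cur.execute("CREATE INDEX abc_Index ON Emp(First_Name, Sal, City)")

# View: a stored SELECT; it holds no data of its own and here conceals
# the Sal column for security.
cur.execute("CREATE VIEW Emp_Public AS SELECT First_Name, City FROM Emp")

rows = cur.execute("SELECT * FROM Emp_Public ORDER BY First_Name").fetchall()
print(rows)  # only First_Name and City are exposed

# Dropping the view does no damage to the underlying table.
cur.execute("DROP VIEW Emp_Public")
print(cur.execute("SELECT COUNT(*) FROM Emp").fetchone()[0])  # 3
```

Note that dropping the view leaves all three rows of Emp intact, matching the security point above.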

Common SQL Question

How to find duplicate records

SELECT Col1, Col2, COUNT(*) - 1 AS extra_copies FROM tablename GROUP BY Col1, Col2 HAVING COUNT(*) > 1;
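A runnable sketch of this duplicate check, using SQLite with an illustrative employee table (COUNT(*) - 1 gives the number of extra copies of each duplicated row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (EmpID INTEGER, EmpName TEXT)")
cur.executemany("INSERT INTO employee VALUES (?, ?)",
                [(1, "Asha"), (2, "Ravi"), (2, "Ravi"), (3, "John"), (2, "Ravi")])

# GROUP BY the full row and keep only groups that occur more than once.
dups = cur.execute("""
    SELECT EmpID, EmpName, COUNT(*) - 1 AS extra_copies
    FROM employee
    GROUP BY EmpID, EmpName
    HAVING COUNT(*) > 1
""").fetchall()
print(dups)  # [(2, 'Ravi', 2)] -> the (2, 'Ravi') row has two extra copies
```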

ETL Testing

ETL stands for Extract, Transform and Load. It collects data from heterogeneous source systems (databases),
transforms it, and loads it into a data warehouse (target). Data is first loaded into staging (temporary)
tables; based on business rules, the data is then mapped into the target tables. This mapping is configured
manually using an ETL tool. ETL does not load duplicate data.

The speed of the transformation process depends on the source and target data warehouse. It takes into
account the OLAP (Online Analytical Processing) structure and the data warehouse model.

Difference between Database and Data Warehouse testing

Database testing and data warehouse testing may look similar, but in fact the two take different directions.

 Database testing is done on smaller volumes of data in OLTP (Online Transaction Processing)
databases, while data warehouse testing is done on large volumes of data in OLAP (Online
Analytical Processing) databases.
 In database testing, data is consistently injected from uniform sources, while in data warehouse testing
most of the data comes from different kinds of data sources that are often inconsistent with one another.
 In DB testing we generally perform all CRUD (Create, Read, Update and Delete) operations, while in
data warehouse testing we mostly use read-only (SELECT) operations.
 Normalized databases are used in DB testing, while de-normalized databases are used in data warehouse testing.

Difference between Normalization and De-normalization

 Normalization is the process of dividing larger tables into smaller ones and defining relationships
between them to reduce redundant data, while de-normalization is the process of adding redundant
data to optimize read performance.
 In a normalized database, INSERT and UPDATE operations are fast because there are no duplicates to keep in sync, though SELECTs may require joins.
 In a de-normalized database, SELECT operations are fast, but INSERTs and UPDATEs are slower because every copy of the duplicated data must be updated.

Normalization

In an RDBMS, normalization is the process of organizing data to minimize redundancy. Normalization usually involves
dividing a database into two or more tables and defining relationships between the tables. The main objective is to
isolate data so that additions, deletions, and modifications (INSERT, SELECT, UPDATE, etc.) can be performed without introducing anomalies.

Types of Normal Form

 First Normal Form (1NF): All values are atomic; this is referred to as the table’s atomicity.
o Eliminate duplicative columns from the same table.
o Create a separate table for each set of related data.
o Identify each set of related data with a primary key.
 Second Normal Form (2NF): No partial dependency; records should not depend on anything other than the
table's primary key.
o Eliminate redundant data.
o Create separate tables for sets of values that apply to multiple records.
o Relate these tables with a foreign key.

Ex: Consider a customer's address in an accounting system. The address is needed by the Customers table,
but also by the Orders, Shipping, Invoices, Accounts Receivable, and Collections tables. Instead of storing
the customer's address as a separate entry in each of these tables, store it in one place, either in the
Customers table or in a separate Addresses table.

 Third Normal Form (3NF): Every non-prime attribute of the table must depend on the primary key.
o Eliminate transitive functional dependencies from the table.
o Ex: Student table - SID, SName, DOB, Street, City, State & ZIP (here SID is the primary key), but
Street, City & State depend on ZIP (the dependency between ZIP and these other fields is called a
transitive dependency), so we need to move Street, City & State to a new table keyed by ZIP (there ZIP is the primary key).
o So now there are two tables:
 Student table: SID, SName, DOB & ZIP
 Address table: ZIP, Street, City & State

 Boyce Codd Normal Form (BCNF)
o BCNF is an extension/higher version of 3NF.
o BCNF states that for any non-trivial functional dependency X → A, X must be a super-key. In the
decomposition above, SID is the super-key of the Student relation and ZIP is the super-key of
the Address relation. So,
 SID → SName, ZIP and
 ZIP → City

 Fourth Normal Form (4NF)
o Eliminates multi-valued dependencies.

 Fifth Normal Form (5NF)
o Every join dependency for the entity is a consequence of its candidate keys.
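The 3NF decomposition from the Student/Address example above can be demonstrated with SQLite (sample data invented for illustration):

```python
import sqlite3

# The transitive dependency SID -> ZIP -> (Street, City, State) is removed
# by moving the address columns into a separate table keyed by ZIP.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Student (SID INTEGER PRIMARY KEY, SName TEXT, DOB TEXT, ZIP TEXT)")
cur.execute("CREATE TABLE Address (ZIP TEXT PRIMARY KEY, Street TEXT, City TEXT, State TEXT)")
cur.execute("INSERT INTO Address VALUES ('600001', 'Mount Road', 'Chennai', 'TN')")
cur.executemany("INSERT INTO Student VALUES (?, ?, ?, ?)",
                [(1, "Asha", "1990-01-01", "600001"), (2, "Ravi", "1991-02-02", "600001")])

# The city is stored once per ZIP, not once per student; a join
# reassembles the pre-decomposition view of the data.
rows = cur.execute("""
    SELECT s.SName, a.City
    FROM Student s JOIN Address a ON s.ZIP = a.ZIP
    ORDER BY s.SID
""").fetchall()
print(rows)  # [('Asha', 'Chennai'), ('Ravi', 'Chennai')]
```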

ETL Testing Methodology includes following steps:

 Understanding of data to be reported


 Understanding and review of data model
 Understanding and review of source to target mappings (transformations)
 Data Quality Assessment of Source data
 Packages testing
 Schema testing (source and target)
 Verify Data completeness
 Verification of transformation rules
 Comparison of sample data between source and target
 Checking of referential integrities and relations (primary key foreign key)
 Data Quality checks on target warehouse
 Performance tests
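For example, the "verify data completeness" step can be sketched as a source-to-target reconciliation. Table names and data here are illustrative, not taken from any specific ETL tool:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src_orders (id INTEGER, amount REAL)")
cur.execute("CREATE TABLE tgt_orders (id INTEGER, amount REAL)")
src = [(1, 100.0), (2, 250.0), (3, 75.0)]
cur.executemany("INSERT INTO src_orders VALUES (?, ?)", src)
cur.executemany("INSERT INTO tgt_orders VALUES (?, ?)", src[:2])  # one record lost in the load

# Record count reconciliation between source and target.
src_count = cur.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = cur.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
print("source:", src_count, "target:", tgt_count)

# An EXCEPT (MINUS) query pinpoints the records missing from the target.
missing = cur.execute(
    "SELECT id, amount FROM src_orders EXCEPT SELECT id, amount FROM tgt_orders"
).fetchall()
print("missing from target:", missing)  # [(3, 75.0)]
```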

ETL Tester’s Responsibilities

An ETL tester primarily tests source data extraction, business transformation logic, and target table loading. Many
checks are involved in doing this; they are given below.

1. Stage or staging table / SFS or MFS file created from the source upstream system - the following checks come under this:

 Record count check
 Reconcile records with source data
 No junk data loaded
 Key or mandatory fields not missing
 No duplicate data loaded
 Data type and size check

2. Business transformation logic applied - the following checks come under this:

 Business data checks, e.g. a telephone number cannot be more than 10 digits or contain character data
 Record count check after the active/passive transformation logic is applied
 Fields derived from the source data are correct
 Data flow from stage to intermediate tables is correct
 Surrogate key generation check, if any

3. Target table loading from the stage file or table after applying transformations - the following checks come under this:

 Record count check from the intermediate table or file to the target table
 Mandatory or key field data not missing or NULL
 Aggregate or derived values correctly loaded into the fact table
 Views created on the target table are checked
 Truncate-and-load table check
 CDC (Change Data Capture) applied on incremental-load tables
 Dimension table check & history table check
 Business rule validation on the loaded table
 Check reports based on the loaded fact and dimension tables
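The telephone-number business rule mentioned in step 2 can be sketched as a simple validation pass over staged records (the record layout is illustrative):

```python
import re

# Rule from the notes: a telephone number must be exactly 10 digits
# and must not contain character data.
PHONE_RULE = re.compile(r"^\d{10}$")

def phone_ok(value: str) -> bool:
    """Return True if the value satisfies the 10-digit telephone rule."""
    return bool(PHONE_RULE.match(value))

staged = [
    {"id": 1, "phone": "9876543210"},   # valid
    {"id": 2, "phone": "98765abc10"},   # character data -> reject
    {"id": 3, "phone": "987654321"},    # only 9 digits -> reject
]
failures = [r["id"] for r in staged if not phone_ok(r["phone"])]
print("records failing the phone rule:", failures)  # [2, 3]
```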

How do you find and eliminate duplicate records in a table?

Select Syntax

SELECT COUNT(*) FROM TableName GROUP BY Col1, Col2, ... HAVING COUNT(*) > 1;

Example

SELECT COUNT(*) FROM employee GROUP BY EmpID, EmpName, Contact, Age, Sal HAVING COUNT(*) > 1;

Delete Syntax

A common approach (Oracle/SQLite style, using the internal row identifier; other databases need a different key) keeps one copy of each duplicated row:

DELETE FROM employee WHERE rowid NOT IN (SELECT MIN(rowid) FROM employee GROUP BY EmpID, EmpName, Contact, Age, Sal);
Healthcare Domain Concepts

Life Insurance

Healthcare: Medical, Dental, Vision, and Dual (combination) plans

DTMF - touch-tone phone input (used in IVR phone systems)
Plan types - HMO, PPO


Endowment

A life insurance policy that pays the assured sum (face amount) on a fixed date or upon the death of the insured,
whichever comes earlier. Endowment policies carry higher premiums than conventional whole life
policies and term insurance, but are useful in meeting special lump-sum needs such as college expenses or buying a
retirement home. Also called an endowment life policy or endowment policy.

1. Debt instrument: Amount of an obligation to be collected or paid at maturity.

2. Endowment insurance: Amount received by an insured (or his or her beneficiary) on reaching a specified age at
the end of endowment period.

Money back

ULIP

Agents commission in Insurance Policies

In this article we will look at the commission structure of insurance policies - Endowment, Money Back,
and ULIP plans - and how much commission an agent earns per year from these policies. We looked at mutual
fund commissions earlier; now let's see how much commission an agent earns from insurance policies.

As per the Insurance Act, 1938, insurance companies are allowed to pay a maximum commission of 40 per cent of
the first year's premium, 7.5 per cent of the second year's premium, and 5 per cent from then on. The commission
is limited to 2 per cent for single-premium policies. For pension plans, the commission is limited
to 7.5 per cent of the first year's premium and 2 per cent thereafter. Currently most policies pay
commissions close to these limits. Let us quickly look at some facts on life insurance.

 The average sum assured of an insured Indian is around Rs 90,000.
 Rs 1 trillion worth of policies lapsed in 2008-09. This is mostly because investors discarded their old
policies to buy new ones, thanks to agents who tell people about another "hot" plan in the market. Another
reason is that investors buy policies with higher premiums than they can actually afford and
later feel it is time to stop paying.
 India's insurance penetration is around 7.5% of the global average, i.e. 0.16% of GDP against a
global average of 2.14%.
 As per the IRDA report 2008-09, the insurance industry had 29.37 lakh agents by the end of Mar 2009, of
which 13 lakh agents were added during 2008-09.

Life Insurance Commission Example

Policy Type             Premium Paying Term   Upfront Commission   Trail Commission   Trail Commission
                                              (1st Year)           (2nd & 3rd yr)     (from 4th yr)

Endowment / Term Plans  15+ yrs               25% - 35% *          7.5%               5%
Endowment / Term Plans  10-14 yrs             20% - 28% *          7.5%               5%
Endowment / Term Plans  5-9 yrs               14%                  5%                 5%
Endowment / Term Plans  Single Premium        2%                   0%                 0%
Money Back              15+ yrs               15% - 21% *          10%                5%
ULIPs                   Regular Premium       20% - 40%            2%                 2%
ULIPs                   Single Premium        2%                   0%                 0%

Note: Some of the numbers are given as ranges, which means the commission can lie anywhere within that
range - usually the minimum commission plus a bonus, if any.

Example

 Policy Type: Endowment Policy
 Premium Paying Term: 20 yrs
 Premium/Year: Rs 1 lac

Agents Commission

Year             Commission Amount   Method

1st Year         Rs 35,000           1 x 35%
2nd & 3rd Year   Rs 15,000           2 x 7.5%
4th - 20th Year  Rs 85,000           17 x 5%

Total            Rs 1.35 lacs        6.75% of total premiums paid
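The arithmetic in this example can be worked through in a short sketch; the rates and premium mirror the commission table above:

```python
# 20-year endowment policy, premium Rs 1,00,000 per year:
# 35% upfront, 7.5% trail for years 2-3, 5% from year 4 onward.
premium = 100_000
term = 20

def commission(year: int) -> float:
    """Agent's commission for a given policy year."""
    rate = 0.35 if year == 1 else 0.075 if year in (2, 3) else 0.05
    return premium * rate

total = sum(commission(y) for y in range(1, term + 1))
print(round(total))                         # 135000 -> Rs 1.35 lacs
print(round(total / (premium * term), 4))   # 0.0675 -> 6.75% of all premiums paid
```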

Importance of Life Insurance in India

One of my good friends had a small argument with me: she would not invest in a term insurance plan,
because she would not get any "returns" out of it. I believe investing in a term plan looked very unprofitable
to her, as she never gets back the money she paid as premiums if she survives.

Endowment plans looked nice to her, because they provide money if you are dead and even if you survive. You get
back money as the prize for not dying!

With respect to term insurance, she understood that her family would get the money from the insurance
company in case of her death, but she was focused on the fact that she would get nothing back if she
survives. What is the return in that case? Nothing!!! It looked like someone was fooling you with a product called
"Term Insurance", where you "invest" premiums to get nothing at the end.

Let me now tell you why this happens and give you some insight on this matter.

I have already talked earlier in my last post “Life Insurance and how to go about it”, about Term Insurance. Let me
now take deeper dive into it and talk about the reasoning part.

I will first talk about the fundamentals of insurance, then about endowment policies - why they are popular
and what people don't realize about them - and finally why term insurance is the right choice for most people.

Basics of Life Insurance

What happens in an average family: there is someone who earns, and the family comprises a wife, kids, and
parents, or some subset of these members. The head of the family earns and the family lives happily; all the
expenses are met from the earnings of this main member, most often the husband. Now consider that this person
dies in an accident, or for that matter because of any event. What happens to his family members, beyond the
psychological trauma? If they don't have money to take care of themselves, either someone from the
family has to take up a job and start working, which may not be possible for them, or they have to lower
their standard of living to manage the expenses. They are now totally unsecured from the future's point of view.
In short, their finances are in ruins, which should not have happened. I gave this detailed explanation of the
circumstances because I wanted you to understand how bad things can get, and why proper measures must be
taken against this.

What is the Solution?

Adequate coverage!!! This can't be compromised. You must have a backup plan which can give your family the
same kind of income, so that they are not short of money in case the main earner is gone. If there are
debts like a home loan, or other goals that need money apart from regular income, the cover must be
large enough to cover those too.

For example: Robert has family expenses of Rs 25,000 per month and a home loan of Rs 25 lacs to be paid
within 10 yrs. He is 27 yrs old. He has a wife, 2 kids, and parents, all of them financially dependent on him,
and investments of Rs 5 lacs. In case he dies, who will take care of the home loan, and who will provide the
family enough money to live comfortably? They need 25k x 12 = Rs 3 lacs per year, which they can get every
month if they have a corpus of Rs 35-40 lacs: put in a bank, it will earn about Rs 25,000 per month as interest
which they can use. Considering inflation it will not be enough after some years, but let's leave that aside for
this example. Add the home loan of Rs 25 lacs to this Rs 40 lacs, and we find that this family must be covered
for a minimum of Rs 65 lacs; Rs 75-80 lacs is a decent cover for this family. Now if he takes a cover of Rs 80
lacs, from that day he can live all his life without the tension of what will happen if he is not there. He
attains peace of mind, because his family is protected with a cover good enough to take care of them. And this
is what you get in "return" from insurance: no monetary return can give you more satisfaction than peace of mind.
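Robert's numbers can be worked through in a short sketch. The 7.5% bank interest rate is my assumption, chosen only to match the Rs 40 lacs corpus figure in the text:

```python
# Required cover = corpus whose bank interest replaces the family's
# income, plus outstanding debts.
monthly_expenses = 25_000
interest_rate = 0.075          # assumed bank interest per year
home_loan = 2_500_000          # Rs 25 lacs outstanding

annual_need = monthly_expenses * 12          # yearly income to replace
income_corpus = annual_need / interest_rate  # corpus whose interest covers expenses
required_cover = income_corpus + home_loan

print(annual_need)              # 300000  -> Rs 3 lacs per year
print(round(income_corpus))     # 4000000 -> Rs 40 lacs
print(round(required_cover))    # 6500000 -> Rs 65 lacs minimum
```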

So before doing anything else, his first step is to give adequate cover to his family; that is his most important
responsibility as a husband, father, and son. He must understand that this is not an investment for monetary
benefit later in his life, but is for his family's happiness and future.

One point to remember and never forget: this is the minimum cover required for the family, and anything less
than this means taking a risk with the family's future.

Endowment or Money back Policies

Let's discuss the problems with these plans with respect to the above example.

High premium: For an Rs 80 lacs cover over, say, 30 yrs, the premium payable will be at least Rs 2-2.5 lacs/year
(a conservative figure). A premium that high is not affordable for someone like Robert, so what do people do?
They take the kind of cover whose premium they can pay easily - Rs 5 lacs, 10 lacs, or at most 20 lacs. And
guess who suffers in case of death: HIS LOVED ONES.

It might also happen that they compromise on a lot of small things which matter at that moment in
time, like buying a bike for a son, which they can't afford because of the insurance premium they have to pay,
or a family vacation they forgo because of the premium.

Money back at the end of maturity is like a penny after so many years:

This is something most people overlook. They just see the numbers - 5 lacs, 10 lacs, or 20 lacs - and at the
time of taking the insurance it looks like a good figure, because they see the number, not its value after
many years; they don't take inflation into account. In the above example, if Robert takes a cover of Rs 15
lacs through a money back policy, what happens if he survives the tenure? He gets Rs 15 lacs at the end.
Great money after 30 yrs, isn't it?

Let's see how great this money really is. His monthly expenses will have grown from Rs 25,000 per month to about
Rs 1.5 lacs per month (considering inflation of 6%). This money will then sustain him for no more than 10
months. For so many years he pays a high premium each year, just to get back money that covers 10 months of
expenses. What the hell!!!

Under-insurance: Because people want money back on survival, and because of the high premium, they end up
taking a policy whose premium fits their budget, which means less cover. They do not realise that they are
highly under-insured, because they see insurance as an investment product and not as a protection cover for
their family. When they die, their family gets the money from the insurance company, but most of the time it
is not enough, and it erodes very soon.

Term Insurance Policies

Let's discuss the features of term policies with respect to the above example.

Cheap premium: The premium is very low for term insurance policies. For the above example, the yearly premium
for an Rs 75 lacs cover for 25 yrs is just Rs 20,000, or about Rs 1,600 per month!!! This is affordable for most
people. It provides the fundamental requirement of a good cover at a low premium, and if you think of
returns, good cover at low premium is itself a good enough return: your family's protection at
low cost is the return you get.

Opportunity to invest the rest of the money in high-return investments:

With term insurance you save a lot of money on premiums, and you can invest that money as you wish in
high-return instruments. In endowment policies you anyway lock money away for the long term and get it back
only after a long time, so you can instead put your saved money into long-term investment products that can
generate great returns.

One option is equity diversified mutual funds and direct equity (depending on a person's ability and
interest). Over the long term (15-20 yrs) diversified equity has given fabulous returns, and the risk is reduced
by the long horizon. If you consider the India growth story, equities look great in the long term, hence they
are the obvious choice; historically they have returned 15%+ CAGR over 15-20 yrs.

It is also flexible: you can skip investing for a year or two if you want to use the money for a family vacation
or some important event.

Conclusion:

Insurance is not an investment product; it is a protection instrument for your family or anyone you want to cover.
There are other products for your investments.

Let your finances be the way you want your life to be, SIMPLE!!!
Don't mix insurance and investments. Products like ULIPs and endowment or money back policies never excited me;
they complicate things and confuse people.

They can be good if you understand how to make the most of them, but that requires knowledge and expertise.
They offer some flexibility, but still they are not worth it.

Read more on term insurance in my old article.

I would be happy to read your comments or disagreement on any topic. Please leave a comment.

Disclaimer: All the opinions are personal and shall be taken as knowledge sharing and not as encouragement.

------------------------------------------------------------------------------------------------------------------------------------
Insurance Domain

Insurance is a form of risk management used to avoid potential losses. It allows individuals, businesses, and
other entities to protect themselves.

Types of Insurance

 Life Insurance
 Health Insurance
 Vehicle Insurance (Commercial Vehicle and Personal Vehicle)
 General Liability Insurance
 Professional Liability Insurance
 Property Insurance
 Worker’s Compensation Insurance
 Directors and Officers Insurance

CRU-Commission Rates Update

--------------------------------------------------------------------------------------------------------------------------------------------
How Mobile Banking Works

Basic mobile banking technologies: there are four fundamental approaches to mobile banking. The first two rely
on technologies that are standard features on almost all cell phones.

Interactive Voice Response (IVR)


If you’ve ever called your credit card issuer and meandered through a maze of prompts -- "For English, press 1; for
account information, press 2" -- then you’re familiar with interactive voice response. In mobile banking, it works
like this:
 Banks advertise a set of numbers to their customers.
 Customers dial an IVR number on their mobile phones.
 They are greeted by a stored electronic message followed by a menu of options.
 Customers select an option by pressing the corresponding number on their keypads.
 A text-to-speech program reads out the desired information.
IVR is the least sophisticated and the least "mobile" of all the solutions. In fact, it doesn't require a mobile phone at
all. It also allows only inquiry-based transactions, so customers can't use it for more advanced services.

Short Message Service (SMS)


In some circles, mobile banking and SMS banking are synonymous. That's because SMS banking uses text
messaging -- the iconic activity of cell phone use. SMS banking works in either a push mode or a pull mode. In push
mode, the bank sends a one-way text message to alert a mobile subscriber of a certain account situation or to promote
a new bank service. In pull mode, the mobile subscriber sends a text message with a predefined request code to a
specific number, and the bank responds with a reply SMS containing the requested information.
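The pull-mode request/reply flow above can be sketched as follows; the request codes and account data are invented for illustration:

```python
# Invented demo data: account number -> balance.
ACCOUNTS = {"12345": 10_500.75}

# Predefined request codes the subscriber can text in.
REQUEST_CODES = {
    "BAL": lambda acct: f"Balance for a/c {acct}: Rs {ACCOUNTS[acct]:.2f}",
    "MINI": lambda acct: f"Last 5 transactions for a/c {acct} follow...",
}

def handle_sms(message: str) -> str:
    """Parse 'CODE ACCOUNT' from the incoming SMS and build the reply SMS."""
    code, _, acct = message.strip().partition(" ")
    handler = REQUEST_CODES.get(code.upper())
    if handler is None or acct not in ACCOUNTS:
        return "Invalid request. Send BAL <account> or MINI <account>."
    return handler(acct)

print(handle_sms("BAL 12345"))  # Balance for a/c 12345: Rs 10500.75
```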

