
Platinum Technologies Testing Tools

Quick Test Professional FAQs


------------------------------
1. How are QTP scripts advantageous when you need to re-run the same business scenarios?

A. You record the Quick Test Professional script on one instance and can then execute it on any other
instance. The assumption is that there are no major GUI changes between the recorded instance and the
execution instance.

2. How can you make the scripts generic and what architecture needs to be followed?

A. In order to achieve the above objective, we need to plan the Quick Test Professional script. It should
have two parts:
1. Script – that is generic Quick test script.
2. Data – from the parameter file that is customer instance specific.

E.g., imagine a business flow with the following scenario:


- Login to the web page
- Check mail
- Logout
1. Create a data/parameter file (a flat file [.txt] or an Excel file) that is instance specific.
2. Create (record/program) the QTP initialization script.
3. The initialization script calls the object repository, the common function library, and all QTP
actions/scripts.
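For example, a minimal initialization (driver) script for the above flow might look like the following; the file paths, action names, and library file below are illustrative placeholders, not fixed names:

' Load the common function library (path is illustrative)
ExecuteFile "C:\Framework\CommonFunctions.vbs"

' Read instance-specific data from the parameter file
Dim fso, paramFile, strURL
Set fso = CreateObject("Scripting.FileSystemObject")
Set paramFile = fso.OpenTextFile("C:\Framework\Params.txt", 1)
strURL = paramFile.ReadLine   ' e.g. the login URL of the customer instance
paramFile.Close

' Call the generic actions that hold the business scenario
RunAction "Login", oneIteration
RunAction "CheckMail", oneIteration
RunAction "Logout", oneIteration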

3. How to create an individual script?

A. Record the Quick Test Professional script and modify it to make it a generic script using the following
steps:

a. Set the testing options in the Test Settings Dialog box.


b. Record the script
c. Modify the script to make it generic:
i. Parameterize the hard coded values.
ii. Modify the Object Repository file with regular expressions.
iii. Add unique properties to items in the Object Repository to make recognition simpler.
iv. Insert synchronization points as required.
v. Insert checkpoints for pass/fail validation.
vi. Insert additional programming logic.
vii. Use the Common Functions.
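For step i, e.g., a hard-coded value recorded in the script can be replaced with a Data Table parameter (the object names and the "UserName" column are illustrative):

' Recorded with a hard-coded value:
Browser("Mail").Page("Login").WebEdit("username").Set "john"

' Parameterized against a Data Table column:
Browser("Mail").Page("Login").WebEdit("username").Set DataTable("UserName", dtGlobalSheet)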

4. What is the testing process in QTP? A. The testing process consists of 3 main phases:

1. Gather relevant test information – Input data should be gathered.

2. Create the basic test – Record/Program the scripts in actions

3. Enhance the basic test


Use data-driven scripts to run the test with different sets of data.
Reusable actions - identify the common scenarios, make the scripts generic, and reuse the scripts
across different business scenarios.

5. What are different types of recording modes in QTP?


A. Besides normal recording, there are two special recording modes:
1. Low-Level 2. Analog

Low-Level Recording:

#411,AnnapurnaBlock, AdithyaEnclave, Ameerpet, Hyderabad. 1


Ph:040-64510222 E-Mail:
platinum_hyd@yahoo.com
Use Low-Level Recording when you need to record the exact location of the object on your application
screen.
To record in Low-Level Recording, select “Low-Level Recording” from the “Test” menu while
recording

Analog: Use Analog Recording for applications in which the actual movement of the mouse is what you
want to record. To record in Analog mode, select “Analog Recording” from the “Test” menu while
recording.

6. What is Object repository?

A. The Object Repository dialog box displays a test tree of all objects in the current action or the entire
test (depending on the object repository mode you choose when you create your test). You can use the
Object Repository dialog box to view or modify the properties of any test object in the repository or to
add new objects to your repository.

Object Repository Modes

Per-Action Object Repository:


This is the default setting, and all tests created in QuickTest 5.6 or earlier use this mode. In this mode,
QuickTest automatically creates an object repository file for each action in your test so that you can
record and run tests without creating, choosing, or modifying object repository files.
Shared Object Repository:
In this mode, you can use one object repository file for multiple tests if the tests include the same objects.
Object information that applies to many tests is kept in one central location. When the objects in your
application change, you can update them in one location for multiple tests.

7. How to select the Object repository mode?

A. To select the object repository mode


Go to Test > Settings > Resources tab to change the object repository mode.

The default object repository file name is default.tsr


You can change the object repository mode only while the test contains no actions.

8. What is Active screen? What are the advantages of Active screen?


A. The Active Screen captures all the properties of the application and makes them available even
offline, when you are not connected to the application.
The main advantage is that checkpoints can be added without connecting to the application.

9. What are different Screen capture options available for Active screen?

A. Complete—Captures all properties of all objects in the application’s active window/dialog box/Web
page in the Active Screen of each step. This level saves Web pages after any dynamic changes and saves
Active Screen files in a compressed format.

Partial—(Default). Captures all properties of all objects in the application’s active window/dialog
box/Web page in the Active Screen of the first step performed in an application’s window, plus all
properties of the recorded object in subsequent steps in the same window. This level saves Web pages
after any dynamic changes and saves Active Screen files in a compressed format.
Minimum—Captures properties only for the recorded object and its parent in the Active Screen of each
step. This level saves the original source HTML of all Web pages (prior to dynamic changes) and saves
Active Screen files in a compressed format.

None—Disables capturing of Active Screen files for all applications and Web pages.

10. How QTP identifies the objects in the application during runtime?

A. QTP uses different properties to identify the objects in the applications. They are:
a. Mandatory Properties
b. Assistive Properties
c. Ordinal Identifiers
d. Smart Identification

11. Explain all Object identification properties.

A. Mandatory and Assistive Properties:


During the test run, QuickTest looks for objects that match all properties in the test object description; it
does not distinguish between properties that were learned as mandatory properties and those that were
learned as assistive properties.

Smart Identification: QuickTest uses a very similar process of elimination with its Smart Identification
mechanism to identify an object, even when the recorded description is no longer accurate. Even if the
values of your test object properties change, QuickTest’s TestGuard technology maintains your test’s
reusability by identifying the object using Smart Identification.

12. What are Ordinal Identifiers? Explain in detail.

A. Ordinal Identifiers are


Index:
Indicates the order in which the object appears in the application code relative to other objects with an
otherwise identical description.

Location:
Indicates the order in which the object appears within the parent window, frame, or dialog box relative to
other objects with an otherwise identical description. Values are assigned from top to bottom, and then
left to right.

The Web Browser object has a third ordinal identifier type:

Creation Time:
Indicates the order in which the browser was opened relative to other open browsers with an otherwise
identical description.

13. What is Smart Identification?

A. Smart Identification:
If QuickTest is unable to find any object that matches the recorded object description, or if it finds more
than one object that fits the description, then QuickTest ignores the recorded description, and uses the
Smart Identification mechanism to try to identify the object.

While the Smart Identification mechanism is more complex, it is more flexible, and thus, if configured
logically, a Smart Identification definition can probably help QuickTest identify an object, if it is present,
even when the recorded description fails.

14. What are the properties available in Smart identification?

A. Base filter properties:


The most fundamental properties of a particular test object class; those whose values cannot be changed
without changing the essence of the original object. For example, if a Web link's tag was changed from
<A> to any other value, you could no longer call it the same object.

Optional filter properties:
Other properties that can help identify objects of a particular class, as they are unlikely to change on a
regular basis, but which can be ignored if they are no longer applicable.

15. What is Object Spy? How is it used in QTP?

A. Using the Object Spy, you can view the run-time or test object properties and methods of any object in
an open application. You use the Object Spy pointer to point to an object. The Object Spy displays the
selected object's hierarchy tree. It displays the run-time or test object properties and values of the
selected object in the Properties tab. It displays the run-time or test object methods associated with the
selected object in the Methods tab.

16. What are Run-Time Object Properties / Run-Time Object Methods?

A. Run-Time Object Properties / Run-Time Object Methods:


You can use the Object property to access the native properties of any run-time object. For example, you
can retrieve the current value of the ActiveX calendar’s internal Day property as follows:

E.g. sample code:

Dim MyDay
Set MyDay = Browser("index").Page("Untitled").ActiveX("MSCAL.Calendar.7").Object.Day

17. What are Test Object Properties / Test Object Methods?

A. Test Object Properties / Test Object Methods:


You can use the GetTOProperty and SetTOProperty methods to retrieve and set the value of test object
properties for test objects in your test.

You can use the GetROProperty to retrieve the current property value of objects in your application
during the test run.
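E.g. sample code showing the three methods together (the object hierarchy is illustrative):

' Read the "name" property stored in the test object description
recName = Browser("Mail").Page("Login").WebEdit("username").GetTOProperty("name")

' Change the description QTP uses to identify the object during this run
Browser("Mail").Page("Login").WebEdit("username").SetTOProperty "name", "user_name"

' Read the current value of a property from the run-time object
curValue = Browser("Mail").Page("Login").WebEdit("username").GetROProperty("value")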

18. What are User-Defined Test Object Classes? How are they mapped?

A. User-Defined Test Object Classes:


The Object Mapping dialog box enables you to map an object of an unidentified or custom class to a
Standard Windows class. For example, if your application has a button that cannot be identified, this
button is recorded as a generic WinObject. You can teach QuickTest to identify your object as if it
belonged to a standard Windows button class. Then, when you click the button while recording a test,
QuickTest records the operation in the same way as a click on a standard Windows button. When you
map an unidentified or custom object to a standard object, your object is added to the list of Standard
Windows test object classes as a user-defined test object. You can configure the object identification
settings for a user defined object class just as you would any other object class

19. What are checkpoints?

A. A checkpoint is a verification point that compares a current value for a specified property with the
expected value for that property. This enables you to identify whether your Web site or application is
functioning correctly.

When you add a checkpoint, Quick Test adds a checkpoint with an icon in the test tree and adds a Check
Point statement in the Expert View. When you run the test, Quick Test compares the expected results of
the checkpoint to the current results. If the results do not match, the checkpoint fails. You can view the
results of the checkpoint in the Test Results window.

20. What is a standard checkpoint?

A. You can check that a specified object in your application or on your Web page has the property values
you expect, by adding a standard checkpoint to your test. To set the options for a standard checkpoint,
you use the Checkpoint Properties dialog box.

21. What is Text or Text Area Checkpoint?

A. Text or Text Area Checkpoint Results

By adding text or text area checkpoints to your tests, you can check that a text string is displayed in the
appropriate place in your application or on your Web page. When you run your test, Quick Test compares
the expected results of the checkpoint to the actual results of the test run. If the results do not match, the
checkpoint fails.

23. What is Bitmap Checkpoint?

A. Bitmap Checkpoints:
You can check an area of a Web page or application as a bitmap. While creating a test, you specify the
area you want to check by selecting an object. You can check an entire object or any area within an
object. Quick Test captures the specified object as a bitmap, and inserts a checkpoint in the test. You can
also choose to save only the selected area of the object with your test in order to save disk space.

24. What is Table and Database Checkpoint?

A. Table and Database Checkpoints:


By adding table checkpoints to your tests, you can check that a specified value is displayed in a cell in a
table on your Web page or in your application. By adding database checkpoints to your tests, you can
check the contents of databases accessed by your Web page or application. The results displayed for table
and database checkpoints are similar. When you run your test, Quick Test compares the expected results
of the checkpoint to the actual results of the test run. If the results do not match, the checkpoint fails.

25. What is Accessibility Checkpoint?

A. Accessibility Checkpoints:
You can add accessibility checkpoints to help you quickly identify areas of your Web site that may not
conform to the W3C (World Wide Web Consortium) Web Content Accessibility Guidelines. You can add
automatic accessibility checkpoints to each page in your test, or you can add individual accessibility
checkpoints to individual pages or frames.

26. What is XML Checkpoint?

A. XML Checkpoint:
The XML Checkpoint Properties dialog box displays the element hierarchy and values (character data) of
the selected XML file.

Select the element(s), attribute(s), and/or value(s) that you want to check. For each element you want to
check, select the checks you want to perform. For each attribute or value you want to check, select the
checks you want to perform, or the parameterization options you want to set.

27. What is Synchronization?

A. When you run tests, your application may not always respond with the same speed. For example, it
might take a few seconds:
- for a progress bar to reach 100%
- for a status message to appear
- for a button to become enabled
- for a window or pop-up message to open
You can handle these anticipated timing problems by synchronizing your test to ensure that Quick Test
waits until your application is ready before performing a certain step.

28. What are different functions available for Synchronization?

A. There are several options that you can use to synchronize your test:

You can insert a synchronization point, which instructs Quick Test to pause the test until an object
property achieves the value you specify. When you insert a synchronization point into your test, Quick
Test generates a WaitProperty statement in the Expert View.
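E.g., a synchronization point generates a statement like the following (object names are illustrative); it pauses until the enabled property becomes 1, or 10000 milliseconds elapse:

' Wait up to 10 seconds for the button to become enabled
Browser("Mail").Page("Inbox").WebButton("Send").WaitProperty "enabled", 1, 10000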

29. What is the difference between the Exist and Wait statements? A. Exist() / Wait()

You can insert Exist or Wait statements that instruct QuickTest to wait until an object exists, or to wait a
specified amount of time, before continuing the test.

E.g. Browser("Yahoo").Page("CheckMail").Button("CheckMail").Exist(10)

QTP waits up to 10 seconds for the button to exist on the page. The script proceeds as soon as the button
exists, even before the 10 seconds elapse, unlike the Wait() statement, which waits the full 10 seconds
regardless of whether the button already exists.
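The contrast can be shown with the same (illustrative) objects:

' Polls for up to 10 seconds, but returns True as soon as the button appears
If Browser("Yahoo").Page("CheckMail").Button("CheckMail").Exist(10) Then
    Browser("Yahoo").Page("CheckMail").Button("CheckMail").Click
End If

' Always pauses the full 10 seconds, whether or not the button already exists
Wait 10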

30. What is Default Time Out? A. Default Time Out:


You can also increase the default timeout settings in the Test Settings and Options dialog boxes in order
to instruct Quick Test to allow more time for certain events to occur.

31. What is Parameterization (Data Table Wizard)?

A. You can supply the list of possible values for a parameter by creating a Data Table parameter. Data
Table parameters enable you to create a data-driven test (or action) that runs several times using the data
you supply. In each repetition, or iteration, Quick Test substitutes the constant value with a different
value from the Data Table.
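Within an action you can also loop over the Data Table rows yourself; the "CityName" column and the object names are illustrative:

Dim i
For i = 1 To DataTable.GetSheet(dtGlobalSheet).GetRowCount
    DataTable.GetSheet(dtGlobalSheet).SetCurrentRow i
    Browser("Mail").Page("Search").WebEdit("city").Set DataTable("CityName", dtGlobalSheet)
Next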

32. What are Method Arguments? A. You parameterize method arguments in the Method Arguments
dialog box. To open the Method Arguments dialog box, right-click a step containing a method in the test
tree and choose Method Arguments. The Method Arguments dialog box opens and displays the method
arguments in the step.

33. Well, I would like to run my test with different sets of data, How can I make it with the options
available in QTP?
A. Listed below are the Data Table iteration options:
Run one iteration only:
Runs the test only once, using only the first row in the global Data Table.
Run on all rows:
Runs the test with iterations using all rows in the global Data Table.
Run from row __ to row __ :
Runs the test with iterations using the values in the global Data Table for the specified row range.
34. What are different data tables available?

A. 1. Global Sheet
The Global sheet contains the data that replaces parameters in each iteration of the test.
2. Action Sheets
Each time you add a new action to the test, a new action sheet is added to the Data Table. Action sheets
are automatically labeled with the exact name of the corresponding action. The data contained in an
action sheet is relevant for the corresponding action only.

35. What is an Action?

A. A QuickTest script contains different actions. An action contains the script for a piece of a business
scenario, e.g. logging in to the application, logging out, etc.

It also depends on how you create your framework for testing the applications (if you would like to know
more about frameworks, check out this link: http://www-128.ibm.com/developerworks/rational/library/591.html).

I would suggest that every action hold a piece of a business scenario, which helps to re-use the script in a
better way. Before deciding which scripts are re-usable, first identify all the common scenarios that
occur in different business flows across different modules.
Then prepare the scripts and make them generic. You can call all these functions by making the common
function library available at Test > Settings > Resources.

36. What is Copy of action?

A. Copy of Action:
When you insert a copy of an action into a test, the action is copied in its entirety, including checkpoints,
parameterization, and the corresponding action tab in the Data Table. If the test you are copying into uses
per-action repository mode, the copied action’s action object repository will also be copied along with the
action.

37. What are re-usable actions?

A. Reusable Actions:
Determines whether the action is a reusable action. A reusable action can be called multiple times within
a test and can be called from other tests. Non-reusable actions can be copied and inserted as independent
actions, but cannot be inserted as calls to the original action.

38. What about Call of Action? A. You can insert a call (link) to a reusable action that resides in your
current test (a local action) or in any other test (an external action).

39. When to insert transactions? A. Inserting Transactions:

- During the test run, the Start Transaction signals the beginning of the time measurement. You define
the beginning of a transaction in the Start Transaction dialog box.

- The End Transaction signals the end of the time measurement.

40. What are regular expressions?


A. Regular Expressions:
Regular expressions enable QuickTest to identify objects and text strings with varying values. You can
use regular expressions when:
• Defining the property values of an object
• Parameterizing a step
• Creating checkpoints with varying values
A regular expression is a string that specifies a complex search phrase. By using special characters such
as a period (.), asterisk (*), caret (^), and brackets ([ ]), you can define the conditions of a search. When
one of these special characters is preceded by a backslash (\), QuickTest searches for the literal character.

Here is an example:
The actual pattern for the regular expression search is set using the Pattern property of the RegExp
object. The RegExp.Global property has no effect on the Test method.
The Test method returns True if a pattern match is found; False if no match is found.
The following code illustrates the use of the Test method.
Function RegExpTest(patrn, strng)
    Dim regEx, retVal          ' Create variables.
    Set regEx = New RegExp     ' Create a regular expression.
    regEx.Pattern = patrn      ' Set the pattern.
    regEx.IgnoreCase = False   ' Set case sensitivity.
    retVal = regEx.Test(strng) ' Execute the search test.
    If retVal Then
        RegExpTest = "One or more matches were found."
    Else
        RegExpTest = "No match was found."
    End If
End Function

MsgBox(RegExpTest("is.", "IS1 is2 IS3 is4"))

41. Create a script to display a message.

A.
Dim MyVar
MyVar = MsgBox("Hello World!", 65, "MsgBox Example")
' MyVar contains either 1 or 2, depending on which button is clicked.

42. List the run-time errors in VBScript.


A. VBScript run-time errors are errors that result when your VBScript script attempts to perform an
action that the system cannot execute. VBScript run-time errors occur while your script is being
executed, when variable expressions are being evaluated and memory is being dynamically allocated.

Error Number Description


429 ActiveX component can't create object

507 An exception occurred

449 Argument not optional

17 Can't perform requested operation

430 Class doesn't support Automation

506 Class not defined

11 Division by zero

48 Error in loading DLL

5020 Expected ')' in regular expression

5019 Expected ']' in regular expression

432 File name or class name not found during Automation operation

92 For loop not initialized

5008 Illegal assignment

51 Internal error

505 Invalid or unqualified reference

481 Invalid picture

5 Invalid procedure call or argument

5021 Invalid range in character set

94 Invalid use of Null

448 Named argument not found

447 Object doesn't support current locale setting


445 Object doesn't support this action

438 Object doesn't support this property or method

451 Object not a collection

504 Object not safe for creating

503 Object not safe for initializing

502 Object not safe for scripting

424 Object required

91 Object variable not set

7 Out of Memory

28 Out of stack space

14 Out of string space

6 Overflow

35 Sub or function not defined

9 Subscript out of range

5017 Syntax error in regular expression

462 The remote server machine does not exist or is unavailable

10 This array is fixed or temporarily locked

13 Type mismatch

5018 Unexpected quantifier

500 Variable is undefined

458 Variable uses an Automation type not supported in VBScript

450 Wrong number of arguments or invalid property assignment

42. What is a Recovery Scenario?

A. A recovery scenario gives you an option to take some action to recover from a fatal error in the test.
The errors range from occasional to typical. An occasional error would be an "Out of paper" popup
while printing something; typical errors would be "object is disabled" or "object not found". A test case
can have more than one scenario associated with it, along with the priority or order in which they
should be checked.
43. What does a Recovery Scenario consist of?

A. Trigger: the trigger is the cause for initiating the recovery scenario. It could be any popup window,
any test error, a particular state of an object, or any application error.

Action: the action defines what needs to be done if the scenario has been triggered. It can consist of a
mouse/keyboard event, closing the application, calling a recovery function defined in a library file, or
restarting Windows. You can have a series of the specified actions.

Post-recovery operation: basically defines what needs to be done after the recovery action has been
taken. It could be to repeat the step, move to the next step, etc.

44. When to use a Recovery Scenario and when to use On Error Resume Next?

A. Recovery scenarios are used when you cannot predict at what step the error can occur, or when you
know that the error won't occur in your QTP script but could occur in the world outside QTP; again, an
example would be "out of paper", as this error is caused by the printer device driver. "On Error Resume
Next" should be used when you know an error is expected and don't want to raise it; you may want to
take different actions depending upon the error that occurred. Use Err.Number and Err.Description to
get more details about the error.
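A minimal sketch of the "On Error Resume Next" pattern described above:

code:
--------------------------------------------------------------------------------

On Error Resume Next
x = 1 / 0                 ' provokes run-time error 11, "Division by zero"
If Err.Number <> 0 Then
    Reporter.ReportEvent micWarning, "Expected error", _
        "Error " & Err.Number & ": " & Err.Description
    Err.Clear             ' reset the error object before continuing
End If
On Error Goto 0           ' restore normal error handling

--------------------------------------------------------------------------------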

Library Files or VBScript Files


How do we associate a library file with a test?

Library files are files containing normal VBScript code. A file can contain functions, sub procedures,
classes, etc. You can also use the ExecuteFile function to include a file at run-time. To associate a
library file with your script, go to Test > Settings... and add your library file on the Resources tab.

45. When to associate a library file with a test and when to use ExecuteFile?

When we associate a library file with the test, all the functions within that library are available to all the
actions present in the test. But when we use the ExecuteFile function to load a library file, the functions
are available only in the action that called ExecuteFile. By associating a library with a test we can share
variables across actions (global variables, basically); using association also makes it possible to execute
code as soon as the script runs, because while loading the script on startup QTP executes all the code in
the global scope. We can use ExecuteFile in a library file associated with the test to load dynamic files,
and they will be available to all the actions in the test.
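E.g. loading a library dynamically from within an action (the path is illustrative):

code:
--------------------------------------------------------------------------------

' Functions in the file become available to the calling action only
ExecuteFile "C:\Framework\DynamicHelpers.vbs"

--------------------------------------------------------------------------------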

46. Test and Run-Time Objects


What is the difference between Test Objects and Run-Time Objects?

Test objects are the basic, generic objects that QTP recognizes. A run-time object is the actual object in
the application to which a test object maps.

Can I change the properties of a test object?

Yes. You can use SetTOProperty to change test object properties. It is recommended that you switch off
Smart Identification for the object on which you use the SetTOProperty function.


Can I change the properties of a run-time object?

No (but also yes, indirectly). You can use GetROProperty("outerText") to get the outerText of an object,
but there is no SetROProperty function to change this property. However, you can use
WebElement().Object.outerText = "Something" to change the property through the native object.
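Putting the two together (object names are illustrative):

code:
--------------------------------------------------------------------------------

' Reading a run-time property is supported directly
txt = Browser("Mail").Page("Inbox").WebElement("banner").GetROProperty("outerText")

' There is no SetROProperty, but the native DOM object can be written to
Browser("Mail").Page("Inbox").WebElement("banner").Object.outerText = "Something"

--------------------------------------------------------------------------------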

Action & Functions


What is the difference between an Action and a function?

An action is specific to QTP, while functions are a generic feature of VBScript. An action can have an
object repository associated with it, while a function can't. A function is just lines of code with some (or
no) parameters and a single return value, while an action can have more than one output parameter.

47. Where to use a function or an action?

The answer depends on the scenario. If you want to use the OR (object repository) feature, then you
have to go for an action. If the functionality is not an automation script as such, i.e. a function like
getting a string between two specific characters, this is not specific to QTP and can be done in pure
VBScript, so it should be done in a function and not an action. Code specific to QTP can also be put into
a function using DP (descriptive programming). The decision to use a function or an action depends on
what one is comfortable using in a given situation.

48. Checkpoint & Output value


What is a checkpoint?

A checkpoint is basically a point in the test which validates the truthfulness of a specific thing in the
AUT. There are different types of checkpoints depending on the type of data that needs to be tested in
the AUT: text, image/bitmap, attributes, XML, etc.
What's the difference between a checkpoint and an output value?

A checkpoint only checks a specific attribute of an object in the AUT, while an output value can output
that attribute's value to a column in the data table.
How can I check if a checkpoint passed or not?

code:
--------------------------------------------------------------------------------

chk_PassFail = Browser(...).Page(...).WebEdit(...).Check(Checkpoint("Check1"))
If chk_PassFail Then
    MsgBox "Check Point passed"
Else
    MsgBox "Check Point failed"
End If

--------------------------------------------------------------------------------

My test fails due to a checkpoint failing. Can I validate a checkpoint without my test failing due to the
checkpoint failure?

code:
--------------------------------------------------------------------------------

Reporter.Filter = rfDisableAll   ' Disable all reporting
chk_PassFail = Browser(...).Page(...).WebEdit(...).Check(Checkpoint("Check1"))
Reporter.Filter = rfEnableAll    ' Enable all reporting
If chk_PassFail Then
    MsgBox "Check Point passed"
Else
    MsgBox "Check Point failed"
End If

--------------------------------------------------------------------------------

Environment

How can I import environment variables from a file on disk?

Environment.LoadFromFile "C:\Env.xml"
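The external environment file is a simple XML file; a minimal example (the variable name and value are illustrative):

code:
--------------------------------------------------------------------------------

<Environment>
    <Variable>
        <Name>Param1</Name>
        <Value>SomeValue</Value>
    </Variable>
</Environment>

--------------------------------------------------------------------------------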
How can I check if an environment variable exists or not?

When we use Environment("Param1").Value, QTP expects the environment variable to be already
defined. But when we use Environment.Value("Param1"), QTP will create a new internal environment
variable if it does not already exist. So to be sure that the variable exists in the environment, use
Environment("Param1").Value.
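A defensive existence check can therefore be written like this (the variable name is illustrative):

code:
--------------------------------------------------------------------------------

On Error Resume Next
tmp = Environment("Param1").Value   ' raises an error if Param1 is not defined
If Err.Number <> 0 Then
    MsgBox "Environment variable Param1 does not exist"
    Err.Clear
End If
On Error Goto 0

--------------------------------------------------------------------------------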

How to connect to a database?
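One common way is to use an ADODB connection and recordset from VBScript; the connection string and query below are illustrative:

code:
--------------------------------------------------------------------------------

Dim conn, rs
Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=MyDSN;UID=user;PWD=password"   ' illustrative connection string

Set rs = CreateObject("ADODB.Recordset")
rs.Open "SELECT * FROM Orders", conn          ' illustrative query
Do While Not rs.EOF
    ' process rs.Fields(0).Value here
    rs.MoveNext
Loop
rs.Close
conn.Close

--------------------------------------------------------------------------------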


Software Testing Interview Questions - Test Automation


1. What automated testing tools are you familiar with?
WinRunner, LoadRunner, QTP, Silk Performer, TestDirector, Rational Robot, QARun.
2. How did you use automated testing tools in your job?
1. For regression testing 2. As criteria to decide the condition of a particular build
3. Describe some problems that you had with an automated testing tool.
The problem of WinRunner identifying third-party controls like Infragistics controls.
4. How do you plan test automation?
1. Prepare the automation test plan 2. Identify the scenario 3. Record the scenario 4.
Enhance the scripts by inserting checkpoints and conditional loops 5. Incorporate an
error handler 6. Debug the script 7. Fix the issues 8. Rerun the script and report the
results.
5. Can test automation improve test effectiveness?
Yes. Automating a test makes the test process: 1. Fast 2. Reliable 3. Repeatable
4. Programmable 5. Reusable 6. Comprehensive
6. What is data-driven automation?
Testing the functionality with more test cases becomes laborious as the functionality
grows. With data-driven testing you execute the test once against multiple sets of data
(test cases) and can see for which data the test passed and for which it failed. This
feature is available in WinRunner as a data-driven test, where the data can be taken
from an Excel sheet or a text file.
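
In QTP the same data-driven idea is usually expressed with the built-in DataTable object; a minimal sketch (the sheet name Action1 and the column UserName are assumptions):

```vbscript
Dim i, rowCount, userName
rowCount = DataTable.GetSheet("Action1").GetRowCount

For i = 1 To rowCount
    DataTable.GetSheet("Action1").SetCurrentRow i
    userName = DataTable.Value("UserName", "Action1")  ' read the current row's data
    ' ... drive the application with userName and verify the outcome ...
    Reporter.ReportEvent micDone, "Iteration " & i, "Ran with " & userName
Next
```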
7. What are the main attributes of test automation?
Software test automation attributes: Maintainability - the effort needed to update the
test automation suites for each new release. Reliability - the accuracy and repeatability
of the test automation. Flexibility - the ease of working with all the different kinds of
automation testware. Efficiency - the total cost related to the effort needed for the
automation. Portability - the ability of the automated tests to run on different
environments. Robustness - the effectiveness of automation on an unstable or rapidly
changing system. Usability - the extent to which the automation can be used by different
types of users.
8. Does automation replace manual testing?
There can be functionality which cannot be tested with an automated tool, so it has to be
tested manually; therefore manual testing can never be replaced. (We can write scripts
for negative testing as well, but it is a hectic task.) When we talk about a real
environment, we do negative testing manually.
9. How will you choose a tool for test automation?
Choosing a tool depends on many things: 1. The application to be tested 2. The test
environment 3. Scope and limitations of the tool 4. Features of the tool 5. Cost of the
tool 6. Whether the tool is compatible with your application, i.e., the tool should be
able to interact with your application 7. Ease of use
10. How will you evaluate the tool for test automation?
We need to concentrate on the features of the tool and how they could be beneficial for
our project. The additional new features and enhancements of existing features will also
help.
11. What are main benefits of test automation?
Fast, reliable, comprehensive, and reusable.
12. What could go wrong with test automation?
1. The wrong choice of automation tool for certain technologies 2. The wrong set of tests automated
13. How will you describe testing activities?
Testing activities start from the elaboration phase. The various testing activities are:
preparing the test plan, preparing test cases, executing the test cases, logging the bugs,
validating the bugs and taking appropriate action on them, and automating the test cases.
14. What testing activities may you want to automate?
1. Automate all the high-priority test cases which need to be executed as part of
regression testing for each build cycle.
15. Describe common problems of test automation.
The common problems are: 1. Maintenance of old scripts when there is a feature
change or enhancement 2. A change in the technology of the application will affect the
old scripts
16. What types of scripting techniques for test automation do you know?
5 types of scripting techniques: Linear, Structured, Shared, Data-Driven, and Keyword-Driven
17. What are the principles of good testing scripts for automation?
1. Proper coding standards 2. A standard format for defining functions, exception
handlers, etc. 3. Comments for functions 4. Proper error-handling mechanisms 5.
Appropriate synchronization techniques
18. What tools are available to support testing during the software development life
cycle?
For regression and load/stress testing, tools such as QTP, LoadRunner, Rational Robot,
WinRunner, Silk, TestComplete, and Astra are available in the market.
- For defect tracking, Bugzilla and TestRunner are available.
19. Can the activities of test case design be automated?

As I understand it, test case design is about formulating the steps to be carried out to verify
something about the application under test, and this cannot be automated. However, the
mechanical part - such as putting the test results into an Excel sheet - can be automated.
20. What are the limitations of automating software testing?
Hard-to-create environments like “out of memory”, “invalid input/reply”, and “corrupt
registry entries” make applications behave poorly, and existing automated tools can’t
force these conditions - they simply test your application in a “normal” environment.
21. What skills are needed to be a good test automator?
1. Good logic for programming 2. Analytical skills 3. A pessimistic (test-to-break) nature
22. How do you find whether tools work well with your existing system?
1. Discuss with the support officials 2. Download the trial version of the tool and
evaluate it 3. Get suggestions from people who are working with the tool
23. Describe some problems that you had with automated testing tools.
1. The inability of WinRunner to identify third-party controls like Infragistics controls
2. A change in the location of a table object will cause an "object not found" error 3.
The inability of WinRunner to execute the same script against multiple languages
24. What are the main attributes of test automation?
Maintainability, Reliability, Flexibility, Efficiency, Portability, Robustness, and
Usability - these are the main attributes in test automation.
25. What testing activities you may want to automate in a project?
Testing tools can be used for: * Sanity tests (which are repeated on every build), *
stress/load tests (you simulate a large number of users, which is manually impossible), and *
regression tests (which are done after every code change)
26. How do you find whether tools work well with your existing system?
To find this, select the suite of tests which are most important for your application. First
run them with the automated tool. Next subject the same tests to careful manual testing. If
the results coincide, you can say your testing tool has been performing well.
27. How will you test the field that generates auto-numbers of the AUT when we click
the button 'NEW' in the application?
One solution is to create a text file in a certain location, update it with the auto-generated
value each time we run the test, and compare the currently generated value with the
previous one.
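
A sketch of that solution with the FileSystemObject; the file path, the object hierarchy used to read the auto-generated value, and the assumption that the number always increases are all illustrative:

```vbscript
Const ForReading = 1, ForWriting = 2
Dim fso, f, previousValue, currentValue

' Hypothetical way of reading the auto-generated value from the AUT
currentValue = Browser("App").Page("Main").WebEdit("AutoNumber").GetROProperty("value")

Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FileExists("C:\AutoNum.txt") Then
    Set f = fso.OpenTextFile("C:\AutoNum.txt", ForReading)
    previousValue = f.ReadLine
    f.Close
    If CLng(currentValue) > CLng(previousValue) Then
        Reporter.ReportEvent micPass, "AutoNumber", currentValue & " > " & previousValue
    Else
        Reporter.ReportEvent micFail, "AutoNumber", "Value did not advance"
    End If
End If

' Save the current value for comparison on the next run
Set f = fso.OpenTextFile("C:\AutoNum.txt", ForWriting, True)
f.WriteLine currentValue
f.Close
```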
28. How will you evaluate the fields in the application under test using an automation
tool?
We can use verification points (Rational Robot) to validate the fields. E.g., using the
Object Data and Object Data Properties verification points we can validate fields.
29. Can we perform a test of a single application at the same time using different
tools on the same machine?
No. The testing tools would be unable to determine which browser instance was opened
by which tool.
30. What is the difference between web application testing and client-server testing?
State the different types of web application testing and client-server testing.
31. What is 'configuration management'?
Configuration management is a process to control and document any changes made
during the life of a project. Revision control, Change Control, and Release Control are
important aspects of Configuration Management.

32. How do you test web applications?
The basic difference in web testing is that here we have to test for URL coverage and link
coverage. Using WinRunner we can conduct web testing, but we have to make sure that
the WebTest option is selected in the Add-in Manager. Using WinRunner we cannot test XML objects.
33. What problems are encountered while testing application compatibility on
different browsers and different operating systems?
Font issues and alignment issues.
34. How exactly is testing of application compatibility on different browsers and
different operating systems done?
35. How does testing proceed when an SRS or any other document is not given?
If the SRS is not available we can perform exploratory testing. In exploratory testing the
basic module is executed and, depending on its results, the next plan is executed.
36. How do we test for severe memory leakages?
By using endurance testing. Endurance testing means checking for memory leaks or
other problems that may occur with prolonged execution.
37. What is the difference between quality assurance and testing?
Quality assurance involves the entire software development process and testing involves
operation of a system or application to evaluate the results under certain conditions. QA
is oriented to prevention and Testing is oriented to detection.
38. Why does software have bugs?
1. Miscommunication 2. Programming errors 3. Time pressures 4. Changing requirements
5. Software complexity
39. How do you do usability testing, security testing, installation testing, ad hoc, safety
and smoke testing?
40. What are memory leaks and buffer overflows? Memory leaks mean incomplete
deallocation - bugs that happen very often. Buffer overflow means data sent as input
to the server overflows the boundaries of the input area, thus causing the server to
misbehave. Buffer overflows can be exploited to attack a system.
41. What are the major differences between stress testing, load testing, and volume
testing? Stress testing means gradually increasing the load and checking the performance at each
level. Load testing means applying, at one time, more load than is expected and checking the
performance at that level. Volume testing means first applying a large initial volume of data
and checking the behaviour.

Software Testing Interview Questions - Web Testing


1. What bugs mainly come up in web testing, and what severity and priority do we
give them?
The bugs that mainly come up in web testing are cosmetic bugs on web pages, field-
validation bugs, and bugs related to scalability, throughput, and response time for
web pages.
2. What is the difference in testing a CLIENT-SERVER application and a WEB
application?

Software Testing Interview Questions - Testing Scenario

1. Testing Scenarios: How do you know that all the scenarios for testing are
covered?
By using the Requirement Traceability Matrix (RTM) we can ensure that we have
covered all the functionalities in the test coverage. The RTM is a document that traces user
requirements from analysis through implementation. The RTM can be used as a
completeness check to verify that all the requirements are present and that there are no
unnecessary/extra features, and as a maintenance guide for new personnel. We can use
a simple format in an Excel sheet where we map the functionality to the test case ID.
2. Complete Testing with Time Constraints: How do you complete the testing when
you have a time constraint?
If I am doing regression testing and I do not have sufficient time, then I have to decide
which sort of regression testing to go for: 1) Unit regression testing 2) Regional
regression testing 3) Full regression testing.
3. Given a Yahoo application, how many test cases can you write?
First we need the requirements of the Yahoo application. Test cases are written against
given requirements. So for any working web application or new application, requirements
are needed to prepare test cases. The number of test cases depends on the requirements of
the application. Note to learners: a test engineer must have knowledge of the SDLC. I
suggest learners take any one existing application and start practising by writing its
requirements.
4. Let's say we have a GUI map and scripts, and some 5 new pages get included
in the application; how do we handle that?
By integration testing.
5. A GUI contains 2 fields: Field 1 accepts the value of x, and Field 2 displays the
result of the formula a + b/c - d, where a = 0.4x, b = 1.5a, c = x, d = 2.5b. How many
system test cases would you write?
Since the GUI contains only two fields - one input and one result - a single parameterized
test case that varies x is enough, provided the data set includes the boundary value x = 0,
since c = x appears in a denominator.
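
Substituting the definitions shows why x = 0 is the one value that matters: a = 0.4x, b = 1.5a = 0.6x, c = x, d = 2.5b = 1.5x, so a + b/c - d = 0.4x + 0.6 - 1.5x = 0.6 - 1.1x for any x <> 0. A sketch of the computation under test:

```vbscript
Function ComputeResult(x)
    Dim a, b, c, d
    a = 0.4 * x
    b = 1.5 * a          ' = 0.6x
    c = x
    d = 2.5 * b          ' = 1.5x
    If c = 0 Then
        ComputeResult = Empty        ' x = 0 is the boundary case (division by zero)
    Else
        ComputeResult = a + b / c - d   ' simplifies to 0.6 - 1.1x
    End If
End Function
```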
--

Software Testing Interview Questions - eisn.net centerads Test Cases


1. How can we write a good test case?
2. For a triangle (the sum of any two sides must be greater than or equal to the third
side), what is the minimal number of test cases required?
The answer is 3: 1. Measure all sides of the triangle. 2. Add the two smallest sides of the
triangle and store the result as Res. 3. Compare Res with the largest side of the triangle.
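
The three steps above reduce to one comparison per side; as a sketch (using the question's "greater than or equal" definition, which admits degenerate, zero-area triangles):

```vbscript
Function IsValidTriangle(s1, s2, s3)
    ' Sum of every pair of sides must be >= the remaining side
    IsValidTriangle = (s1 + s2 >= s3) And (s2 + s3 >= s1) And (s1 + s3 >= s2)
End Function
```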
3. How will you check that your test cases covered all the requirements?
By using a traceability matrix. A traceability matrix is the matrix showing the
relationship between the requirements and the test cases.

Software Testing - Frequently Asked Questions Part 2


* What makes a good Software Test engineer? * What makes a good Software QA
engineer? * What makes a good QA or Test manager? * What's the role of
documentation in QA? * What's the big deal about 'requirements'? * What steps
are needed to develop and run software tests? * What's a 'test plan'? * What's a
'test case'? * What should be done after a bug is found? * What is 'configuration
management'? * What if the software is so buggy it can't really be tested at all? *
How can it be known when to stop testing? * What if there isn't enough time for
thorough testing? * What if the project isn't big enough to justify extensive
testing? * What can be done if requirements are changing continuously? * What
if the application has functionality that wasn't in the requirements? * How can
QA processes be implemented without stifling productivity? * What if an
organization is growing so fast that fixed QA processes are impossible? * How does
a client/server environment affect testing? * How can World Wide Web sites be
tested? * How is testing affected by object-oriented designs? * What is Extreme
Programming and what's it got to do with testing? What makes a good Software
Test engineer? * A good test engineer has a 'test to break' attitude, an ability to take the
point of view of the customer, a strong desire for quality, and an attention to detail. Tact
and diplomacy are useful in maintaining a cooperative relationship with developers, and
an ability to communicate with both technical (developers) and non-technical (customers,
management) people is useful. Previous software development experience can be helpful
as it provides a deeper understanding of the software development process, gives the
tester an appreciation for the developers' point of view, and reduces the learning curve in
automated test tool programming. Judgement skills are needed to assess high-risk areas
of an application on which to focus testing efforts when time is limited.
1. What makes a good Software QA engineer? * The same qualities a good tester has
are useful for a QA engineer. Additionally, they must be able to understand the entire
software development process and how it can fit into the business approach and goals of
the organization. Communication skills and the ability to understand various sides of
issues are important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as well as to
see 'what's missing' is important for inspections and reviews.
2. What makes a good QA or Test manager? A good QA, test, or
QA/Test(combined) manager should: * be familiar with the software development
process * be able to maintain enthusiasm of their team and promote a positive
atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or
preventing problems) * be able to promote teamwork to increase productivity * be
able to promote cooperation between software, test, and QA engineers * have the
diplomatic skills needed to promote improvements in QA processes * have the ability to
withstand pressures and say 'no' to other managers when quality is insufficient or QA
processes are not being adhered to * have people judgement skills for hiring and keeping
skilled personnel * be able to communicate with technical and non-technical people,
engineers, managers, and customers. * be able to run meetings and keep them focused.
3. What's the role of documentation in QA? * Critical. (Note that documentation can
be electronic, not necessarily paper, may be embedded in code comments, etc.) QA
practices should be documented such that they are repeatable. Specifications, designs,
business rules, inspection reports, configurations, code changes, test plans, test cases, bug
reports, user manuals, etc. should all be documented in some form. There should ideally
be a system for easily finding and obtaining information and determining what
documentation will have a particular piece of information. Change management for
documentation should be used if possible.

4. What's the big deal about 'requirements'? * One of the most reliable methods of
ensuring problems, or failure, in a large, complex software project is to have poorly
documented requirements specifications. Requirements are the details describing an
application's externally-perceived functionality and properties. Requirements should be
clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable
requirement would be, for example, 'user-friendly' (too subjective). A testable
requirement would be something like 'the user must enter their previously-assigned
password to access the application'. Determining and organizing requirements details in a
useful and efficient way can be a difficult effort; different methods are available
depending on the particular project. Many books are available that describe various
approaches to this task. (See the Books section's 'Software Requirements Engineering'
category for books on Software Requirements.) * Care should be taken to involve ALL
of a project's significant 'customers' in the requirements process. 'Customers' could be in-
house personnel or out, and could include end-users, customer acceptance testers,
customer contract officers, customer management, future software maintenance
engineers, salespeople, etc. Anyone who could later derail the project if their
expectations aren't met should be included if possible. * Organizations vary
considerably in their handling of requirements specifications. Ideally, the requirements
are spelled out in a document with statements such as 'The product shall.....'. 'Design'
specifications should not be confused with 'requirements'; design specifications should be
traceable back to the requirements. * In some organizations requirements may end up in
high level project plans, functional specification documents, in design documents, or in
other documents at various levels of detail. No matter what they are called, some type of
documentation with detailed requirements will be needed by testers in order to properly
plan and execute tests. Without such documentation, there will be no clear-cut way to
determine if a software application is performing correctly. * 'Agile' methods such as
XP use methods requiring close interaction and cooperation between programmers and
customers/end-users to iteratively develop requirements. The programmer uses 'Test first'
development to first create automated unit testing code, which essentially embodies the
requirements.
5. What steps are needed to develop and run software tests? The following are
some of the steps to consider: * Obtain requirements, functional design, and internal
design specifications and other necessary documents. * Obtain budget and schedule
requirements. * Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes (such as release processes, change
processes, etc.) * Determine project context, relative to the existing quality culture of
the organization and business, and how it might impact testing scope, approaches, and
methods. * Identify application's higher-risk aspects, set priorities, and determine scope
and limitations of tests. * Determine test approaches and methods - unit, integration,
functional, system, load, usability tests, etc. * Determine test environment requirements
(hardware, software, communications, etc.) * Determine testware requirements
(record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.) *
Determine test input data requirements. * Identify tasks, those responsible for tasks, and
labor requirements. * Set schedule estimates, timelines, milestones. * Determine input
equivalence classes, boundary value analyses, error classes. * Prepare test plan
document and have needed reviews/approvals. * Write test cases. * Have needed
reviews/inspections/approvals of test cases. * Prepare test environment and testware,

obtain needed user manuals/reference documents/configuration guides/installation
guides, set up test tracking processes, set up logging and archiving processes, set up or
obtain test input data. * Obtain and install software releases. * Perform tests. *
Evaluate and report results. * Track problems/bugs and fixes. * Retest as needed. *
Maintain and update test plans, test cases, test environment, and testware through life
cycle.
6. What's a 'test plan'? A software project test plan is a document that describes the
objectives, scope, approach, and focus of a software testing effort. The process of
preparing a test plan is a useful way to think through the efforts needed to validate the
acceptability of a software product. The completed document will help people outside the
test group understand the 'why' and 'how' of product validation. It should be thorough
enough to be useful but not so thorough that no one outside the test group will read it.
The following are some of the items that might be included in a test plan, depending on
the particular project: * Title * Identification of software including version/release
numbers. * Revision history of document including authors, dates, approvals. * Table
of Contents. * Purpose of document, intended audience * Objective of testing effort *
Software product overview * Relevant related document list, such as requirements,
design documents, other test plans, etc. * Relevant standards or legal requirements *
Traceability requirements * Relevant naming conventions and identifier conventions *
Overall software project organization and personnel/contact-info/responsibilties * Test
organization and personnel/contact-info/responsibilities * Assumptions and
dependencies * Project risk analysis * Testing priorities and focus * Scope and
limitations of testing * Test outline - a decomposition of the test approach by test type,
feature, functionality, process, system, module, etc. as applicable * Outline of data input
equivalence classes, boundary value analysis, error classes * Test environment -
hardware, operating systems, other required software, data configurations, interfaces to
other systems * Test environment validity analysis - differences between the test and
production systems and their impact on test validity. * Test environment setup and
configuration issues * Software migration processes * Software CM processes * Test
data setup requirements * Database setup requirements * Outline of system-
logging/error-logging/other capabilities, and tools such as screen capture software, that
will be used to help describe and report bugs * Discussion of any specialized software
or hardware tools that will be used by testers to help track the cause or source of bugs *
Test automation - justification and overview * Test tools to be used, including versions,
patches, etc. * Test script/test code maintenance processes and version control *
Problem tracking and resolution - tools and processes * Project test metrics to be used
* Reporting requirements and testing deliverables * Software entrance and exit criteria
* Initial sanity testing period and criteria * Test suspension and restart criteria *
Personnel allocation * Personnel pre-training needs * Test site/location * Outside test
organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues. * Relevant proprietary, classified, security, and
licensing issues. * Open issues * Appendix - glossary, acronyms, etc.
7. What's a 'test case'? * A test case is a document that describes an input, action, or
event and an expected response, to determine if a feature of an application is working
correctly. A test case should contain particulars such as test case identifier, test case
name, objective, test conditions/setup, input data requirements, steps, and expected
results. * Note that the process of developing test cases can help find problems in the

requirements or design of an application, since it requires completely thinking through
the operation of the application. For this reason, it's useful to prepare test cases early in
the development cycle if possible.
8. What should be done after a bug is found? * The bug needs to be communicated
and assigned to developers that can fix it. After the problem is resolved, fixes should be
re-tested, and determinations made regarding requirements for regression testing to check
that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it
should encapsulate these processes. A variety of commercial problem-
tracking/management software tools are available (see the 'Tools' section for web
resources with listings of such tools). The following are items to consider in the tracking
process: * Complete information such that developers can understand the bug, get an
idea of its severity, and reproduce it if necessary. * Bug identifier (number, ID, etc.) *
Current bug status (e.g., 'Released for Retest', 'New', etc.) * The application name or
identifier and version * The function, module, feature, object, screen, etc. where the
bug occurred * Environment specifics, system, platform, relevant hardware specifics *
Test case name/number/identifier * One-line bug description * Full bug description *
Description of steps needed to reproduce the bug if not covered by a test case or if the
developer doesn't have easy access to the test case/test script/test tool * Names and/or
descriptions of file/data/messages/etc. used in test * File excerpts/error messages/log
file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the
problem * Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
* Was the bug reproducible? * Tester name * Test date * Bug reporting date *
Name of developer/group/organization the problem is assigned to * Description of
problem cause * Description of fix * Code section/file/module/class/method that was
fixed * Date of fix * Application version that contains the fix * Tester responsible for
retest * Retest date * Retest results * Regression testing requirements * Tester
responsible for regression tests * Regression testing results * A reporting or tracking
process should enable notification of appropriate personnel at various stages. For
instance, testers need to know when retesting is needed, developers need to know when
bugs are found and how to get the needed information, and reporting/summary
capabilities are needed for managers.
8. What is 'configuration management'? * Configuration management covers the
processes used to control, coordinate, and track: code, requirements, documentation,
problems, change requests, designs, tools/compilers/libraries/patches, changes made to
them, and who makes the changes. (See the 'Tools' section for web resources with
listings of configuration management tools. Also see the Books section's 'Configuration
Management' category for useful books with more information.)
9. What if the software is so buggy it can't really be tested at all? * The best bet in
this situation is for the testers to go through the process of reporting whatever bugs or
blocking-type problems initially show up, with the focus being on critical bugs. Since
this type of problem can severely affect schedules, and indicates deeper problems in the
software development process (such as insufficient unit testing or insufficient integration
testing, poor design, improper build or release procedures, etc.) managers should be
notified, and provided with some documentation as evidence of the problem.
10. How can it be known when to stop testing? This can be difficult to determine.
Many modern software applications are so complex, and run in such an interdependent
environment, that complete testing can never be done. Common factors in deciding when

to stop are: * Deadlines (release deadlines, testing deadlines, etc.) * Test cases
completed with certain percentage passed * Test budget depleted * Coverage of
code/functionality/requirements reaches a specified point * Bug rate falls below a
certain level * Beta or alpha testing period ends
11. What if there isn't enough time for thorough testing? * Use risk analysis to
determine where testing should be focused. Since it's rarely possible to test every
possible aspect of an application, every possible combination of events, every
dependency, or everything that could go wrong, risk analysis is appropriate to most
software development projects. This requires judgement skills, common sense, and
experience. (If warranted, formal methods are also available.) Considerations can
include: * Which functionality is most important to the project's intended purpose? *
Which functionality is most visible to the user? * Which functionality has the largest
safety impact? * Which functionality has the largest financial impact on users? *
Which aspects of the application are most important to the customer? * Which aspects
of the application can be tested early in the development cycle? * Which parts of the
code are most complex, and thus most subject to errors? * Which parts of the
application were developed in rush or panic mode? * Which aspects of similar/related
previous projects caused problems? * Which aspects of similar/related previous projects
had large maintenance expenses? * Which parts of the requirements and design are
unclear or poorly thought out? * What do the developers think are the highest-risk
aspects of the application? * What kinds of problems would cause the worst publicity?
* What kinds of problems would cause the most customer service complaints? * What
kinds of tests could easily cover multiple functionalities? * Which tests will have the
best high-risk-coverage to time-required ratio?
12. What if the project isn't big enough to justify extensive testing? * Consider the
impact of project errors, not the size of the project. However, if extensive testing is still
not justified, risk analysis is again needed and the same considerations as described
previously in 'What if there isn't enough time for thorough testing?' apply. The tester
might then do ad hoc testing, or write up a limited test plan based on the risk analysis.
13. What can be done if requirements are changing continuously? A common
problem and a major headache. * Work with the project's stakeholders early on to
understand how requirements might change so that alternate test plans and strategies can
be worked out in advance, if possible. * It's helpful if the application's initial design
allows for some adaptability so that later changes do not require redoing the application
from scratch. * If the code is well-commented and well-documented this makes changes
easier for the developers. * Use rapid prototyping whenever possible to help customers
feel sure of their requirements and minimize changes. * The project's initial schedule
should allow for some extra time commensurate with the possibility of changes. * Try
to move new requirements to a 'Phase 2' version of an application, while using the
original requirements for the 'Phase 1' version. * Negotiate to allow only easily-
implemented new requirements into the project, while moving more difficult new
requirements into future versions of the application. * Be sure that customers and
management understand the scheduling impacts, inherent risks, and costs of significant
requirements changes. Then let management or the customers (not the developers or
testers) decide if the changes are warranted - after all, that's their job. * Balance the
effort put into setting up automated testing with the expected effort required to re-do
them to deal with changes. * Try to design some flexibility into automated test scripts.

#411,AnnapurnaBlock, AdithyaEnclave, Ameerpet, Hyderabad. 22


Ph:040-64510222 E-Mail:
platinum_hyd@yahoo.com
Platinum Technologies Testing Tools
* Focus initial automated testing on application aspects that are most likely to remain unchanged.
* Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
* Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level, generic-type test plans).
* Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
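One practical way to build the flexibility recommended above into automated tests is to keep the test data separate from the test logic, so a requirements change means editing a data table rather than rewriting scripts. A minimal sketch (the discount rule and figures are hypothetical):

```python
# Data-driven test sketch: expected values live in a data table, so
# requirement changes mean editing data, not test logic.
# The discount rule and figures below are invented for illustration.

def discounted_price(price, percent):
    """Code under test: apply a percentage discount."""
    return round(price * (100 - percent) / 100, 2)

# Data table: (price, discount %, expected result).
# Update rows here when the requirements change.
CASES = [
    (100.0, 10, 90.0),
    (250.0, 20, 200.0),
    (80.0,   0, 80.0),
]

def run_cases():
    """Return the (price, percent) pairs that failed; empty means all passed."""
    return [(p, d) for p, d, want in CASES if discounted_price(p, d) != want]

print("failures:", run_cases())
```

The same separation is what the QTP script/data-file architecture in this document's earlier section aims at.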
14. What if the application has functionality that wasn't in the requirements? * It
may take serious effort to determine if an application has significant unexpected or
hidden functionality, and it would indicate deeper problems in the software development
process. If the functionality isn't necessary to the purpose of the application, it should be
removed, as it may have unknown impacts or dependencies that were not taken into
account by the designer or the customer. If not removed, design information will be
needed to determine added testing needs or regression testing needs. Management should
be made aware of any significant added risks as a result of the unexpected functionality.
If the functionality only affects areas such as minor improvements in the user interface,
for example, it may not be a significant risk.
15. How can QA processes be implemented without stifling productivity? * By
implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures,
productivity will be improved instead of stifled. Problem prevention will lessen the need
for problem detection, panics and burn-out will decrease, and there will be improved
focus and less wasted effort. At the same time, attempts should be made to keep
processes simple and efficient, minimize paperwork, promote computer-based processes
and automated tracking and reporting, minimize time required in meetings, and promote
training as part of the QA process. However, no one - especially talented technical types
- likes rules or bureaucracy, and in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development will be needed, but less
time will be required for late-night bug-fixing and calming of irate customers. (See the
Books section's 'Software QA', 'Software Engineering', and 'Project Management'
categories for useful books with more information.)
16. What if an organization is growing so fast that fixed QA processes are
impossible? * This is a common problem in the software industry, especially in new
technology areas. There is no easy solution in this situation, other than:
* Hire good people
* Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
* Everyone in the organization should be clear on what 'quality' means to the customer
17. How does a client/server environment affect testing? * Client/server applications
can be quite complex due to the multiple dependencies among clients, data
communications, hardware, and servers. Thus testing requirements can be extensive.
When time is limited (as it usually is) the focus should be on integration and system
testing. Additionally, load/stress/performance testing may be useful in determining
client/server application limitations and capabilities. There are commercial tools to assist
with such testing. (See the 'Tools' section for web resources with listings that include
these kinds of test tools.)
18. How can World Wide Web sites be tested? * Web sites are essentially
client/server applications - with web servers and 'browser' clients. Consideration should
be given to the interactions between html pages, TCP/IP communications, Internet
connections, firewalls, applications that run in web pages (such as applets, javascript,
plug-in applications), and applications that run on the server side (such as cgi scripts,
database interfaces, logging applications, dynamic page generators, asp, etc.).
Additionally, there are a wide variety of servers and browsers, various versions of each,
small but sometimes significant differences between them, variations in connection
speeds, rapidly changing technologies, and multiple standards and protocols. The end
result is that testing for web sites can become a major ongoing effort. Other
considerations might include:
* What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
* Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
* What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
* Will down time for server and content maintenance/upgrades be allowed? How much?
* How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
* What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
* Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
* Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
* How will internal and external links be validated and updated? How often?
* Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
* How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
* How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
* Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
* The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
* Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
* All pages should have links external to the page; there should be no dead-end pages.
* The page owner, revision date, and a link to a contact person or organization should be included on each page.
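Link validation is one of the considerations above that is easy to automate in part. A minimal sketch using only Python's standard library, which extracts link targets from a page so they can then be checked against a site map or fetched periodically (the sample HTML is invented):

```python
# Minimal link-extraction sketch using only the standard library.
# The sample HTML below is invented; a real checker would fetch pages
# and verify each collected href resolves.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the href of every anchor tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="index.php">Home</a> <a href="faq1.php">FAQ</a></body></html>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # the targets to validate
```

Commercial and free link-checking tools do essentially this at scale, following links recursively and reporting dead ends.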
20. What is Extreme Programming and what's it got to do with testing? * Extreme
Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the
approach in his book 'Extreme Programming Explained' (See the Softwareqatest.com
Books page.). Testing ('extreme testing') is a core aspect of Extreme Programming.
Programmers are expected to write unit and functional test code first - before the
application is developed. Test code is under source control along with the rest of the
code. Customers are expected to be an integral part of the project team and to help
develop scenarios for acceptance/black box testing. Acceptance tests are preferably
automated, and are modified and rerun for each of the frequent development iterations.
QA and test personnel are also required to be an integral part of the project team.
Detailed requirements documentation is not used, and frequent re-scheduling, re-
estimating, and re-prioritizing is expected.
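XP's "test first" practice can be sketched with a small example. The feature, tax rate, and figures below are hypothetical: the tests are written first, and the function is then implemented to make them pass.

```python
# Sketch of XP's "test first" practice: the tests in AddTaxTest are
# written before the code exists, and add_tax() is then implemented
# to satisfy them. Feature and figures are invented for illustration.
import unittest

def add_tax(amount, rate=0.08):
    """Implementation written after (and driven by) the tests below."""
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    def test_default_rate(self):
        self.assertEqual(add_tax(100.0), 108.0)

    def test_zero_rate(self):
        self.assertEqual(add_tax(50.0, rate=0.0), 50.0)

# Run programmatically rather than via unittest.main(), so this also
# works when the module is imported.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTaxTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Keeping such tests under source control alongside the code, as XP prescribes, lets them be rerun automatically on every iteration.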
Software Testing Frequently Asked Questions Part 1
1. What is 'Software Quality Assurance'? Software QA involves the entire software
development process - monitoring and improving the process, making sure that any
agreed-upon standards and procedures are followed, and ensuring that problems are
found and dealt with. It is oriented to 'prevention'. (See the Books section for a list of
useful books on Software Quality Assurance.)
2.What is 'Software Testing'? Testing involves operation of a system or application
under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of
the application while using hardware B, and does C, then D should happen'). The
controlled conditions should include both normal and abnormal conditions. Testing
should intentionally attempt to make things go wrong to determine if things happen when
they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common
are project teams that include a mix of testers and developers who work closely together,
with overall QA processes monitored by project managers. It will depend on what best
fits an organization's size and business structure.
3. What are some recent major computer system failures caused by software bugs?
* Media reports in January of 2005 detailed severe problems with a $170 million high-
profile U.S. government IT systems project. Software testing was one of the five major
problem areas according to a report of the commission reviewing the project. Studies
were under way to determine which, if any, portions of the project could be salvaged. *
In July 2004 newspapers reported that a new government welfare management system in
Canada costing several hundred million dollars was unable to handle a simple benefits
rate increase after being put into live operation. Reportedly the original contract allowed
for only 6 weeks of acceptance testing and the system was never tested for its ability to
handle a rate increase. * Millions of bank accounts were impacted by errors due to
installation of inadequately tested software code in the transaction processing system of a
major North American bank, according to mid-2004 news reports. Articles about the
incident stated that it took two weeks to fix all the resulting errors, that additional
problems resulted when the incident drew a large number of e-mail phishing attacks
against the bank's customers, and that the total cost of the incident could exceed $100
million. * A bug in site management software utilized by companies with a significant
percentage of worldwide web traffic was reported in May of 2004. The bug resulted in
performance problems for many of the sites simultaneously and required disabling of the
software until the bug was fixed. * According to news reports in April of 2004, a
software bug was determined to be a major contributor to the 2003 Northeast blackout,
the worst power system failure in North American history. The failure involved loss of
electrical power to 50 million customers, forced shutdown of 100 power plants, and
economic losses estimated at $6 billion. The bug was reportedly in one utility company's
vendor-supplied power monitoring and management system, which was unable to
correctly handle and report on an unusual confluence of initially localized events. The
error was found and corrected after examining millions of lines of code. * In early
2004, news reports revealed the intentional use of a software bug as a counter-espionage
tool. According to the report, in the early 1980's one nation surreptitiously allowed a
hostile nation's espionage service to steal a version of sophisticated industrial software
that had intentionally-added flaws. This eventually resulted in major industrial disruption
in the country that used the stolen flawed software. * A major U.S. retailer was
reportedly hit with a large government fine in October of 2003 due to web site errors that
enabled customers to view one anothers' online orders. * News stories in the fall of
2003 stated that a manufacturing company recalled all their transportation products in
order to fix a software problem causing instability in certain circumstances. The
company found and reported the bug itself and initiated the recall procedure in which a
software upgrade fixed the problems. * In January of 2001 newspapers reported that a
major European railroad was hit by the aftereffects of the Y2K bug. The company found
that many of their newer trains would not run due to their inability to recognize the date
'31/12/2000'; the trains were started by altering the control system's date settings. *
News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn't work.
* In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district's CIO was fired. The school district decided to reinstate its original 25-year-old
system for at least a year until the bugs were worked out of the new system by the
software vendors. * In October of 1999 the $125 million NASA Mars Climate Orbiter
spacecraft was believed to be lost in space due to a simple data conversion error. It was
determined that spacecraft software used certain data in English units that should have
been in metric units. Among other tasks, the orbiter was to serve as a communications
relay for the Mars Polar Lander mission, which failed for unknown reasons in December
1999. Several investigating panels were convened to determine the process failures that
allowed the error to go undetected. * Bugs in software supporting a large commercial
high-speed data network affected 70,000 business customers over a period of 8 days in
August of 1999. Among those affected was the electronic trading system of the largest
U.S. futures exchange, which was shut down for most of a week as a result of the
outages. * January 1998 news reports told of software problems at a major U.S.
telecommunications company that resulted in no charges for long distance calls for a
month for 400,000 customers. The problem went undetected until customers called up
with questions about their bills.
4. Why is it often hard for management to get serious about quality assurance? * Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord.
5. Why does software have bugs? * Miscommunication or
no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
* Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
* Programming errors - programmers, like anyone else, can make mistakes.
* Changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected,
etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ. Also see information about 'agile' approaches such as XP, also in Part 2 of the FAQ.
* Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
* Egos - people prefer to say things like:
  * 'no problem'
  * 'piece of cake'
  * 'I can whip that out in a few hours'
  * 'it should be easy to update that old code'
instead of:
  * 'that adds a lot of complexity and we could end up making a lot of mistakes'
  * 'we have no idea if we can do that; we'll wing it'
  * 'I can't estimate how long it will take, until I take a close look at it'
  * 'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
* Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
* Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
6. How can new Software QA processes be introduced in an existing organization?
* A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
* Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
* For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
* The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.
7.What is verification? validation? * Verification typically involves reviews and
meetings to evaluate documents, plans, code, requirements, and specifications. This can
be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation
typically involves actual testing and takes place after verifications are completed. The
term 'IV & V' refers to Independent Verification and Validation.
8.What is a 'walkthrough'? * A 'walkthrough' is an informal meeting for evaluation
or informational purposes. Little or no preparation is usually required.
9.What's an 'inspection'? * An inspection is more formalized than a 'walkthrough',
typically with 3-8 people including a moderator, reader, and a recorder to take notes. The
subject of the inspection is typically a document such as a requirements spec or a test
plan, and the purpose is to find problems and see what's missing, not to fix anything.
Attendees should prepare for this type of meeting by reading thru the document; most
problems will be found during this preparation. The result of the inspection meeting
should be a written report.
10. What kinds of testing should be considered?
* Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
* White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
* Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
* Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
* Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
* Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
* System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
* End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
* Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
* Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
* Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
* Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
* Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database
system, etc.
* Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
* Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
* Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
* Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
* Failover testing - typically used interchangeably with 'recovery testing'.
* Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
* Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
* Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
* Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
* Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
* User acceptance testing - determining if software is satisfactory to an end-user or customer.
* Comparison testing - comparing software weaknesses and strengths to competing products.
* Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
* Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
* Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
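The mutation-testing idea can be illustrated with a hand-made 'mutant'; real mutation tools generate and run these automatically, and everything below is invented for illustration.

```python
# Mutation-testing sketch: introduce a deliberate bug (a "mutant") and
# check whether the existing test data detects it. All code and data
# here are hand-made illustration; real tools automate mutant creation.

def is_even(n):            # original code under test
    return n % 2 == 0

def is_even_mutant(n):     # mutant: == deliberately changed to !=
    return n % 2 != 0

# Existing test data: (input, expected result) pairs.
TEST_DATA = [(2, True), (3, False), (0, True)]

def kills(candidate):
    """True if at least one test case catches ('kills') the candidate."""
    return any(candidate(n) != expected for n, expected in TEST_DATA)

print("mutant detected:", kills(is_even_mutant))
```

A test set that kills all (non-equivalent) mutants gives some confidence that the test data is actually exercising the logic it claims to cover.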
11.What are 5 common solutions to problems in the software development process?
* Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous coordination with customers/end-users is necessary.
* Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
* Adequate testing - start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities.
* Stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on.
* Communication - require walkthroughs and inspections when appropriate; make extensive use of group
communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes if possible to clarify customers' expectations.
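The 'early unit testing by developers' point above can be sketched with a minimal developer-written test, here using Python's standard unittest module. The function under test and its behavior are hypothetical, chosen only to show the shape of such a test:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """Developer-written unit tests, runnable from day one of coding."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (e.g. from a build script).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

Tests like these can run on every build, which is what makes 're-test after fixes or changes' cheap enough to actually do.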
12.What is software 'quality'?
* Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
13.What is 'good code'?
* 'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:
* Minimize or eliminate use of global variables.
* Use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
* Use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
* Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
* Function descriptions should be clearly spelled out in comments preceding a function's code.
* Organize code for readability.
* Use whitespace generously - vertically and horizontally.
* Each line of code should contain 70 characters max.
* One code statement per line.
* Coding style should be consistent throughout a program (e.g., use of brackets, indentation, naming conventions, etc.).
* In adding comments, err on the side of too many rather than too few; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
* No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or, if possible, a separate flow chart and detailed program documentation.
* Make extensive use of error handling procedures and status and error logging.
* For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).
* For C++, keep class methods small; less than 50 lines of code per method is preferable.
* For C++, make liberal use of exception handlers.
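Although the rules above are phrased for C and C++, most are language-agnostic. A small Python sketch of several of them together - descriptive names, a short commented function, explicit error handling, and status logging (the function and logger names are invented for the example):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payroll")  # example module name

def calculate_net_salary(gross_salary, tax_rate_percent):
    """Return the net salary after deducting tax.

    Illustrates several of the rules above: a descriptive name,
    a body well under 50 lines, a comment block preceding the logic,
    explicit error handling, and status/error logging.
    """
    if gross_salary < 0 or not 0 <= tax_rate_percent <= 100:
        log.error("invalid input: gross=%s, tax=%s",
                  gross_salary, tax_rate_percent)
        raise ValueError("gross salary must be >= 0 and tax rate in [0, 100]")
    net_salary = gross_salary * (1 - tax_rate_percent / 100)
    log.info("net salary computed: %.2f", net_salary)
    return net_salary

print(calculate_net_salary(1000, 25))  # 750.0
```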

14.What is 'good design'?
* 'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status-logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules of thumb include:
* The program should act in a way that least surprises the user.
* It should always be evident to the user what can be done next and how to exit.
* The program shouldn't let the users do something stupid without warning them.
15.What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?
* SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
* CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
* Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
* Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
* Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
* Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
* Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
* Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.
* ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment
is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/
* IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
* ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
* Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
16.What is the 'software life cycle'?
* The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
17.Will automated testing tools make testing easier?
* Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or on-going long-term projects, they can be valuable.
* A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
* Another common approach for automation of functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases.
* Other automated tools can include:
* Code analyzers - monitor code complexity, adherence to standards, etc.
* Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
* Memory analyzers - such as bounds-checkers and leak detectors.
* Load/performance test tools - for testing client/server and web applications under various load levels.
* Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs
work, and a web site's interactions are secure.
* Other tools - for test case management, documentation management, bug reporting, and configuration management.
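The keyword-driven approach described above can be sketched as a toy driver. The 'application', keywords, and test rows below are all invented for the illustration; in practice the rows would come from a spreadsheet or data file maintained separately from the driver:

```python
# A toy keyword-driven test driver: the test data (rows of
# keyword + arguments) is kept separate from the driver code.

form = {}   # stands in for the application under test

def enter_value(field, value):
    """Keyword: put a value into a named field of the 'application'."""
    form[field] = value

def verify_value(field, expected):
    """Keyword: check that a named field holds the expected value."""
    assert form.get(field) == expected, f"{field}: got {form.get(field)!r}"

KEYWORDS = {"enter_value": enter_value, "verify_value": verify_value}

# Rows as they might appear in a spreadsheet: keyword, then arguments.
test_rows = [
    ("enter_value", "username", "alice"),
    ("enter_value", "amount", "250"),
    ("verify_value", "username", "alice"),
    ("verify_value", "amount", "250"),
]

for keyword, *args in test_rows:
    KEYWORDS[keyword](*args)   # the driver 'reads' the data to perform tests

print("all keyword rows executed")
```

Because only the rows change when the test scenario changes, maintenance shifts from editing driver code to editing data, which is the main selling point of this approach.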

#411,AnnapurnaBlock, AdithyaEnclave, Ameerpet, Hyderabad. 34


Ph:040-64510222 E-Mail:
platinum_hyd@yahoo.com