Software QA Testing Questions
Software requirements are often unclear because there is miscommunication as to what the software should or shouldn't
do.
Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows
interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer
size of applications.
Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact
of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The
changes require redesign of the software and rescheduling of resources, some of the work already completed has to
be redone or discarded, and hardware requirements can be affected, too.
Bug tracking itself can introduce errors, because keeping track of a large number of changes is a complex task.
Time pressures can cause problems: scheduling software projects is not easy, it often requires a lot of
guesswork, and when deadlines loom and the crunch comes, mistakes will be made.
Code documentation is tough to maintain and it is also tough to modify code that is poorly documented. The result is bugs.
Sometimes there is no incentive for programmers and software engineers to document their code and write clearly
documented, understandable code. Sometimes developers get kudos for quickly turning out code, or programmers and
software engineers feel they cannot have job security if everyone can understand the code they write, or they believe if the
code was hard to write, it should be hard to read.
Software development tools, including visual tools, class libraries, compilers, and scripting tools, can introduce their own bugs.
Other times the tools are poorly documented, which can create additional bugs.
Requirements are poorly written when they are unclear, incomplete, too general, or not testable; such requirements will
cause problems.
The schedule is unrealistic if too much work is crammed in too little time.
Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system
crashes.
It's extremely common that new features are added after development is underway.
Miscommunication means either that the developers don't know what is needed or that customers have unrealistic expectations;
either way, problems are guaranteed.
With such tools, it is a very time-consuming task to continuously update the scripts. Another problem with such tools is the
interpretation of the results (screens, data, logs, etc.), which can also be time-consuming.
Q13. Give me five solutions to problems that occur during software development.
A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.
1. Ensure the requirements are solid: clear, complete, detailed, cohesive, attainable, and testable. All players should agree to
the requirements. Use prototypes to help nail down requirements.
2. Have realistic schedules. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and
documentation. Personnel should be able to complete the project without burning out.
3. Do adequate testing. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and
bug fixing.
4. Avoid new features. Stick to the initial requirements as much as possible. Be prepared to defend the design against changes
and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure
they are adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified
and customers can see what to expect; this will minimize changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking
tools, and change-management tools. Ensure documentation is available and up to date, preferably electronic rather than paper.
Promote teamwork and cooperation.
Good test engineers have a "test to break" attitude. We take the point of view of the customer, have a strong desire for quality,
and pay attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an
ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it
provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point
of view, and reduces the learning curve in automated test tool programming.
Resume readers don't like eye strain either. If the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes. Three, there are
lots of resumes out there these days, and that is also part of the problem. Four, in light of the current scanning scenario, more than one
page is not a deterrent because many will scan your resume into their database. Once the resume is in there and searchable, you have
accomplished one of the goals of resume distribution. Five, resume readers don't like to guess and most won't call you to clarify what is
on your resume. Generally speaking, your resume should tell your story. If you're a college graduate looking for your first job, a one-page
resume is just fine. If you have a longer story, the resume needs to be longer. Please put your experience on the resume so resume
readers can tell when and for whom you did what. Short resumes -- for people long on experience -- are not appropriate. The real
audience for these short resumes is people with short attention spans and low IQ. I assure you that when your resume gets into the right
hands, it will be read thoroughly.
Q17. What makes a good QA/Test Manager?
A: A good QA/Test Manager is familiar with the software development process; is able to maintain the enthusiasm of the team and promote a
positive atmosphere; is able to promote teamwork to increase productivity; is able to promote cooperation between Software and Test/QA
Engineers; has the people skills needed to promote improvements in QA processes; has the ability to withstand pressures and say
*no* to other managers when quality is insufficient or QA processes are not being adhered to; is able to communicate with technical and
non-technical people; and is able to run meetings and keep them focused.
Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires
you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the
development cycle, if possible.
Q27. What if the project isn't big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again
needed and the considerations listed under "What if there isn't enough time for thorough testing?" do apply. The test engineer then
should do "ad hoc" testing, or write up a limited test plan based on the risk analysis.
Ensure the code is well commented and well documented; this makes changes easier for the developers.
Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize
changes.
In the project's initial schedule, allow some extra time commensurate with probable changes.
Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1'
version.
Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements
into future versions of the application.
Ensure customers and management understand scheduling impacts, inherent risks and costs of significant
requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's
their job.
Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with
changes.
Design some flexibility into automated test scripts (one data-driven approach is sketched after this list);
Focus initial automated testing on application aspects that are most likely to remain unchanged;
Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;
Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases,
or set up only higher-level generic-type test plans;
Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk
this entails.
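Following up on the flexibility item above, here is a minimal data-driven sketch in Python. The locator names, the test data, and the stand-in login driver are all hypothetical; a real script would call an actual GUI-automation tool instead. The point is that requirement or UI changes touch the data tables, not the script logic.

```python
# Data-driven test sketch: locators and test data live in external tables,
# so changes touch data, not code. All names here are hypothetical.
LOCATORS = {
    "username_field": "id=user",
    "password_field": "id=pass",
    "login_button": "id=login",
}

LOGIN_CASES = [  # could equally be loaded from a CSV or JSON file
    {"user": "alice", "password": "secret1", "expect_success": True},
    {"user": "alice", "password": "wrong", "expect_success": False},
]

def fake_login_driver(locators: dict, user: str, password: str) -> bool:
    """Stand-in for a tool-specific driver call; accepts one known password."""
    return password == "secret1"

def run_login_case(case: dict, do_login) -> bool:
    """Run one data-driven case and report whether the outcome matched."""
    result = do_login(LOCATORS, case["user"], case["password"])
    return result == case["expect_success"]

for case in LOGIN_CASES:
    status = "PASS" if run_login_case(case, fake_login_driver) else "FAIL"
    print(case["user"], case["password"], status)
```

Because the script logic never mentions concrete field IDs or credentials, a renamed button or a new test case changes only the tables at the top.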
Q29. What if the application has functionality that wasn't in the requirements?
A: It may take serious effort to determine if an application has significant unexpected or hidden functionality; this would indicate
deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be
removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If it is not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be
made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects minor
areas, such as small improvements in the user interface, it may not be a significant risk.
Q33. Why do you recommend that we test during the design phase?
A: Because testing during the design phase can prevent defects later on. We recommend verifying three things...
A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and
executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are
compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next
level.
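To make the baseline-comparison idea concrete, here is a minimal Python sketch; the test names and PASS/FAIL outcomes are hypothetical illustrations, not from any real project.

```python
# Minimal sketch of the baseline comparison behind regression testing.
def compare_to_baseline(baseline: dict, current: dict) -> list:
    """Return discrepancies between baseline results and current results."""
    discrepancies = []
    for test_name, expected in baseline.items():
        actual = current.get(test_name, "MISSING")
        if actual != expected:
            discrepancies.append(f"{test_name}: expected {expected}, got {actual}")
    return discrepancies

# Baseline captured from the last certified release (hypothetical).
baseline = {"test_login": "PASS", "test_report": "PASS", "test_export": "PASS"}
# Results from the software under test, after the new changes (hypothetical).
current = {"test_login": "PASS", "test_report": "FAIL"}

for issue in compare_to_baseline(baseline, current):
    print(issue)  # each discrepancy must be accounted for before the next level
```

Here the regression and the missing test would both be flagged, matching the rule above that every discrepancy is highlighted and accounted for before testing proceeds.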
This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in
the development and in ongoing maintenance of his customers' applications.
A description of the required hardware and software components, including test tools. This information comes from
the test environment, including test tool data.
A description of roles and responsibilities of the resources required for the test and schedule constraints. This
information comes from man-hours and schedules.
Testing methodology. This is based on known standards.
Functional and technical requirements of the application. This information comes from requirements, change requests,
and technical and functional design documents.
Requirements that the system cannot provide, e.g. system limitations.
An approved and signed off test strategy document, test plan, including test cases.
Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system
testing.
Test scenarios are executed through the use of test procedures or scripts.
Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
Test procedures or scripts include the specific data that will be used for testing the process or transaction.
Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope (a small sketch of such a matrix appears after this list).
Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used
to exercise system functionality in a controlled environment.
Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance
via regression testing.
A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test
readiness document is created to indicate the status of the entrance criteria of the release.
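To make the traceability-matrix idea referenced above concrete, here is a small Python sketch; the requirement and test-case IDs are invented for illustration.

```python
# A small sketch of a requirements-to-test traceability matrix.
REQUIREMENTS = {"REQ-1": "User can log in", "REQ-2": "User can reset password"}

TEST_SCRIPTS = {          # each test script lists the requirements it covers
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
    "TC-03": ["REQ-9"],   # traces to nothing in scope -- should be flagged
}

# Forward direction: every requirement should be covered by at least one test.
covered = {req for reqs in TEST_SCRIPTS.values() for req in reqs}
for req_id, text in REQUIREMENTS.items():
    status = "covered" if req_id in covered else "NOT COVERED"
    print(f"{req_id} ({text}): {status}")

# Backward direction: every test must trace to a requirement (scope check).
for tc, reqs in TEST_SCRIPTS.items():
    out_of_scope = [r for r in reqs if r not in REQUIREMENTS]
    if out_of_scope:
        print(f"{tc} is out of scope: {out_of_scope}")
```

The two loops capture the two uses of a traceability matrix: finding uncovered requirements, and finding tests that fall outside the agreed scope.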
Approved documents of test scenarios, test cases, test conditions and test data.
Reports of software design issues, given to software developers for correction.
The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to
determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the
software team lead, hardware test lead, programmers, and software engineers, and documented for further investigation and
resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The
severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in
their selected tracking tool.
Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested and
flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary
report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.
After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the
migration of the release software components to the next test level, as documented in the Configuration Management Plan.
The software is only migrated to the production environment after the Project Manager's formal acceptance.
The test team reviews test document problems identified during testing, and updates documents where appropriate.
Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
Test tools, including automated test tools, if applicable.
Developed scripts.
Changes to the design, i.e. Change Request Documents.
Test data.
Availability of the test team and project team.
General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
Test Readiness Document.
Document Updates.
Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off, together
with revised testing deliverables.
Changes to the code, also known as test fixes.
Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document
problems.
Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
Formal record of test incidents, usually part of problem tracking.
Baselined package, also known as tested source and object code, ready for migration to the next level.
Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
Alpha testing,
Beta testing, and
Mutation testing.
Q80. What is the difference between reliability testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load
testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally
stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is a gray
area between stress testing and load testing.
Q81. What is the difference between volume testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load
testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Volume testing,
more specifically, subjects the system to heavy volumes of data, while load testing refers to the level of concurrent demand placed on
the system; load testing generally stops short of stress testing, during which the load is so great that errors are the expected results.
to catch bugs quickly. Then (and this is called the second stage of alpha testing) the software is handed over to us, the software QA staff,
for additional testing in an environment that is similar to the intended use.
Black box testing considers neither the code itself nor the "inner workings" of the software.
Q102. What is the difference between a software fault and a software failure?
A: A software failure occurs when the software does not do what the user expects to see. A software fault, on the other hand, is a hidden
programming error. A software fault becomes a software failure only when the exact computation conditions are met and the faulty
portion of the code is executed on the CPU. This can occur during normal usage, when the software is ported to a different hardware
platform, when the software is ported to a different compiler, or when the software gets extended.
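A toy illustration (not from the original answer) of how a latent fault becomes a visible failure only when one exact condition is met:

```python
# A hidden fault that stays latent during normal usage.
def average_order_value(total: float, order_count: int) -> float:
    # FAULT: no guard against order_count == 0; latent until that path runs.
    return total / order_count

print(average_order_value(100.0, 4))  # normal usage: the fault stays hidden
# average_order_value(100.0, 0) would raise ZeroDivisionError: the fault
# surfaces as a failure only when the faulty code executes under the exact
# triggering condition.
```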
We, test engineers, find problems before users get discouraged, before shareholders lose their cool, and before employees get bogged down.
We help the work of the software development staff, so the development team can devote its time to building up the product. We also promote
continual improvement. We provide documentation required by the FDA, the FAA, other regulatory agencies, and your customers. We save your
company money by discovering defects EARLY in the design process, before failures occur in production, or in the field. We save the
reputation of your company by discovering bugs and design flaws before they damage that reputation.
Cyclomatic Complexity Metric (v(G)). Cyclomatic Complexity is a measure of the complexity of a module's decision structure.
It is the number of linearly independent paths and, therefore, the minimum number of paths that should be tested (a small worked
sketch appears after this list of metrics).
Actual Complexity Metric (AC). Actual Complexity is the number of independent paths traversed during testing.
Module Design Complexity Metric (iv(G)). Module Design Complexity is the complexity of the design-reduced module, and
reflects the complexity of the module's calling patterns to its immediate subordinate modules. This metric differentiates
between modules that seriously complicate the design of a program they are part of, and modules that simply contain complex
computational logic. It is the basis upon which program design and integration complexities (S0 and S1) are calculated.
Essential Complexity Metric (ev(G)). Essential Complexity is a measure of the degree to which a module contains unstructured
constructs. This metric measures the degree of structuredness and the quality of the code. This metric is used to predict the
required maintenance effort and to help in the modularization process.
Pathological Complexity Metric (pv(G)). Pathological Complexity Metric is a measure of the degree to which a module contains
extremely unstructured constructs.
Design Complexity Metric (S0). Design Complexity Metric measures the amount of interaction between modules in a system.
Integration Complexity Metric (S1). Integration Complexity Metric measures the amount of integration testing necessary to
guard against errors.
Object Integration Complexity Metric (OS1). Object Integration Complexity Metric quantifies the number of tests necessary to
fully integrate an object or class into an OO system.
Global Data Complexity Metric (gdv(G)). Global Data Complexity Metric quantifies the cyclomatic complexity of a module's
structure as it relates to global/parameter data. It can be no less than one and no more than the cyclomatic complexity of the
original flowgraph.
Data Complexity Metric (DV). Data Complexity Metric quantifies the complexity of a module's structure as it relates to
data-related variables. It is the number of independent paths through data logic and, therefore, a measure of the testing effort with
respect to data-related variables.
Tested Data Complexity Metric (TDV). Tested Data Complexity Metric quantifies the complexity of a module's structure as it
relates to data-related variables. It is the number of independent paths through data logic that have been tested.
Data Reference Metric (DR). Data Reference Metric measures references to data-related variables independently of control
flow. It is the total number of times that data-related variables are used in a module.
Tested Data Reference Metric (TDR). Tested Data Reference Metric is the total number of tested references to data-related
variables.
Maintenance Severity Metric (maint_severity). Maintenance Severity Metric measures how difficult it is to maintain a module.
Data Reference Severity Metric (DR_severity). Data Reference Severity Metric measures the level of data intensity within a
module. It is an indicator of high levels of data related code; therefore, a module is data intense if it contains a large number of
data-related variables.
Data Complexity Severity Metric (DV_severity). Data Complexity Severity Metric measures the level of data density within a
module. It is an indicator of high levels of data logic in test paths, therefore, a module is data dense if it contains data-related
variables in a large proportion of its structures.
Global Data Severity Metric (gdv_severity). Global Data Severity Metric measures the potential impact of testing data-related
basis paths across modules. It is based on global data test paths.
Percent Public Data (PCTPUB). PCTPUB is the percentage of public and protected data within a class.
Access to Public Data (PUBDATA). PUBDATA indicates the number of accesses to public and protected data.
Percent of Unoverloaded Calls (PCTCALL). PCTCALL is the number of non-overloaded calls in a system.
Number of Roots (ROOTCNT). ROOTCNT is the total number of class hierarchy roots within a program.
Fan-in (FANIN). FANIN is the number of classes from which a class is derived.
Maximum v(G) (MAXV). MAXV is the maximum cyclomatic complexity value for any single method within a class.
Maximum ev(G) (MAXEV). MAXEV is the maximum essential complexity value for any single method within a class.
Hierarchy Quality(QUAL). QUAL counts the number of classes within a system that are dependent upon their descendants.
Depth (DEPTH). Depth indicates at what level a class is located within its class hierarchy.
Lack of Cohesion of Methods (LOCM). LOCM is a measure of how the methods of a class interact with the data in a class.
Number of Children (NOC). NOC is the number of classes that are derived directly from a specified class.
Response For a Class (RFC). RFC is a count of methods implemented within a class plus the number of methods accessible to
an object of this class type due to inheritance.
Weighted Methods Per Class (WMC). WMC is a count of methods implemented within a class.
Program Length. Program length is the total number of operator occurrences plus the total number of operand occurrences.
Program Volume. Program volume is the minimum number of bits required for coding the program.
Program Level and Program Difficulty. Program level and program difficulty are measures of how easily a program is
comprehended.
Intelligent Content. Intelligent content shows the complexity of a given algorithm independent of the language used to express
the algorithm.
Programming Effort. Programming effort is the estimated mental effort required to develop a program.
Error Estimate. Error estimate approximates the number of errors in a program.
Programming Time. Programming time is the estimated amount of time to implement an algorithm.
Lines of Code
Lines of Comment
Lines of Mixed Code and Comments
Lines Left Blank
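To make two of the metrics above concrete, here is a small worked sketch of Cyclomatic Complexity and the Halstead length/volume measures. The input counts are invented, while the formulas used (v(G) = decision points + 1 for a single-entry, single-exit module; program length N = N1 + N2; program volume V = N * log2(n1 + n2)) are the standard definitions.

```python
# Worked sketch of cyclomatic complexity and Halstead length/volume.
import math

def cyclomatic_complexity(decision_points: int) -> int:
    # For a single-entry, single-exit module: v(G) = decision points + 1.
    return decision_points + 1

def halstead_length_and_volume(n1: int, n2: int, N1: int, N2: int):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    length = N1 + N2                          # Program Length, N = N1 + N2
    vocabulary = n1 + n2
    volume = length * math.log2(vocabulary)   # Program Volume, V = N * log2(n)
    return length, volume

# Example: a module with 3 if-statements and 1 loop has 4 decision points,
# so at least 5 independent paths should be tested.
print("v(G) =", cyclomatic_complexity(4))

# Hypothetical operator/operand counts for a tiny function.
N, V = halstead_length_and_volume(n1=5, n2=4, N1=12, N2=9)
print(f"Halstead length N = {N}, volume V = {V:.1f} bits")
```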
Q120. To learn to use WinRunner, should I sign up for a course at a nearby educational institution?
A: The cheapest, or free, education is sometimes provided on the job, by an employer, while one is getting paid to do a job that requires
the use of WinRunner and many other software testing tools. In lieu of a job, it is often a good idea to sign up for courses at nearby
educational institutions. Classroom education, especially non-degree courses in local, community colleges, tends to be cheap.
Q121. I don't have a lot of money. How can I become a good tester with little or no cost to me?
A: The cheapest, or free, education is sometimes provided on the job, by an employer, while one is getting paid to do a job that requires
the use of WinRunner and many other software testing tools.
Q127. Which of these roles are the best and most popular?
A: As a yardstick of popularity, if we count the number of applicants and resumes, Tester roles tend to be the most popular. Less popular
roles are roles of System Administrators, Test/QA Team Leads, and Test/QA Managers. The "best" job is the job that makes YOU happy.
The best job is the one that works for YOU, using the skills, resources, and talents YOU have. To find the best job, you need to
experiment, and "play" different roles. Persistence, combined with experimentation, will lead to success.
A: Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and
specifications. Validation, on the other hand, evaluates the product itself. The inputs to verification are checklists, issues lists,
walkthroughs, inspections, reviews, and meetings. The input to validation, on the other hand, is the actual testing of an actual
product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements document. The output of
validation, on the other hand, is a nearly perfect, actual product.
Q138. What is the difference between user documentation and user manual?
A: When a distinction is made between those who operate a computer system and those who use it for its intended purpose, separate user
documentation and user manuals are created. Operators get user documentation, and users get user manuals.
Q164. Can you give me more information on software QA/testing, from a tester's point of view?
A: Yes, I can. You can visit my web site, and on pages www.robdavispe.com/free and www.robdavispe.com/free2 you can find answers
to many questions on software QA, documentation, and software testing, from a tester's point of view. As to questions and answers that
are not on my web site now, please be patient, as I am going to add more answers, as soon as time permits.
Q165. What is the difference between system testing and integration testing?
A: System testing is high-level testing; integration testing is lower-level testing. Integration testing is completed first: upon its
completion, system testing starts, and not vice versa. For integration testing, test cases are developed with the express purpose of
exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a
controlled environment, and test cases are developed to simulate real-life scenarios in a simulated real-life test environment. The
purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.
The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions
as designed, and to test all functions of the system that are required in real life. A toy contrast of the two levels is sketched below.
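The components below are invented purely for illustration: the first test exercises only the interface between two components, while the second walks a complete real-life-style scenario through the assembled pieces.

```python
# Toy contrast between an integration-level and a system-level check.
def price_service(item: str) -> float:
    """Component A: a hypothetical price lookup."""
    return {"apple": 1.0, "bread": 2.5}[item]

def cart_total(items: list, get_price) -> float:
    """Component B: a cart that depends on a price provider."""
    return sum(get_price(i) for i in items)

def test_integration_cart_uses_price_service():
    # Integration level: verify the interface between cart and price service.
    assert cart_total(["apple", "bread"], price_service) == 3.5

def test_system_checkout_scenario():
    # System level: simulate a real-life scenario across the assembled whole.
    items = ["apple", "apple", "bread"]
    assert cart_total(items, price_service) == 4.5

test_integration_cart_uses_price_service()
test_system_checkout_scenario()
print("integration and system checks passed")
```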
Q175. What types of white box testing can you tell me about?
A: White box testing is a testing approach that examines the application's program structure and derives test cases from the
application's program logic. Clear box testing, glass box testing, and open box testing are all white box types of testing.
Q176. What types of black box testing can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal software design or code; it is based on
requirements and functionality, geared to the functional requirements of an application. Functional testing, system testing,
acceptance testing, closed box testing, and integration testing are all black box types of testing.
Q183. What is the difference between a software bug and software defect?
A: A 'software bug' is a *nonspecific* term that means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behavior of a
computer program. Other terms, e.g. 'software defect' and 'software failure', are *more specific*. While the term 'bug' has been a part of
engineering jargon for many decades, many believe the term was named after insects that used to cause
malfunctions in electromechanical computers.
Internet, and whatever information you can lay your hands on. Two, get hands-on experience with automated testing tools. If
there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use WinRunner, and many other automated
testing tools, with little or no outside help.
________________________________________________________
1. About Ravi
Software QA;
Software Testing;
Software Verification;
Software Validation; and
Software Documentation.
________________________________________________________
2. Qualifications
3. Experience
4. Services
Rob can...
Verify software, i.e. ensure the product is designed to deliver all required functionality to the patient/customer (see
resume);
Analyze software performance (see resume);
Work with embedded, real-time software in C, C++, or Ada (see resume);
Conduct software testing (see resume);
Find lots of software issues (bugs, defects, MOLs, or problems) (see resume);
Document software issues (bugs, defects, MOLs, or problems) (see resume);
Verify software performance (see resume);
Work with software developers (see resume);
Inspect software and do software QA "walk-throughs";
Perform black box testing, white box testing, unit testing, incremental integration testing, integration testing,
functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load
testing, performance testing, usability testing, install/un-install testing, recovery testing, security testing, compatibility
testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing,
or mutation testing (see resume);
Evaluate and report test results (see resume);
Track QA issues and fixes (see resume);
Re-test the software after one or more issues have been fixed (see resume);
Maintain and update the software QA test environment, all software QA test plans, software QA test cases and
software QA test procedures (see resume);
Do a good job and do it cheerfully, even when sometimes A) the code is poorly documented, B) schedule is
unrealistic, C) QA testing has been inadequate, D) too many new features have been added after development is
underway, E) software structure is not clear, not understandable, not easily modifiable or maintainable, F) project has
unclear, incomplete, too general, non-testable, or poorly documented requirement specifications, or G) requirements
are changing continuously (see resume);
Start the verification/testing process EARLY, even when software requirements are not ready and the software is
not ready. Verification/testing takes time, and if all verification/testing activities start at the end of a project, there will
be no time left to do them properly. Writing software requirements takes time, too, and if developers have no software
requirements until the end of a project, they cannot do their jobs properly. It is possible to start earlier; a good engineer
starts early. For example, based on user requirements, at the beginning of the software development cycle, Rob can
immediately derive software requirements. He can then use these software requirements to verify whether or not the
user requirements contain any errors -- mostly logical errors like gaps in the user requirements, wrong assumptions,
etc. Developers can do a perfect job of writing code, but if the software requirements they receive contain errors, all
their work can become useless;
Work in all phases of software development life cycle (see resume);
Work in formalized or ad-hoc QA process environments (see resume); and
Make things happen and get things done (see resume).
________________________________________________________
5. FAQ
Do you have any questions?
6. Resume
Do you need a resume?
Netscape users, right click here, select "save link target as..." and hit "save" and then locate Rob's Word format
resume on your C drive.
Regardless of your choice of web browser, you can double-click here, view the resume, select "file", choose "save
as...", hit "save" and then locate Rob's Word format resume on your C drive.
If a dialog box pops up and asks you for a user name and password, hit "cancel" on the box. The box will close and
you will be able to view the resume.
Click here to view additional resumes in Word, HTML, Rich Text and Text formats.
________________________________________________________