Software Security Vulnerability Testing in Hostile Environments


Herbert H. Thompson
Florida Institute of Technology, Department of Computer Sciences, 150 W. University Blvd., Melbourne, Florida 32901, U.S.A. +1 321 795 4531
hethomps@fit.edu

James A. Whittaker
Florida Institute of Technology, Department of Computer Sciences, 150 W. University Blvd., Melbourne, Florida 32901, U.S.A. +1 321 674 7638
jw@cs.fit.edu

Florence E. Mottay
Florida Institute of Technology, Department of Computer Sciences, 150 W. University Blvd., Melbourne, Florida 32901, U.S.A. +1 321 394 2159
fmottay@se.fit.edu

ABSTRACT
Traditional black box software testing can be effective at exposing some classes of software failures. Security failures, however, do not tend to manifest readily under these techniques. The problem is that many security failures occur in stressed environments, which appear in the field but are often neglected during testing because of the difficulty of simulating these conditions. Software can only be considered secure if it behaves securely under all operating environments. Hostile environment testing must therefore be part of any overall testing strategy. This paper describes this necessity and a black box approach for creating such environments in order to expose security vulnerabilities.

Keywords
software failure, software defect, software testing, software security, fault injection.

1. INTRODUCTION
When building secure software systems, functionality and security are often in contention as development goals. Increased functionality leads to decreased security and, conversely, to achieve true security the software would have to be heavily restricted in its interactions with its environment. Such extreme measures would render the system unusable, yet mission critical systems must be integrated into complex networks and be accessible to many. Such systems typically undergo rigorous testing, but, with limited resources and the ubiquitous force of market pressure, only a small subset of the infinite test cases possible can be executed. Thus, a significant portion of the code is left under-exercised or not traversed at all. Arguably the most neglected code paths during the testing process are error-handling routines. Tests that invoke these routines, such as disk errors and network problems, are often only superficially explored. This is particularly true of web-enabled applications, where the application's connection to the Internet is assumed to be stable and reliable during the execution of most test cases. Such applications are especially at risk of neglect because their rapid development cycles leave little time for testing on the whole. The problem is that extreme conditions that are possible in the real world, such as network failures during a remote transaction, disk write errors or memory failures, are sometimes disregarded during the testing phase due to the difficulty of simulating a hostile environment. It is during these periods of stress that the software is most vulnerable and where carefully conceived security measures break down. If such situations are ignored, and other test cases pass, what we are left with is a dangerous illusion of security, and of quality in general. What is needed, then, is to integrate such failures into our test cases and to become aware of their impact on the security and integrity of the product itself and the user's data.

In this paper we first show how security procedures break down in hostile environments. We then go on to show measures we have taken to simulate such failures by intercepting and controlling responses to system calls made by the application. This provides a much needed, easy to implement method to create a turbulent environment while executing selected tests, which can reveal potentially severe security bugs that would otherwise escape testing and surface in the field.

2. THE IMPACT OF STRESS
The definition of a security defect varies from application to application. In a very general sense, however, we can consider a security defect as any attribute of the software that violates policies regarding access to resources [4]. Specifically, this may be a defect in the application that causes sensitive information to be written out to an unencrypted file or allows an attacker to deny access to a web server by authorized users. These defects come in many flavors. Some security vulnerabilities can be uncovered using conventional testing techniques, whose focus is on uncovering traditional, non-security class application defects. Unfortunately, there are many security defects that are very difficult to find using these methods.

Indeed, there are many security class failures that only surface when the application under test is placed in a stressed environment. This stress can have many sources: load, memory or resource deprivation, application interoperability failures, or faults in the system's environment. Such conditions can cause a cascade of failures in the software. This paper is primarily concerned with placing the application under stress by simulating a faulty environment, which we refer to here as fault injection. Popular literature has many conflicting definitions of this term. It is used in some instances to refer to the purposeful insertion of coding errors into the application to observe how application stability is affected and to expose similar defects [6]. Others use it to describe environmental failure simulation, which is how we use it exclusively throughout this paper.

When extraordinary conditions occur due to stress, error-handling routines, if present, are executed. These are pathways through the application that do not add to its functionality. Instead, their role is to keep the functional code from failing. These error-handling routines, however, are notoriously subjected to far less testing than the functional code they are created to protect [2]. To compound the problem, many failure situations are not conceived at design time, and thus error handlers are added as afterthoughts, as situations are encountered in the test lab. With such limited exposure to testing, these code paths are fertile breeding grounds for many types of defects, but especially security defects. Why security defects? The primary motivation for writing error-handling code is to save the application from failing catastrophically. Thus the concerns here are data corruption, system stability and the overall integrity of the application and its data. Some security failures have much more subtle symptoms, such as dumping secure data to a text file due to an unavailable network connection, which are often overlooked in these circumstances but create gaping security holes.

Software can only be considered secure if it operates securely in all reasonable operating environments. Thus, to have any realistic picture of the application's security vulnerabilities, it must be exposed to environmental failures. Servers do run out of disk space, network connectivity is sometimes intermittent, and file permissions can be improperly set. Such conditions cannot be ignored as part of an overall testing strategy. There are three basic approaches to simulating environmental failures in a lab situation:

1. Code-Based Fault Injection - Using this method, an application under test is stressed by modifying its functional code so that the return values of external function calls are hard coded to exercise specific code paths. This technique has some major drawbacks that will be discussed in section 5 of this paper.

2. External Stress Simulation - This typically involves simulating a high volume of activity on the system. It is usually accomplished by using an external application that does not interact directly with the application under test, or by limiting disk or memory resources with large files, background processes, etc. There are many commercially available tools for such tasks and this method will not be discussed further in this paper.

3. Runtime Fault Injection - Applications access external resources through function calls to the operating system, such as requesting memory, disk space, etc. This method involves getting between the application and the OS and intercepting these calls. Using this method we can control responses from these calls and selectively deny resources such as memory or disk space. This is the main focus of our paper and will be discussed in detail in section 6.

Although this paper is largely concerned with using runtime fault injection as the primary means to introduce environmental failures into the system, it is important to note that the other two also play a role in the security testing process. For example, code-based fault injection is often necessary to simulate failures within the application, specifically to create faulty return values for calls made to functions within the same module.

3. SYMPTOMS OF VULNERABILITIES
To effectively find security defects we first need to know what to look for. Symptoms are seldom apparent; they result from the application interacting with its environment: the file system, registry, network and other applications [7]. Specifically, reading and writing files, writing to the registry, sending information across the network, or interacting with components or other applications that have this ability can all be potential points of security failure. These transactions happen behind the scenes, hidden away from our view of the application. As a result, insecure side effects can occur during test execution that standard testing techniques are ill equipped to detect.

Consider the example presented in [5], which describes an online music vendor that sells encrypted music over the Internet. These music files are cryptographically bound to the purchaser's player such that the purchaser is the only person permitted to play a purchased file, the goal being to make the files difficult to distribute. For performance reasons, developers chose to store the current file being played in a temporary location, unencrypted. This seemed perfectly reasonable to the team working on the player, given the performance concerns of on-the-fly decryption during playback. However, an astute hacker can monitor the software for file writes by using simple application spying techniques described in [3] or by using a disassembler. The unencrypted music file can then be captured and distributed freely. In this application, tests of the playback feature would have been run repeatedly. Audio output would have been scrutinized for quality, skips, volume, etc. During the execution of each test case the software would have exhibited the same insecure behavior of creating an unencrypted temporary file. Using conventional testing techniques, the chances of this being noticed are slim, and, even if it were observed during testing, it is unlikely that this behavior would have been classified as a defect.

Whether or not the application is placed under stress, any communication with the file system, the registry, the network, an output device or any component external to the software must be scrutinized for security-related side effects. Traditional testing techniques are not well suited to finding security defects because their goal is the verification of the specification. They are good at finding defects such as: action A was supposed to display text B on the screen but it displayed text C instead. They are not good, however, at exposing bugs such as: action A was supposed to display text B on the screen, and did, but it also wrote text B out to a file. Such a bug could easily escape testing, but if text B were a password and the machine were accessible to other users, the supposedly secure data would be compromised, creating a severe security hole. In the next section we outline an approach for monitoring an application's external behavior for these symptoms of security breaches.
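Before turning to that approach, the following minimal sketch (an illustration added here, not a technique described in this paper) shows one crude way a tester could notice such a file-system side effect: snapshot a directory before and after the test step and flag anything new. The temporary-directory path and the use of the Win32 FindFirstFile API are illustrative assumptions.

    #include <windows.h>
    #include <stdio.h>

    /* Count the plain files matching a wildcard pattern, e.g. L"C:\\Temp\\*". */
    static int count_files(const wchar_t *pattern)
    {
        WIN32_FIND_DATAW fd;
        int count = 0;
        HANDLE h = FindFirstFileW(pattern, &fd);
        if (h == INVALID_HANDLE_VALUE)
            return 0;
        do {
            if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY))
                count++;
        } while (FindNextFileW(h, &fd));
        FindClose(h);
        return count;
    }

    int main(void)
    {
        int before = count_files(L"C:\\Temp\\*");   /* assumed temp directory */
        /* ... drive the feature under test here, e.g. play a protected file ... */
        int after = count_files(L"C:\\Temp\\*");
        if (after != before)
            printf("Possible side effect: %d new file(s) in C:\\Temp\n",
                   after - before);
        return 0;
    }

A check of this kind would have flagged the unencrypted temporary file in the music player example above, even though every functional assertion about playback passed.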

4. MONITORING FOR SYMPTOMS
Where security is concerned, an application's interaction with its environment is its most critical behavior. These external activities can be observed by monitoring the system calls the application makes and its other interactions with local resources. Significant work has been done in actually restricting these calls at runtime to create a "sandbox"-like environment (see, e.g., [1]). In most cases, however, the functionality that is traded for this increased security is too steep a price to pay. Monitoring and manipulating transactions with the environment during testing, though, can provide considerable insight into the application's interaction with system resources and, hence, into its security vulnerabilities. Every time an application has to write to disk, read from the registry or perform any system-related task, it must execute a system call. We can get between the application and its environment, monitor the system calls, and bring these hidden actions to the foreground. Below we explore the application of these techniques.

Consider the "content advisor" feature in Microsoft's Internet Explorer. It allows a user to control the type of sites that others who use their machine have access to on the Internet by password protecting undesired sites, categories of sites or unrated sites, as shown in Figure 1. The security risk here is allowing access to a prohibited site.

Figure 1: Security in terms of access restriction implemented in Internet Explorer.

The problem with testing this feature is that its implementation is hidden from the tester. In order to determine if the implementation may have associated security risks, we need a tool to provide system-level design details.

Figure 2: Holodeck exposes the external resource responsible for providing site ratings; this is an invitation to security testers to explore this resource for security vulnerabilities.

In Figure 2 we show an application called Holodeck, developed at Florida Tech, which intercepts system calls and allows us to view the interaction between an application and its environment. In this figure we can see that Internet Explorer makes a call to MSRATING.DLL whenever we navigate to a new web site. Once we notice that there is an external dependency that determines whether or not to grant access to a site, as security testers this signals a potential point of failure that should be investigated. The information provided by Holodeck exposes the interaction between the application under test and local resources. The idea is for testers to carefully examine all such interactions and think carefully about the security implications of each file, resource, and registry entry that the application accesses. Are these resources storing sensitive information? Are they accessible outside the application under test? If they are tampered with, how will the application under test respond? We explore these questions in more detail in section 6.
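As a rough illustration of the kind of interception a tool like Holodeck performs, the sketch below hooks a single Win32 call from an injected DLL and logs every file the application touches. It assumes the Microsoft Detours hooking library purely for brevity; it is an added illustration, not Holodeck's implementation.

    #include <windows.h>
    #include <detours.h>   /* assumption: Microsoft Detours is available */

    /* Pointer to the real CreateFileW; Detours redirects calls through our hook. */
    static HANDLE (WINAPI *Real_CreateFileW)(LPCWSTR, DWORD, DWORD,
        LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

    /* Hook: surface the hidden file-system interaction, then pass the call through. */
    static HANDLE WINAPI Hooked_CreateFileW(LPCWSTR name, DWORD access, DWORD share,
        LPSECURITY_ATTRIBUTES sa, DWORD disposition, DWORD flags, HANDLE tmpl)
    {
        OutputDebugStringW(name);   /* log the file the application is touching */
        return Real_CreateFileW(name, access, share, sa, disposition, flags, tmpl);
    }

    BOOL WINAPI DllMain(HINSTANCE instance, DWORD reason, LPVOID reserved)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach((PVOID *)&Real_CreateFileW, (PVOID)Hooked_CreateFileW);
            DetourTransactionCommit();
        } else if (reason == DLL_PROCESS_DETACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourDetach((PVOID *)&Real_CreateFileW, (PVOID)Hooked_CreateFileW);
            DetourTransactionCommit();
        }
        return TRUE;
    }

Once such a DLL is loaded into the process under test, every file the application opens during a test run becomes visible to the tester, which is the essential capability the Holodeck display illustrates.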

5. THE CLASSIC APPROACH: CODE-BASED FAULT INJECTION
The traditional method for forcing error conditions and executing error-handling code is white box in nature. It involves modifying the source and hard coding return values to force the application to traverse a particular code path. Consider the following example, in which a system call is made on the Microsoft Windows platform, perhaps similar to the call made by Internet Explorer in the example above:

    hModule = LoadLibraryEx(TEXT("msrating.dll"), NULL,
                            LOAD_LIBRARY_AS_DATAFILE);
    // Hard-coded failure of LoadLibraryEx
    hModule = NULL;

Here, we simulate a failure by hard coding the return value NULL for the function LoadLibraryEx. By using this approach we can force the code to branch to a path that would be taken if such a failure were to actually occur. This approach, while effective at forcing a particular code path to execute, has several major problems associated with it. Testers are given a particular build of the software and often do not have access to the underlying source code. Even when access is available, testers typically lack the expertise or knowledge of the code's design and structure to effectively implement such branch testing. Another difficulty is the time-consuming nature of implementing these situations at the code level [2]. Multiple builds, each with different failure conditions forced, must be created and updated as the functional code is modified. This can become an expensive process because it is often necessary to create custom builds to execute very few test cases.
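For illustration only (this fragment is added here and is not from the paper; the function name and build flag are hypothetical), such forced failures typically end up guarded by build-time switches, which is exactly why each failure scenario tends to require its own build:

    #include <windows.h>

    /* Illustrative only: the forced failure is compiled in or out, so each
     * failure scenario requires a separate build of the application. */
    static HMODULE LoadRatingsLibrary(void)
    {
    #ifdef SIMULATE_MSRATING_FAILURE
        return NULL;   /* hard-coded failure of LoadLibraryEx */
    #else
        return LoadLibraryEx(TEXT("msrating.dll"), NULL, LOAD_LIBRARY_AS_DATAFILE);
    #endif
    }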

6. AN IMPROVED METHOD: RUNTIME FAULT INJECTION
We often assume that if a given behavior has been verified in one particular environment, those results hold for all. Extreme conditions that are possible in the real world are usually neglected during the testing phase because of the high cost of simulating these failures in the test lab using code-based techniques. Traditional fault injection approaches are white box in nature. As described above, return values are hard-coded and the system's behavior is observed. What we propose is a black box approach to fault injection. We achieve this by monitoring the system calls made by the application to the operating system and controlling the return values from these calls. For example, we can selectively deny write access to the hard drive or simulate a strained or intermittent network connection by controlling responses from system calls. By inserting faults at runtime we place the software in a realistically hostile environment. Faults can be simulated without modifying the code of the application under test. By taking this approach we are able to isolate single system calls and precisely control their responses.

Consider the code-based example presented in section 5 above. There, a white box approach is taken to force the failure of a system call to the Windows function LoadLibraryEx. We can create the same result in a non-intrusive manner by failing the call at runtime. Let's see what happens to Internet Explorer when we use Holodeck to alter the return values from the call made to MSRATING.DLL at runtime. We purposely fail the call to the LoadLibraryExW function by intercepting it and returning NULL back to Internet Explorer, as shown in Figure 3 below. This simulates a failure in loading the library, which could occur for a variety of reasons on a user's machine, perhaps the most common of which is simple deletion of the file MSRATING.DLL.

Figure 3: Failing the call to LoadLibraryExW by returning NULL.

In this environment we then navigate to a blocked web site. Internet Explorer opens the page without prompting for a password. Figure 4 shows that if we then go into Internet Options we are unable to alter the content settings and the relevant buttons are not selectable; the security measures have been bypassed and the user is now free to explore the Internet unrestricted.

Using similar methods, we can simulate many different error responses to create various failure scenarios. With this approach, we essentially isolate the application and selectively control any communications between it and the operating system or any external entity.

Figure 4: Blocking the return value of MSRATING.DLL completely subverts the content rating feature of Internet Explorer.

This real-world example shows that, given access to information about the application's behavior, testers can be more aggressive in finding security defects.
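For readers who want a feel for how such runtime interception could be implemented outside of Holodeck, the following is a minimal sketch, again assuming the Microsoft Detours hooking library (an added illustration, not the tool used in this paper). It fails only the targeted LoadLibraryExW call and passes every other call through; the attach/detach boilerplate is the same as in the monitoring sketch in section 4.

    #include <windows.h>
    #include <wchar.h>
    #include <detours.h>   /* assumption: Microsoft Detours is available */

    static HMODULE (WINAPI *Real_LoadLibraryExW)(LPCWSTR, HANDLE, DWORD) = LoadLibraryExW;

    /* Simulate a missing or corrupt ratings library: fail only the call we are
     * targeting and leave everything else untouched. */
    static HMODULE WINAPI Hooked_LoadLibraryExW(LPCWSTR name, HANDLE file, DWORD flags)
    {
        if (name != NULL && wcsstr(name, L"msrating") != NULL) {  /* simplified, case-sensitive match */
            SetLastError(ERROR_MOD_NOT_FOUND);
            return NULL;                /* the fault injected at runtime */
        }
        return Real_LoadLibraryExW(name, file, flags);
    }

    /* Attaching the hook, e.g. from DllMain of an injected DLL:
     *   DetourTransactionBegin();
     *   DetourUpdateThread(GetCurrentThread());
     *   DetourAttach((PVOID *)&Real_LoadLibraryExW, (PVOID)Hooked_LoadLibraryExW);
     *   DetourTransactionCommit();
     */

No source code or rebuild of the application is required; the fault is injected into the running process, which is the black box property the approach relies on.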

7. CONCLUSIONS
Error-handling routines are subjected to much less testing than the functional code they are designed to protect. When testing for security vulnerabilities, however, leaving any portion of the code untested is dangerous. This is especially true of error handlers, because they are usually created to patch a failure and are thus not constructed with security in mind. Therefore, executing test cases in faulty environments is a necessary part of any sound security testing effort. It is in these situations that the software is at its most vulnerable and security failures are exposed. In this paper, we demonstrated why this is the case and introduced a method to create faulty environmental conditions at runtime.

8. ACKNOWLEDGMENTS
This work was supported in part by separate grants from the Army Research Lab, Microsoft Research and Rational Software.

9. REFERENCES
[1] Bowden, T. and Segal, M., "Remediation of Application-Specific Security Vulnerabilities at Runtime", IEEE Software, Vol. 17, No. 5, pp. 59-67, September/October 2000.
[2] Houlihan, P., "Targeted software fault insertion," Proceedings of STAR EAST 2001 (Software Testing Analysis and Review), Software Quality Engineering, Inc., Orlando, FL, 2001.
[3] Richter, J., Programming Applications for Microsoft Windows, Microsoft Press, 1997.
[4] Viega, J. and McGraw, G., Building Secure Software, Addison-Wesley, 2001.
[5] Viega, J., Kohno, T. and Potter, B., "Trust (and Mistrust) in Secure Applications", Communications of the ACM, Vol. 44, No. 2, pp. 31-36, February 2001.
[6] Voas, J. and McGraw, G., Software Fault Injection: Inoculating Programs Against Errors, Wiley, NY, 1998.
[7] Whittaker, J., "Software's Invisible Users," IEEE Software, Vol. 18, No. 3, pp. 84-88, 2001.

About the Authors:
Herbert H. Thompson is a Ph.D. student in the department of mathematical sciences at the Florida Institute of Technology, Melbourne. He works as a research assistant in the Center for Software Engineering Research. His research interests include model-based software testing, computer security and the application of mathematics to computer science problems.
James A. Whittaker is an associate professor of computer science at the Florida Institute of Technology. He founded the Center for Software Engineering Research with grants from a number of industry and government funding agencies. His research interests are software development and testing, information assurance and anti-cyber warfare.
Florence E. Mottay is a graduate student in software engineering and a research assistant at the Center for Software Engineering Research, Florida Institute of Technology, Melbourne. Her research interests are in software testing, formal languages, mathematical models, and e-commerce.
